Jan 29 12:22:21.997035 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Jan 29 12:22:21.997049 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 12:22:21.997055 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:22:21.997061 kernel: BIOS-provided physical RAM map:
Jan 29 12:22:21.997065 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 29 12:22:21.997069 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 29 12:22:21.997073 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 29 12:22:21.997078 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 29 12:22:21.997082 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 29 12:22:21.997086 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Jan 29 12:22:21.997090 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Jan 29 12:22:21.997095 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Jan 29 12:22:21.997099 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Jan 29 12:22:21.997103 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 29 12:22:21.997108 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 29 12:22:21.997113 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 29 12:22:21.997118 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 29 12:22:21.997123 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 29 12:22:21.997128 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 29 12:22:21.997132 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 12:22:21.997136 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 29 12:22:21.997141 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 29 12:22:21.997145 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 29 12:22:21.997150 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 29 12:22:21.997154 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 29 12:22:21.997159 kernel: NX (Execute Disable) protection: active
Jan 29 12:22:21.997163 kernel: APIC: Static calls initialized
Jan 29 12:22:21.997168 kernel: SMBIOS 3.2.1 present.
Jan 29 12:22:21.997174 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jan 29 12:22:21.997178 kernel: tsc: Detected 3400.000 MHz processor
Jan 29 12:22:21.997183 kernel: tsc: Detected 3399.906 MHz TSC
Jan 29 12:22:21.997188 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:22:21.997193 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:22:21.997198 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 29 12:22:21.997203 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 29 12:22:21.997207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:22:21.997212 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 29 12:22:21.997217 kernel: Using GB pages for direct mapping
Jan 29 12:22:21.997222 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:22:21.997227 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 29 12:22:21.997234 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 29 12:22:21.997239 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 29 12:22:21.997244 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 29 12:22:21.997249 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 29 12:22:21.997255 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 29 12:22:21.997260 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 29 12:22:21.997265 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 29 12:22:21.997270 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 29 12:22:21.997275 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 29 12:22:21.997280 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 29 12:22:21.997285 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 29 12:22:21.997291 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 29 12:22:21.997296 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997301 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 29 12:22:21.997306 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 29 12:22:21.997311 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997316 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997321 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 29 12:22:21.997326 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 29 12:22:21.997331 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997336 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997341 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 29 12:22:21.997346 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 29 12:22:21.997351 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 29 12:22:21.997356 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 29 12:22:21.997362 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 29 12:22:21.997366 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 29 12:22:21.997371 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 29 12:22:21.997377 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 29 12:22:21.997382 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 29 12:22:21.997387 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 29 12:22:21.997392 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 29 12:22:21.997397 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 29 12:22:21.997402 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 29 12:22:21.997407 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 29 12:22:21.997412 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 29 12:22:21.997417 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 29 12:22:21.997423 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 29 12:22:21.997428 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 29 12:22:21.997433 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 29 12:22:21.997438 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 29 12:22:21.997443 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 29 12:22:21.997448 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 29 12:22:21.997453 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 29 12:22:21.997458 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 29 12:22:21.997462 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 29 12:22:21.997468 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 29 12:22:21.997473 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 29 12:22:21.997478 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 29 12:22:21.997483 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 29 12:22:21.997488 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 29 12:22:21.997493 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 29 12:22:21.997498 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 29 12:22:21.997503 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 29 12:22:21.997508 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 29 12:22:21.997513 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 29 12:22:21.997518 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 29 12:22:21.997523 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 29 12:22:21.997528 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 29 12:22:21.997533 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 29 12:22:21.997542 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 29 12:22:21.997547 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 29 12:22:21.997552 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 29 12:22:21.997557 kernel: No NUMA configuration found
Jan 29 12:22:21.997562 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 29 12:22:21.997568 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 29 12:22:21.997573 kernel: Zone ranges:
Jan 29 12:22:21.997578 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:22:21.997583 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 12:22:21.997588 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 29 12:22:21.997593 kernel: Movable zone start for each node
Jan 29 12:22:21.997598 kernel: Early memory node ranges
Jan 29 12:22:21.997603 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 29 12:22:21.997608 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 29 12:22:21.997614 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff]
Jan 29 12:22:21.997619 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff]
Jan 29 12:22:21.997624 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 29 12:22:21.997629 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 29 12:22:21.997638 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 29 12:22:21.997644 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 29 12:22:21.997649 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:22:21.997655 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 29 12:22:21.997661 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 29 12:22:21.997666 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 29 12:22:21.997672 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 29 12:22:21.997677 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 29 12:22:21.997683 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 29 12:22:21.997688 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 29 12:22:21.997693 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 29 12:22:21.997699 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 29 12:22:21.997704 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 29 12:22:21.997710 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 29 12:22:21.997716 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 29 12:22:21.997721 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 29 12:22:21.997726 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 29 12:22:21.997731 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 29 12:22:21.997737 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 29 12:22:21.997742 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 29 12:22:21.997747 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 29 12:22:21.997752 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 29 12:22:21.997758 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 29 12:22:21.997764 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 29 12:22:21.997769 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 29 12:22:21.997774 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 29 12:22:21.997779 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 29 12:22:21.997785 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 29 12:22:21.997790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 12:22:21.997795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:22:21.997801 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:22:21.997806 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 12:22:21.997812 kernel: TSC deadline timer available
Jan 29 12:22:21.997818 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 29 12:22:21.997823 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 29 12:22:21.997829 kernel: Booting paravirtualized kernel on bare hardware
Jan 29 12:22:21.997834 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:22:21.997840 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 12:22:21.997845 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 12:22:21.997850 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 12:22:21.997856 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 12:22:21.997862 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:22:21.997868 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:22:21.997873 kernel: random: crng init done
Jan 29 12:22:21.997879 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 29 12:22:21.997884 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 29 12:22:21.997889 kernel: Fallback order for Node 0: 0
Jan 29 12:22:21.997895 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 29 12:22:21.997901 kernel: Policy zone: Normal
Jan 29 12:22:21.997906 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:22:21.997912 kernel: software IO TLB: area num 16.
Jan 29 12:22:21.997917 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 732420K reserved, 0K cma-reserved)
Jan 29 12:22:21.997923 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 12:22:21.997928 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 12:22:21.997933 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:22:21.997939 kernel: Dynamic Preempt: voluntary
Jan 29 12:22:21.997944 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:22:21.997951 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:22:21.997956 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 12:22:21.997962 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:22:21.997967 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:22:21.997972 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:22:21.997978 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:22:21.997983 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 12:22:21.997988 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 29 12:22:21.997994 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:22:21.997999 kernel: Console: colour dummy device 80x25
Jan 29 12:22:21.998005 kernel: printk: console [tty0] enabled
Jan 29 12:22:21.998011 kernel: printk: console [ttyS1] enabled
Jan 29 12:22:21.998016 kernel: ACPI: Core revision 20230628
Jan 29 12:22:21.998021 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 29 12:22:21.998027 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:22:21.998032 kernel: DMAR: Host address width 39
Jan 29 12:22:21.998037 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 29 12:22:21.998043 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 29 12:22:21.998048 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 29 12:22:21.998054 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 29 12:22:21.998060 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 29 12:22:21.998065 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 29 12:22:21.998070 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 29 12:22:21.998076 kernel: x2apic enabled
Jan 29 12:22:21.998081 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 29 12:22:21.998087 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 29 12:22:21.998092 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 29 12:22:21.998097 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 29 12:22:21.998104 kernel: process: using mwait in idle threads
Jan 29 12:22:21.998109 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 12:22:21.998114 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 12:22:21.998119 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 12:22:21.998125 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 29 12:22:21.998130 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 29 12:22:21.998135 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 29 12:22:21.998141 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 12:22:21.998146 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 29 12:22:21.998151 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 29 12:22:21.998156 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 12:22:21.998163 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 12:22:21.998168 kernel: TAA: Mitigation: TSX disabled
Jan 29 12:22:21.998174 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 29 12:22:21.998179 kernel: SRBDS: Mitigation: Microcode
Jan 29 12:22:21.998184 kernel: GDS: Mitigation: Microcode
Jan 29 12:22:21.998189 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 12:22:21.998195 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 12:22:21.998200 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 12:22:21.998205 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 29 12:22:21.998211 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 29 12:22:21.998216 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 12:22:21.998222 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 29 12:22:21.998227 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 29 12:22:21.998233 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 29 12:22:21.998238 kernel: Freeing SMP alternatives memory: 32K
Jan 29 12:22:21.998243 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:22:21.998249 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:22:21.998254 kernel: landlock: Up and running.
Jan 29 12:22:21.998259 kernel: SELinux: Initializing.
Jan 29 12:22:21.998265 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:22:21.998270 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:22:21.998275 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 29 12:22:21.998282 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 12:22:21.998287 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 12:22:21.998292 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 12:22:21.998298 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 29 12:22:21.998303 kernel: ... version: 4
Jan 29 12:22:21.998309 kernel: ... bit width: 48
Jan 29 12:22:21.998314 kernel: ... generic registers: 4
Jan 29 12:22:21.998324 kernel: ... value mask: 0000ffffffffffff
Jan 29 12:22:21.998337 kernel: ... max period: 00007fffffffffff
Jan 29 12:22:21.998350 kernel: ... fixed-purpose events: 3
Jan 29 12:22:21.998360 kernel: ... event mask: 000000070000000f
Jan 29 12:22:21.998373 kernel: signal: max sigframe size: 2032
Jan 29 12:22:21.998384 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 29 12:22:21.998390 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:22:21.998396 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:22:21.998401 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 29 12:22:21.998407 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:22:21.998412 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 12:22:21.998418 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 29 12:22:21.998424 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 12:22:21.998430 kernel: smp: Brought up 1 node, 16 CPUs
Jan 29 12:22:21.998435 kernel: smpboot: Max logical packages: 1
Jan 29 12:22:21.998440 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 29 12:22:21.998446 kernel: devtmpfs: initialized
Jan 29 12:22:21.998451 kernel: x86/mm: Memory block size: 128MB
Jan 29 12:22:21.998456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Jan 29 12:22:21.998462 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 29 12:22:21.998468 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:22:21.998474 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 12:22:21.998479 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:22:21.998484 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:22:21.998490 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:22:21.998495 kernel: audit: type=2000 audit(1738153336.039:1): state=initialized audit_enabled=0 res=1
Jan 29 12:22:21.998500 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:22:21.998506 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 12:22:21.998511 kernel: cpuidle: using governor menu
Jan 29 12:22:21.998517 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:22:21.998523 kernel: dca service started, version 1.12.1
Jan 29 12:22:21.998528 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 29 12:22:21.998533 kernel: PCI: Using configuration type 1 for base access
Jan 29 12:22:21.998541 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 29 12:22:21.998546 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 12:22:21.998552 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:22:21.998557 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:22:21.998564 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:22:21.998569 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:22:21.998574 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:22:21.998580 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:22:21.998585 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:22:21.998590 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:22:21.998596 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 29 12:22:21.998601 kernel: ACPI: Dynamic OEM Table Load:
Jan 29 12:22:21.998606 kernel: ACPI: SSDT 0xFFFF9B8F01601000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 29 12:22:21.998613 kernel: ACPI: Dynamic OEM Table Load:
Jan 29 12:22:21.998618 kernel: ACPI: SSDT 0xFFFF9B8F015FF800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 29 12:22:21.998623 kernel: ACPI: Dynamic OEM Table Load:
Jan 29 12:22:21.998629 kernel: ACPI: SSDT 0xFFFF9B8F015E4700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 29 12:22:21.998634 kernel: ACPI: Dynamic OEM Table Load:
Jan 29 12:22:21.998639 kernel: ACPI: SSDT 0xFFFF9B8F015F8000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 29 12:22:21.998644 kernel: ACPI: Dynamic OEM Table Load:
Jan 29 12:22:21.998650 kernel: ACPI: SSDT 0xFFFF9B8F01608000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 29 12:22:21.998655 kernel: ACPI: Dynamic OEM Table Load:
Jan 29 12:22:21.998660 kernel: ACPI: SSDT 0xFFFF9B8F01606400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 29 12:22:21.998667 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 29 12:22:21.998672 kernel: ACPI: Interpreter enabled
Jan 29 12:22:21.998677 kernel: ACPI: PM: (supports S0 S5)
Jan 29 12:22:21.998683 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 12:22:21.998688 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 29 12:22:21.998693 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 29 12:22:21.998698 kernel: HEST: Table parsing has been initialized.
Jan 29 12:22:21.998704 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 29 12:22:21.998709 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 12:22:21.998715 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 12:22:21.998721 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 29 12:22:21.998726 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 29 12:22:21.998732 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 29 12:22:21.998737 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 29 12:22:21.998743 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 29 12:22:21.998748 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 29 12:22:21.998753 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 29 12:22:21.998759 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 29 12:22:21.998765 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 29 12:22:21.998770 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 29 12:22:21.998776 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 29 12:22:21.998781 kernel: ACPI: \PIN_: New power resource
Jan 29 12:22:21.998786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 29 12:22:21.998857 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:22:21.998911 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 29 12:22:21.998959 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 29 12:22:21.998969 kernel: PCI host bridge to bus 0000:00
Jan 29 12:22:21.999018 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 12:22:21.999062 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 12:22:21.999105 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 12:22:21.999147 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 29 12:22:21.999190 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 29 12:22:21.999232 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 29 12:22:21.999295 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 29 12:22:21.999352 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 29 12:22:21.999401 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 29 12:22:21.999454 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 29 12:22:21.999503 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 29 12:22:21.999559 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 29 12:22:21.999611 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 29 12:22:21.999664 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 29 12:22:21.999734 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 29 12:22:21.999808 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 29 12:22:21.999862 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 29 12:22:21.999911 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 29 12:22:21.999961 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 29 12:22:22.000013 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 29 12:22:22.000062 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 29 12:22:22.000114 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 29 12:22:22.000164 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 29 12:22:22.000215 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 29 12:22:22.000266 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 29 12:22:22.000315 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 29 12:22:22.000374 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 29 12:22:22.000425 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 29 12:22:22.000473 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 29 12:22:22.000524 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 29 12:22:22.000578 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 29 12:22:22.000628 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 29 12:22:22.000679 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 29 12:22:22.000727 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 29 12:22:22.000775 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 29 12:22:22.000822 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jan 29 12:22:22.000870 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jan 29 12:22:22.000917 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jan 29 12:22:22.000969 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 29 12:22:22.001017 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 29 12:22:22.001071 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 29 12:22:22.001178 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 29 12:22:22.001234 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 29 12:22:22.001284 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 29 12:22:22.001336 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 29 12:22:22.001386 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 29 12:22:22.001438 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 29 12:22:22.001489 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 29 12:22:22.001575 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 29 12:22:22.001627 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 29 12:22:22.001680 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 29 12:22:22.001729 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 29 12:22:22.001784 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 29 12:22:22.001836 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 29 12:22:22.001888 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 29 12:22:22.001936 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 29 12:22:22.001988 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 29 12:22:22.002036 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 29 12:22:22.002093 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 29 12:22:22.002143 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 29 12:22:22.002196 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 29 12:22:22.002246 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 29 12:22:22.002296 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 29 12:22:22.002347 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 29 12:22:22.002401 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 29 12:22:22.002452 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 29 12:22:22.002502 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 29 12:22:22.002558 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 29 12:22:22.002609 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 29 12:22:22.002659 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 29 12:22:22.002710 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 29 12:22:22.002758 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 29 12:22:22.002807 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 29 12:22:22.002856 
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 29 12:22:22.002912 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 29 12:22:22.002964 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 29 12:22:22.003015 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 29 12:22:22.003065 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 29 12:22:22.003115 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 29 12:22:22.003165 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.003215 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 29 12:22:22.003264 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 12:22:22.003315 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 29 12:22:22.003372 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 29 12:22:22.003423 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 29 12:22:22.003474 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 29 12:22:22.003524 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 29 12:22:22.003579 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 29 12:22:22.003630 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.003682 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 29 12:22:22.003732 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 12:22:22.003784 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 29 12:22:22.003834 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 29 12:22:22.003889 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 29 12:22:22.003940 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 29 12:22:22.003991 kernel: pci 0000:06:00.0: supports D1 D2 Jan 29 12:22:22.004042 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 12:22:22.004094 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Jan 29 12:22:22.004143 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.004192 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.004244 kernel: pci_bus 0000:07: extended config space not accessible Jan 29 12:22:22.004302 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 29 12:22:22.004354 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 29 12:22:22.004407 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 29 12:22:22.004461 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 29 12:22:22.004514 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:22:22.004605 kernel: pci 0000:07:00.0: supports D1 D2 Jan 29 12:22:22.004657 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 12:22:22.004709 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 29 12:22:22.004759 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.004809 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.004819 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 29 12:22:22.004825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 29 12:22:22.004831 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 29 12:22:22.004837 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 29 12:22:22.004842 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 29 12:22:22.004848 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 29 12:22:22.004854 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 29 12:22:22.004859 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 29 12:22:22.004865 kernel: iommu: Default domain type: Translated Jan 29 12:22:22.004872 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:22:22.004877 kernel: PCI: Using ACPI for IRQ 
routing Jan 29 12:22:22.004883 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:22:22.004889 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 29 12:22:22.004894 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Jan 29 12:22:22.004900 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 29 12:22:22.004905 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 29 12:22:22.004911 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 29 12:22:22.004916 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 29 12:22:22.004969 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 29 12:22:22.005021 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 29 12:22:22.005074 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:22:22.005082 kernel: vgaarb: loaded Jan 29 12:22:22.005088 kernel: clocksource: Switched to clocksource tsc-early Jan 29 12:22:22.005094 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:22:22.005099 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:22:22.005105 kernel: pnp: PnP ACPI init Jan 29 12:22:22.005153 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 29 12:22:22.005206 kernel: pnp 00:02: [dma 0 disabled] Jan 29 12:22:22.005256 kernel: pnp 00:03: [dma 0 disabled] Jan 29 12:22:22.005305 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 29 12:22:22.005349 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 29 12:22:22.005397 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 29 12:22:22.005445 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 29 12:22:22.005492 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 29 12:22:22.005540 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 29 12:22:22.005616 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Jan 29 12:22:22.005664 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 29 12:22:22.005709 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 29 12:22:22.005754 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 29 12:22:22.005798 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 29 12:22:22.005849 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 29 12:22:22.005893 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 29 12:22:22.005938 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 29 12:22:22.005981 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 29 12:22:22.006026 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 29 12:22:22.006070 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 29 12:22:22.006116 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 29 12:22:22.006165 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 29 12:22:22.006174 kernel: pnp: PnP ACPI: found 10 devices Jan 29 12:22:22.006180 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:22:22.006186 kernel: NET: Registered PF_INET protocol family Jan 29 12:22:22.006191 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:22:22.006197 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 29 12:22:22.006203 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:22:22.006209 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:22:22.006216 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 12:22:22.006222 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 29 12:22:22.006228 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 12:22:22.006233 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 12:22:22.006239 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:22:22.006245 kernel: NET: Registered PF_XDP protocol family Jan 29 12:22:22.006294 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 29 12:22:22.006342 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 29 12:22:22.006394 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 29 12:22:22.006446 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006495 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006565 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006629 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006680 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 12:22:22.006729 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 29 12:22:22.006779 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 12:22:22.006830 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 29 12:22:22.006879 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 29 12:22:22.006927 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 12:22:22.006976 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 29 12:22:22.007025 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 29 12:22:22.007076 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 12:22:22.007125 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 29 12:22:22.007174 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 29 12:22:22.007225 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Jan 29 12:22:22.007273 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.007323 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.007372 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 29 12:22:22.007421 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.007470 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.007518 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 29 12:22:22.007598 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:22:22.007643 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:22:22.007685 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:22:22.007729 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 29 12:22:22.007771 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 29 12:22:22.007823 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 29 12:22:22.007871 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 12:22:22.007921 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 29 12:22:22.007966 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 29 12:22:22.008015 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 29 12:22:22.008060 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 29 12:22:22.008109 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 29 12:22:22.008157 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 29 12:22:22.008204 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 29 12:22:22.008251 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 29 12:22:22.008259 kernel: PCI: CLS 64 bytes, default 64 Jan 29 12:22:22.008265 kernel: DMAR: No ATSR found Jan 29 12:22:22.008271 kernel: DMAR: No SATC 
found Jan 29 12:22:22.008277 kernel: DMAR: dmar0: Using Queued invalidation Jan 29 12:22:22.008325 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 29 12:22:22.008378 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 29 12:22:22.008426 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 29 12:22:22.008475 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 29 12:22:22.008524 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 29 12:22:22.008608 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 29 12:22:22.008656 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 29 12:22:22.008704 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 29 12:22:22.008753 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 29 12:22:22.008801 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 29 12:22:22.008852 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 29 12:22:22.008900 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 29 12:22:22.008949 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 29 12:22:22.008997 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 29 12:22:22.009045 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 29 12:22:22.009093 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 29 12:22:22.009142 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 29 12:22:22.009190 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 29 12:22:22.009241 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 29 12:22:22.009290 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 29 12:22:22.009339 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 29 12:22:22.009388 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 29 12:22:22.009438 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 29 12:22:22.009489 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 29 12:22:22.009540 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 29 12:22:22.009627 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 29 
12:22:22.009680 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 29 12:22:22.009689 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 29 12:22:22.009695 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:22:22.009701 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 29 12:22:22.009706 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 29 12:22:22.009712 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 29 12:22:22.009718 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 29 12:22:22.009723 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 29 12:22:22.009776 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 29 12:22:22.009786 kernel: Initialise system trusted keyrings Jan 29 12:22:22.009792 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 29 12:22:22.009798 kernel: Key type asymmetric registered Jan 29 12:22:22.009803 kernel: Asymmetric key parser 'x509' registered Jan 29 12:22:22.009809 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:22:22.009815 kernel: io scheduler mq-deadline registered Jan 29 12:22:22.009820 kernel: io scheduler kyber registered Jan 29 12:22:22.009826 kernel: io scheduler bfq registered Jan 29 12:22:22.009874 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 29 12:22:22.009924 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 29 12:22:22.009972 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 29 12:22:22.010022 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 29 12:22:22.010071 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 29 12:22:22.010119 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 29 12:22:22.010173 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 29 12:22:22.010183 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Jan 29 12:22:22.010189 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 29 12:22:22.010195 kernel: pstore: Using crash dump compression: deflate Jan 29 12:22:22.010201 kernel: pstore: Registered erst as persistent store backend Jan 29 12:22:22.010207 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:22:22.010212 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:22:22.010218 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:22:22.010224 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 12:22:22.010230 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 29 12:22:22.010279 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 29 12:22:22.010287 kernel: i8042: PNP: No PS/2 controller found. Jan 29 12:22:22.010331 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 29 12:22:22.010376 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 29 12:22:22.010421 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-29T12:22:20 UTC (1738153340) Jan 29 12:22:22.010465 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 29 12:22:22.010473 kernel: intel_pstate: Intel P-state driver initializing Jan 29 12:22:22.010481 kernel: intel_pstate: Disabling energy efficiency optimization Jan 29 12:22:22.010487 kernel: intel_pstate: HWP enabled Jan 29 12:22:22.010492 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 29 12:22:22.010498 kernel: vesafb: scrolling: redraw Jan 29 12:22:22.010504 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 29 12:22:22.010510 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000d92ef0da, using 768k, total 768k Jan 29 12:22:22.010515 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 12:22:22.010521 kernel: fb0: VESA VGA frame buffer device Jan 29 12:22:22.010527 kernel: NET: Registered PF_INET6 
protocol family Jan 29 12:22:22.010533 kernel: Segment Routing with IPv6 Jan 29 12:22:22.010542 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:22:22.010548 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:22:22.010554 kernel: Key type dns_resolver registered Jan 29 12:22:22.010580 kernel: microcode: Microcode Update Driver: v2.2. Jan 29 12:22:22.010585 kernel: IPI shorthand broadcast: enabled Jan 29 12:22:22.010607 kernel: sched_clock: Marking stable (2477424038, 1385639505)->(4406231711, -543168168) Jan 29 12:22:22.010613 kernel: registered taskstats version 1 Jan 29 12:22:22.010618 kernel: Loading compiled-in X.509 certificates Jan 29 12:22:22.010624 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:22:22.010631 kernel: Key type .fscrypt registered Jan 29 12:22:22.010636 kernel: Key type fscrypt-provisioning registered Jan 29 12:22:22.010642 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:22:22.010648 kernel: ima: No architecture policies found Jan 29 12:22:22.010653 kernel: clk: Disabling unused clocks Jan 29 12:22:22.010659 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:22:22.010665 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:22:22.010670 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:22:22.010677 kernel: Run /init as init process Jan 29 12:22:22.010683 kernel: with arguments: Jan 29 12:22:22.010689 kernel: /init Jan 29 12:22:22.010694 kernel: with environment: Jan 29 12:22:22.010700 kernel: HOME=/ Jan 29 12:22:22.010705 kernel: TERM=linux Jan 29 12:22:22.010711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:22:22.010718 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ 
+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:22:22.010726 systemd[1]: Detected architecture x86-64. Jan 29 12:22:22.010732 systemd[1]: Running in initrd. Jan 29 12:22:22.010738 systemd[1]: No hostname configured, using default hostname. Jan 29 12:22:22.010743 systemd[1]: Hostname set to . Jan 29 12:22:22.010749 systemd[1]: Initializing machine ID from random generator. Jan 29 12:22:22.010755 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:22:22.010761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:22:22.010767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:22:22.010774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:22:22.010780 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:22:22.010786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:22:22.010792 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:22:22.010799 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:22:22.010805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:22:22.010811 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 29 12:22:22.010818 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 29 12:22:22.010823 kernel: clocksource: Switched to clocksource tsc Jan 29 12:22:22.010829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 29 12:22:22.010835 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:22:22.010841 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:22:22.010847 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:22:22.010853 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:22:22.010859 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:22:22.010866 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:22:22.010872 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:22:22.010878 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:22:22.010884 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:22:22.010890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:22:22.010896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:22:22.010902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:22:22.010908 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:22:22.010913 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:22:22.010921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:22:22.010927 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:22:22.010932 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:22:22.010938 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:22:22.010955 systemd-journald[264]: Collecting audit messages is disabled. Jan 29 12:22:22.010970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 29 12:22:22.010976 systemd-journald[264]: Journal started Jan 29 12:22:22.010990 systemd-journald[264]: Runtime Journal (/run/log/journal/df7db472609545f3897095c12bdb6813) is 8.0M, max 639.9M, 631.9M free. Jan 29 12:22:22.033447 systemd-modules-load[265]: Inserted module 'overlay' Jan 29 12:22:22.061647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:22:22.107571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:22:22.107588 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:22:22.126466 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:22:22.126568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:22:22.126655 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:22:22.127593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:22:22.144366 systemd-modules-load[265]: Inserted module 'br_netfilter' Jan 29 12:22:22.144540 kernel: Bridge firewalling registered Jan 29 12:22:22.144806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:22:22.214102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:22:22.234352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:22.263920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:22:22.274928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:22:22.318782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:22:22.319222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 29 12:22:22.349743 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:22:22.358141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:22:22.359884 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:22:22.363945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:22:22.370765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:22:22.379297 systemd-resolved[295]: Positive Trust Anchors: Jan 29 12:22:22.379304 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:22:22.379329 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:22:22.380921 systemd-resolved[295]: Defaulting to hostname 'linux'. Jan 29 12:22:22.382851 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:22:22.404756 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:22:22.424742 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 12:22:22.534767 dracut-cmdline[307]: dracut-dracut-053 Jan 29 12:22:22.541834 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:22:22.738568 kernel: SCSI subsystem initialized Jan 29 12:22:22.761543 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:22:22.784574 kernel: iscsi: registered transport (tcp) Jan 29 12:22:22.816864 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:22:22.816882 kernel: QLogic iSCSI HBA Driver Jan 29 12:22:22.850800 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:22:22.872832 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:22:22.929582 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:22:22.929595 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:22:22.949432 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:22:23.009618 kernel: raid6: avx2x4 gen() 51398 MB/s Jan 29 12:22:23.041577 kernel: raid6: avx2x2 gen() 52407 MB/s Jan 29 12:22:23.078157 kernel: raid6: avx2x1 gen() 44460 MB/s Jan 29 12:22:23.078175 kernel: raid6: using algorithm avx2x2 gen() 52407 MB/s Jan 29 12:22:23.126149 kernel: raid6: .... 
xor() 30825 MB/s, rmw enabled Jan 29 12:22:23.126167 kernel: raid6: using avx2x2 recovery algorithm Jan 29 12:22:23.167570 kernel: xor: automatically using best checksumming function avx Jan 29 12:22:23.284602 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:22:23.289844 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:22:23.321811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:22:23.328592 systemd-udevd[493]: Using default interface naming scheme 'v255'. Jan 29 12:22:23.333683 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:22:23.365732 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:22:23.414849 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Jan 29 12:22:23.432243 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:22:23.456833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:22:23.516321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:22:23.561147 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 12:22:23.561163 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 12:22:23.531670 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:22:23.577542 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:22:23.577371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:22:23.610478 kernel: PTP clock support registered Jan 29 12:22:23.610499 kernel: ACPI: bus type USB registered Jan 29 12:22:23.610517 kernel: usbcore: registered new interface driver usbfs Jan 29 12:22:23.577528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 12:22:23.655646 kernel: usbcore: registered new interface driver hub Jan 29 12:22:23.655661 kernel: usbcore: registered new device driver usb Jan 29 12:22:23.655602 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:22:23.699895 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:22:23.699911 kernel: libata version 3.00 loaded. Jan 29 12:22:23.699919 kernel: AES CTR mode by8 optimization enabled Jan 29 12:22:23.699927 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 29 12:22:23.655630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:22:23.724966 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 29 12:22:23.655757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:24.395873 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 12:22:24.396047 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 29 12:22:24.396119 kernel: pps pps0: new PPS source ptp0 Jan 29 12:22:24.396183 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 29 12:22:24.396246 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 29 12:22:24.396313 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 12:22:24.396373 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 12:22:24.396433 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 29 12:22:24.396493 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:54 Jan 29 12:22:24.396611 kernel: ahci 0000:00:17.0: version 3.0 Jan 29 12:22:24.396676 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 29 12:22:24.396735 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 29 12:22:24.396794 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 29 
12:22:24.396852 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 29 12:22:24.396912 kernel: hub 1-0:1.0: USB hub found Jan 29 12:22:24.396978 kernel: scsi host0: ahci Jan 29 12:22:24.397040 kernel: scsi host1: ahci Jan 29 12:22:24.397101 kernel: scsi host2: ahci Jan 29 12:22:24.397158 kernel: scsi host3: ahci Jan 29 12:22:24.397214 kernel: scsi host4: ahci Jan 29 12:22:24.397272 kernel: scsi host5: ahci Jan 29 12:22:24.397328 kernel: scsi host6: ahci Jan 29 12:22:24.397388 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 133 Jan 29 12:22:24.397397 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 133 Jan 29 12:22:24.397405 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 133 Jan 29 12:22:24.397412 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 133 Jan 29 12:22:24.397419 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 133 Jan 29 12:22:24.397426 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 133 Jan 29 12:22:24.397433 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 133 Jan 29 12:22:24.397440 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 29 12:22:24.397505 kernel: pps pps1: new PPS source ptp1 Jan 29 12:22:24.397589 kernel: hub 1-0:1.0: 16 ports detected Jan 29 12:22:24.397668 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 29 12:22:24.397733 kernel: hub 2-0:1.0: USB hub found Jan 29 12:22:24.397796 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 12:22:24.397858 kernel: hub 2-0:1.0: 10 ports detected Jan 29 12:22:24.397915 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:55 Jan 29 12:22:24.397978 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 29 12:22:24.398047 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 29 12:22:24.398109 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.398118 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 29 12:22:24.398179 kernel: hub 1-14:1.0: USB hub found Jan 29 12:22:24.398243 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 12:22:24.398252 kernel: hub 1-14:1.0: 4 ports detected Jan 29 12:22:24.398312 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.398320 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 12:22:24.398328 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 12:22:24.398335 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:23.755691 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 12:22:24.455584 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 12:22:24.455600 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.455609 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.471573 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 12:22:24.471591 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 12:22:24.497309 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Jan 29 12:22:25.087809 kernel: ata1.00: Features: NCQ-prio Jan 29 12:22:25.087825 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 12:22:25.087915 kernel: ata2.00: Features: NCQ-prio Jan 29 12:22:25.087929 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 29 12:22:25.088051 kernel: ata1.00: configured for UDMA/133 Jan 29 12:22:25.088062 kernel: ata2.00: configured for UDMA/133 Jan 29 12:22:25.088071 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 12:22:25.088151 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 12:22:25.088224 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 29 12:22:25.088302 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 12:22:25.088315 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 29 12:22:25.088390 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.088400 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 12:22:25.088470 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 12:22:25.088542 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 29 12:22:25.088657 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 12:22:25.088727 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 29 12:22:25.088794 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 
29 12:22:25.088863 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 29 12:22:25.088932 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 12:22:25.088943 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 12:22:25.089008 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 12:22:25.089019 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 29 12:22:25.089086 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 12:22:25.089160 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 29 12:22:25.089227 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 29 12:22:25.089302 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 29 12:22:25.089371 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 12:22:25.089439 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 29 12:22:25.089507 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.089517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:22:25.089527 kernel: GPT:9289727 != 937703087 Jan 29 12:22:25.089538 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:22:25.089548 kernel: GPT:9289727 != 937703087 Jan 29 12:22:25.089559 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 29 12:22:25.089597 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:25.089606 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 29 12:22:25.089688 kernel: usbcore: registered new interface driver usbhid Jan 29 12:22:25.089698 kernel: usbhid: USB HID core driver Jan 29 12:22:25.089707 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (582) Jan 29 12:22:25.089717 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 12:22:25.089789 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (585) Jan 29 12:22:25.089801 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Jan 29 12:22:25.776590 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 29 12:22:25.776634 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 12:22:25.776920 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 29 12:22:25.777177 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 29 12:22:25.777205 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 29 12:22:25.777455 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.777514 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:25.777573 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.777617 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:25.777658 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.777700 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 12:22:25.778027 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 
12:22:25.778054 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 29 12:22:25.778284 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 12:22:24.509770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:22:25.813032 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 29 12:22:25.813118 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 29 12:22:24.561914 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:22:24.603708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:22:24.619511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:22:24.619548 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:22:24.644688 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:22:24.945800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:22:25.886682 disk-uuid[704]: Primary Header is updated. Jan 29 12:22:25.886682 disk-uuid[704]: Secondary Entries is updated. Jan 29 12:22:25.886682 disk-uuid[704]: Secondary Header is updated. Jan 29 12:22:24.945849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:25.080694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:22:25.213757 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:22:25.246966 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 29 12:22:25.283308 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 29 12:22:25.311742 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. 
Jan 29 12:22:25.333572 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 29 12:22:25.348319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 29 12:22:25.359733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:25.380613 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:22:25.397060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:22:25.428970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:22:26.481304 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:26.502325 disk-uuid[705]: The operation has completed successfully. Jan 29 12:22:26.510756 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:26.534498 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:22:26.534550 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:22:26.583810 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:22:26.609715 sh[745]: Success Jan 29 12:22:26.618623 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:22:26.655843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:22:26.675421 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:22:26.697891 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 12:22:26.739987 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:22:26.740007 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:22:26.762394 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:22:26.782412 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:22:26.801360 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:22:26.840583 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 12:22:26.842827 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:22:26.853036 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:22:26.865878 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:22:26.914558 kernel: BTRFS info (device sdb6): first mount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7 Jan 29 12:22:26.914609 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:22:26.914617 kernel: BTRFS info (device sdb6): using free space tree Jan 29 12:22:26.922073 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:22:26.992452 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 29 12:22:26.992463 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 29 12:22:27.016581 kernel: BTRFS info (device sdb6): last unmount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7 Jan 29 12:22:27.024155 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:22:27.043732 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:22:27.070586 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 29 12:22:27.094688 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:22:27.101853 ignition[833]: Ignition 2.19.0 Jan 29 12:22:27.101859 ignition[833]: Stage: fetch-offline Jan 29 12:22:27.104034 unknown[833]: fetched base config from "system" Jan 29 12:22:27.101877 ignition[833]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:22:27.104038 unknown[833]: fetched user config from "system" Jan 29 12:22:27.101883 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 12:22:27.104905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:22:27.101939 ignition[833]: parsed url from cmdline: "" Jan 29 12:22:27.136792 systemd-networkd[929]: lo: Link UP Jan 29 12:22:27.101941 ignition[833]: no config URL provided Jan 29 12:22:27.136794 systemd-networkd[929]: lo: Gained carrier Jan 29 12:22:27.101944 ignition[833]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:22:27.139836 systemd-networkd[929]: Enumeration completed Jan 29 12:22:27.101966 ignition[833]: parsing config with SHA512: e68022bdebc88343ee1a65c5fbcf48196ea4c9ef41629399b60951a22552c1e848ec03eafd5b5351a51bb888c3077b1b135b27bb0d5be38c0b0085898d8132d7 Jan 29 12:22:27.139900 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:22:27.104253 ignition[833]: fetch-offline: fetch-offline passed Jan 29 12:22:27.140688 systemd-networkd[929]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:22:27.104256 ignition[833]: POST message to Packet Timeline Jan 29 12:22:27.158821 systemd[1]: Reached target network.target - Network. Jan 29 12:22:27.104259 ignition[833]: POST Status error: resource requires networking Jan 29 12:22:27.168749 systemd-networkd[929]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 12:22:27.104295 ignition[833]: Ignition finished successfully Jan 29 12:22:27.174685 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 12:22:27.214933 ignition[943]: Ignition 2.19.0 Jan 29 12:22:27.187766 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:22:27.214949 ignition[943]: Stage: kargs Jan 29 12:22:27.198787 systemd-networkd[929]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:22:27.388664 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 29 12:22:27.215350 ignition[943]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:22:27.377956 systemd-networkd[929]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:22:27.215377 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 12:22:27.216997 ignition[943]: kargs: kargs passed Jan 29 12:22:27.217005 ignition[943]: POST message to Packet Timeline Jan 29 12:22:27.217028 ignition[943]: GET https://metadata.packet.net/metadata: attempt #1 Jan 29 12:22:27.218279 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41932->[::1]:53: read: connection refused Jan 29 12:22:27.418652 ignition[943]: GET https://metadata.packet.net/metadata: attempt #2 Jan 29 12:22:27.418929 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37451->[::1]:53: read: connection refused Jan 29 12:22:27.569664 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 29 12:22:27.570380 systemd-networkd[929]: eno1: Link UP Jan 29 12:22:27.570591 systemd-networkd[929]: eno2: Link UP Jan 29 12:22:27.570747 systemd-networkd[929]: enp1s0f0np0: Link UP Jan 29 12:22:27.570919 systemd-networkd[929]: enp1s0f0np0: Gained carrier Jan 29 12:22:27.583791 
systemd-networkd[929]: enp1s0f1np1: Link UP Jan 29 12:22:27.619727 systemd-networkd[929]: enp1s0f0np0: DHCPv4 address 139.178.70.85/31, gateway 139.178.70.84 acquired from 145.40.83.140 Jan 29 12:22:27.819349 ignition[943]: GET https://metadata.packet.net/metadata: attempt #3 Jan 29 12:22:27.820502 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34631->[::1]:53: read: connection refused Jan 29 12:22:28.432238 systemd-networkd[929]: enp1s0f1np1: Gained carrier Jan 29 12:22:28.621035 ignition[943]: GET https://metadata.packet.net/metadata: attempt #4 Jan 29 12:22:28.622179 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35481->[::1]:53: read: connection refused Jan 29 12:22:28.752160 systemd-networkd[929]: enp1s0f0np0: Gained IPv6LL Jan 29 12:22:30.160157 systemd-networkd[929]: enp1s0f1np1: Gained IPv6LL Jan 29 12:22:30.223782 ignition[943]: GET https://metadata.packet.net/metadata: attempt #5 Jan 29 12:22:30.224814 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40879->[::1]:53: read: connection refused Jan 29 12:22:33.427575 ignition[943]: GET https://metadata.packet.net/metadata: attempt #6 Jan 29 12:22:35.349860 ignition[943]: GET result: OK Jan 29 12:22:35.717435 ignition[943]: Ignition finished successfully Jan 29 12:22:35.721429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:22:35.756141 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 12:22:35.784445 ignition[961]: Ignition 2.19.0 Jan 29 12:22:35.784457 ignition[961]: Stage: disks Jan 29 12:22:35.784743 ignition[961]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:22:35.784760 ignition[961]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 12:22:35.786154 ignition[961]: disks: disks passed Jan 29 12:22:35.786161 ignition[961]: POST message to Packet Timeline Jan 29 12:22:35.786181 ignition[961]: GET https://metadata.packet.net/metadata: attempt #1 Jan 29 12:22:37.165893 ignition[961]: GET result: OK Jan 29 12:22:37.981378 ignition[961]: Ignition finished successfully Jan 29 12:22:37.984844 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:22:38.000882 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:22:38.018822 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:22:38.039896 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:22:38.060883 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:22:38.080876 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:22:38.110016 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:22:38.134530 systemd-fsck[981]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 12:22:38.144950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:22:38.145518 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:22:38.271356 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:22:38.286790 kernel: EXT4-fs (sdb9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:22:38.271616 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:22:38.308797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 12:22:38.317975 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:22:38.442837 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (990) Jan 29 12:22:38.442853 kernel: BTRFS info (device sdb6): first mount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7 Jan 29 12:22:38.442862 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:22:38.442869 kernel: BTRFS info (device sdb6): using free space tree Jan 29 12:22:38.442876 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 29 12:22:38.442883 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 29 12:22:38.340248 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 12:22:38.474677 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 29 12:22:38.485657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:22:38.526765 coreos-metadata[1008]: Jan 29 12:22:38.500 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 12:22:38.546728 coreos-metadata[992]: Jan 29 12:22:38.500 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 12:22:38.485676 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:22:38.509765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:22:38.534884 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:22:38.570073 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 12:22:38.609609 initrd-setup-root[1022]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:22:38.619613 initrd-setup-root[1029]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:22:38.629656 initrd-setup-root[1036]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:22:38.639653 initrd-setup-root[1043]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:22:38.654849 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:22:38.673803 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:22:38.700292 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:22:38.717750 kernel: BTRFS info (device sdb6): last unmount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7 Jan 29 12:22:38.710392 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:22:38.725739 ignition[1110]: INFO : Ignition 2.19.0 Jan 29 12:22:38.725739 ignition[1110]: INFO : Stage: mount Jan 29 12:22:38.725739 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:22:38.725739 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 12:22:38.725739 ignition[1110]: INFO : mount: mount passed Jan 29 12:22:38.725739 ignition[1110]: INFO : POST message to Packet Timeline Jan 29 12:22:38.725739 ignition[1110]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 29 12:22:38.736795 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:22:39.625157 coreos-metadata[992]: Jan 29 12:22:39.625 INFO Fetch successful Jan 29 12:22:39.656605 ignition[1110]: INFO : GET result: OK Jan 29 12:22:39.682260 coreos-metadata[992]: Jan 29 12:22:39.682 INFO wrote hostname ci-4081.3.0-a-eb3371d08a to /sysroot/etc/hostname Jan 29 12:22:39.683441 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 29 12:22:40.139037 ignition[1110]: INFO : Ignition finished successfully Jan 29 12:22:40.142119 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:22:40.304629 coreos-metadata[1008]: Jan 29 12:22:40.304 INFO Fetch successful Jan 29 12:22:40.382065 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 29 12:22:40.382130 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 29 12:22:40.419767 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:22:40.430084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:22:40.484549 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1134) Jan 29 12:22:40.484594 kernel: BTRFS info (device sdb6): first mount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7 Jan 29 12:22:40.514560 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:22:40.532939 kernel: BTRFS info (device sdb6): using free space tree Jan 29 12:22:40.572574 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 29 12:22:40.572590 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 29 12:22:40.585941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:22:40.614135 ignition[1151]: INFO : Ignition 2.19.0
Jan 29 12:22:40.614135 ignition[1151]: INFO : Stage: files
Jan 29 12:22:40.629809 ignition[1151]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:22:40.629809 ignition[1151]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 29 12:22:40.629809 ignition[1151]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:22:40.629809 ignition[1151]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 12:22:40.618232 unknown[1151]: wrote ssh authorized keys file for user: core
Jan 29 12:22:40.764725 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 12:22:40.847583 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:22:40.847583 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 12:22:41.322897 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 12:22:41.467343 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:41.467343 ignition[1151]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: files passed
Jan 29 12:22:41.497862 ignition[1151]: INFO : POST message to Packet Timeline
Jan 29 12:22:41.497862 ignition[1151]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 29 12:22:43.275573 ignition[1151]: INFO : GET result: OK
Jan 29 12:22:43.646053 ignition[1151]: INFO : Ignition finished successfully
Jan 29 12:22:43.648719 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:22:43.678884 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:22:43.689224 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:22:43.699020 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:22:43.699078 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:22:43.741323 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:22:43.758038 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:22:43.788811 initrd-setup-root-after-ignition[1190]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:22:43.788811 initrd-setup-root-after-ignition[1190]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:22:43.802899 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:22:43.790895 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:22:43.888376 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:22:43.888525 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:22:43.909216 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:22:43.930804 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:22:43.950985 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:22:43.960911 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:22:44.041455 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:22:44.070885 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:22:44.076208 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:22:44.103882 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:22:44.125041 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:22:44.135417 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:22:44.135852 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:22:44.181035 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:22:44.191266 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:22:44.201438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:22:44.227268 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:22:44.249270 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:22:44.259438 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:22:44.278435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:22:44.295477 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:22:44.327284 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:22:44.337417 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:22:44.362146 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:22:44.362587 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:22:44.387269 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:22:44.397445 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:22:44.427114 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:22:44.427574 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:22:44.449139 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:22:44.449565 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:22:44.487933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:22:44.488376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:22:44.509475 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:22:44.527126 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:22:44.530752 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:22:44.549275 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:22:44.557531 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:22:44.574388 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:22:44.574729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:22:44.604276 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:22:44.604608 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:22:44.622340 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:22:44.622774 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:22:44.641339 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:22:44.641749 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:22:44.660337 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 12:22:44.761651 ignition[1215]: INFO : Ignition 2.19.0
Jan 29 12:22:44.761651 ignition[1215]: INFO : Stage: umount
Jan 29 12:22:44.761651 ignition[1215]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:22:44.761651 ignition[1215]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 29 12:22:44.761651 ignition[1215]: INFO : umount: umount passed
Jan 29 12:22:44.761651 ignition[1215]: INFO : POST message to Packet Timeline
Jan 29 12:22:44.761651 ignition[1215]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 29 12:22:44.660767 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 12:22:44.688659 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:22:44.724688 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:22:44.724859 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:22:44.760802 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:22:44.769614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:22:44.769766 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:22:44.793962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:22:44.794069 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:22:44.857635 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:22:44.859679 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:22:44.859932 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:22:44.874744 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:22:44.874998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:22:45.843985 ignition[1215]: INFO : GET result: OK
Jan 29 12:22:46.187034 ignition[1215]: INFO : Ignition finished successfully
Jan 29 12:22:46.190256 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:22:46.190567 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:22:46.207916 systemd[1]: Stopped target network.target - Network.
Jan 29 12:22:46.222785 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:22:46.223054 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:22:46.240972 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:22:46.241114 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:22:46.259042 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:22:46.259200 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:22:46.267208 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:22:46.267368 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:22:46.294032 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:22:46.294201 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:22:46.312397 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:22:46.327693 systemd-networkd[929]: enp1s0f0np0: DHCPv6 lease lost
Jan 29 12:22:46.330024 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:22:46.341766 systemd-networkd[929]: enp1s0f1np1: DHCPv6 lease lost
Jan 29 12:22:46.348507 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:22:46.348846 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:22:46.368247 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:22:46.368672 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:22:46.388616 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:22:46.388735 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:22:46.421752 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:22:46.445693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:22:46.445735 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:22:46.464866 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:22:46.464960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:22:46.484956 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:22:46.485141 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:22:46.502945 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:22:46.503125 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:22:46.522209 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:22:46.543923 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:22:46.544336 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:22:46.584675 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:22:46.584822 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:22:46.600988 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:22:46.601089 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:22:46.621902 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:22:46.622044 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:22:46.660737 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:22:46.660992 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:22:46.690976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:22:46.691220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:22:46.749676 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:22:46.772727 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:22:46.772977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:22:46.976717 systemd-journald[264]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:22:46.793845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:22:46.794013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:22:46.815903 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:22:46.816176 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:22:46.849882 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:22:46.850210 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:22:46.865869 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:22:46.898631 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:22:46.920923 systemd[1]: Switching root.
Jan 29 12:22:47.060739 systemd-journald[264]: Journal stopped
Jan 29 12:22:21.997035 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Jan 29 12:22:21.997049 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 12:22:21.997055 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:22:21.997061 kernel: BIOS-provided physical RAM map:
Jan 29 12:22:21.997065 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 29 12:22:21.997069 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 29 12:22:21.997073 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 29 12:22:21.997078 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 29 12:22:21.997082 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 29 12:22:21.997086 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Jan 29 12:22:21.997090 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Jan 29 12:22:21.997095 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Jan 29 12:22:21.997099 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Jan 29 12:22:21.997103 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 29 12:22:21.997108 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 29 12:22:21.997113 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 29 12:22:21.997118 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 29 12:22:21.997123 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 29 12:22:21.997128 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 29 12:22:21.997132 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 12:22:21.997136 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 29 12:22:21.997141 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 29 12:22:21.997145 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 29 12:22:21.997150 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 29 12:22:21.997154 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 29 12:22:21.997159 kernel: NX (Execute Disable) protection: active
Jan 29 12:22:21.997163 kernel: APIC: Static calls initialized
Jan 29 12:22:21.997168 kernel: SMBIOS 3.2.1 present.
Jan 29 12:22:21.997174 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jan 29 12:22:21.997178 kernel: tsc: Detected 3400.000 MHz processor
Jan 29 12:22:21.997183 kernel: tsc: Detected 3399.906 MHz TSC
Jan 29 12:22:21.997188 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 12:22:21.997193 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 12:22:21.997198 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 29 12:22:21.997203 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 29 12:22:21.997207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 12:22:21.997212 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 29 12:22:21.997217 kernel: Using GB pages for direct mapping
Jan 29 12:22:21.997222 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:22:21.997227 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 29 12:22:21.997234 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 29 12:22:21.997239 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 29 12:22:21.997244 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 29 12:22:21.997249 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 29 12:22:21.997255 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 29 12:22:21.997260 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 29 12:22:21.997265 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 29 12:22:21.997270 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 29 12:22:21.997275 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 29 12:22:21.997280 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 29 12:22:21.997285 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 29 12:22:21.997291 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 29 12:22:21.997296 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997301 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 29 12:22:21.997306 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 29 12:22:21.997311 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997316 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997321 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 29 12:22:21.997326 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 29 12:22:21.997331 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997336 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 29 12:22:21.997341 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 29 12:22:21.997346 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 29 12:22:21.997351 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 29 12:22:21.997356 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 29 12:22:21.997362 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 29 12:22:21.997366 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 29 12:22:21.997371 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 29 12:22:21.997377 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 29 12:22:21.997382 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 29 12:22:21.997387 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 29 12:22:21.997392 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 29 12:22:21.997397 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 29 12:22:21.997402 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 29 12:22:21.997407 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 29 12:22:21.997412 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 29 12:22:21.997417 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 29 12:22:21.997423 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 29 12:22:21.997428 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 29 12:22:21.997433 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 29 12:22:21.997438 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 29 12:22:21.997443 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 29 12:22:21.997448 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 29 12:22:21.997453 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 29 12:22:21.997458 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 29 12:22:21.997462 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 29 12:22:21.997468 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 29 12:22:21.997473 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 29 12:22:21.997478 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 29 12:22:21.997483 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 29 12:22:21.997488 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 29 12:22:21.997493 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 29 12:22:21.997498 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 29 12:22:21.997503 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 29 12:22:21.997508 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 29 12:22:21.997513 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 29 12:22:21.997518 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 29 12:22:21.997523 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 29 12:22:21.997528 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 29 12:22:21.997533 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 29 12:22:21.997542 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 29 12:22:21.997547 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 29 12:22:21.997552 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 29 12:22:21.997557 kernel: No NUMA configuration found
Jan 29 12:22:21.997562 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 29 12:22:21.997568 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 29 12:22:21.997573 kernel: Zone ranges:
Jan 29 12:22:21.997578 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 12:22:21.997583 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 12:22:21.997588 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 29 12:22:21.997593 kernel: Movable zone start for each node
Jan 29 12:22:21.997598 kernel: Early memory node ranges
Jan 29 12:22:21.997603 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 29 12:22:21.997608 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 29 12:22:21.997614 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff]
Jan 29 12:22:21.997619 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff]
Jan 29 12:22:21.997624 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 29 12:22:21.997629 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 29 12:22:21.997638 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 29 12:22:21.997644 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 29 12:22:21.997649 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 12:22:21.997655 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 29 12:22:21.997661 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 29 12:22:21.997666 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 29 12:22:21.997672 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 29 12:22:21.997677 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 29 12:22:21.997683 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 29 12:22:21.997688 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 29 12:22:21.997693 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 29 12:22:21.997699 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 29 12:22:21.997704 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 29 12:22:21.997710 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 29 12:22:21.997716 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 29 12:22:21.997721 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 29 12:22:21.997726 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 29 12:22:21.997731 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 29 12:22:21.997737 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 29 12:22:21.997742 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 29 12:22:21.997747 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 29 12:22:21.997752 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 29 12:22:21.997758 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 29 12:22:21.997764 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 29 12:22:21.997769 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 29 12:22:21.997774 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 29 12:22:21.997779 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 29 12:22:21.997785 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 29 12:22:21.997790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 12:22:21.997795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 12:22:21.997801 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 12:22:21.997806 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 12:22:21.997812 kernel: TSC deadline timer available
Jan 29 12:22:21.997818 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 29 12:22:21.997823 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 29 12:22:21.997829 kernel: Booting paravirtualized kernel on bare hardware
Jan 29 12:22:21.997834 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 12:22:21.997840 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 12:22:21.997845 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 12:22:21.997850 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 12:22:21.997856 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 12:22:21.997862 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:22:21.997868 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:22:21.997873 kernel: random: crng init done
Jan 29 12:22:21.997879 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 29 12:22:21.997884 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 29 12:22:21.997889 kernel: Fallback order for Node 0: 0
Jan 29 12:22:21.997895 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 29 12:22:21.997901 kernel: Policy zone: Normal
Jan 29 12:22:21.997906 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:22:21.997912 kernel: software IO TLB: area num 16.
Jan 29 12:22:21.997917 kernel: Memory: 32720300K/33452980K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 732420K reserved, 0K cma-reserved)
Jan 29 12:22:21.997923 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 12:22:21.997928 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 12:22:21.997933 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 12:22:21.997939 kernel: Dynamic Preempt: voluntary
Jan 29 12:22:21.997944 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:22:21.997951 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:22:21.997956 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 12:22:21.997962 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:22:21.997967 kernel: Rude variant of Tasks RCU enabled.
Jan 29 12:22:21.997972 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:22:21.997978 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:22:21.997983 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 12:22:21.997988 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 29 12:22:21.997994 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:22:21.997999 kernel: Console: colour dummy device 80x25
Jan 29 12:22:21.998005 kernel: printk: console [tty0] enabled
Jan 29 12:22:21.998011 kernel: printk: console [ttyS1] enabled
Jan 29 12:22:21.998016 kernel: ACPI: Core revision 20230628
Jan 29 12:22:21.998021 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 29 12:22:21.998027 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 12:22:21.998032 kernel: DMAR: Host address width 39
Jan 29 12:22:21.998037 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 29 12:22:21.998043 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 29 12:22:21.998048 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 29 12:22:21.998054 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 29 12:22:21.998060 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 29 12:22:21.998065 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 29 12:22:21.998070 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 29 12:22:21.998076 kernel: x2apic enabled Jan 29 12:22:21.998081 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 29 12:22:21.998087 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 29 12:22:21.998092 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 29 12:22:21.998097 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 29 12:22:21.998104 kernel: process: using mwait in idle threads Jan 29 12:22:21.998109 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 12:22:21.998114 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 12:22:21.998119 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:22:21.998125 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 12:22:21.998130 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 12:22:21.998135 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 29 12:22:21.998141 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:22:21.998146 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 29 12:22:21.998151 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 29 12:22:21.998156 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 12:22:21.998163 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 12:22:21.998168 kernel: TAA: Mitigation: TSX disabled Jan 29 12:22:21.998174 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 29 12:22:21.998179 kernel: SRBDS: Mitigation: Microcode Jan 29 12:22:21.998184 kernel: GDS: Mitigation: Microcode Jan 29 12:22:21.998189 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' Jan 29 12:22:21.998195 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:22:21.998200 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:22:21.998205 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 12:22:21.998211 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 12:22:21.998216 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:22:21.998222 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 12:22:21.998227 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 12:22:21.998233 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 29 12:22:21.998238 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:22:21.998243 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:22:21.998249 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:22:21.998254 kernel: landlock: Up and running. Jan 29 12:22:21.998259 kernel: SELinux: Initializing. Jan 29 12:22:21.998265 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:22:21.998270 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:22:21.998275 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 29 12:22:21.998282 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 12:22:21.998287 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 12:22:21.998292 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 12:22:21.998298 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 29 12:22:21.998303 kernel: ... 
version: 4 Jan 29 12:22:21.998309 kernel: ... bit width: 48 Jan 29 12:22:21.998314 kernel: ... generic registers: 4 Jan 29 12:22:21.998324 kernel: ... value mask: 0000ffffffffffff Jan 29 12:22:21.998337 kernel: ... max period: 00007fffffffffff Jan 29 12:22:21.998350 kernel: ... fixed-purpose events: 3 Jan 29 12:22:21.998360 kernel: ... event mask: 000000070000000f Jan 29 12:22:21.998373 kernel: signal: max sigframe size: 2032 Jan 29 12:22:21.998384 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 29 12:22:21.998390 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:22:21.998396 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:22:21.998401 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 29 12:22:21.998407 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:22:21.998412 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:22:21.998418 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 29 12:22:21.998424 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 29 12:22:21.998430 kernel: smp: Brought up 1 node, 16 CPUs Jan 29 12:22:21.998435 kernel: smpboot: Max logical packages: 1 Jan 29 12:22:21.998440 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 29 12:22:21.998446 kernel: devtmpfs: initialized Jan 29 12:22:21.998451 kernel: x86/mm: Memory block size: 128MB Jan 29 12:22:21.998456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Jan 29 12:22:21.998462 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 29 12:22:21.998468 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:22:21.998474 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 29 12:22:21.998479 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:22:21.998484 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:22:21.998490 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:22:21.998495 kernel: audit: type=2000 audit(1738153336.039:1): state=initialized audit_enabled=0 res=1 Jan 29 12:22:21.998500 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:22:21.998506 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:22:21.998511 kernel: cpuidle: using governor menu Jan 29 12:22:21.998517 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:22:21.998523 kernel: dca service started, version 1.12.1 Jan 29 12:22:21.998528 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 29 12:22:21.998533 kernel: PCI: Using configuration type 1 for base access Jan 29 12:22:21.998541 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 29 12:22:21.998546 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 12:22:21.998552 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:22:21.998557 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:22:21.998564 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:22:21.998569 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:22:21.998574 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:22:21.998580 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:22:21.998585 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:22:21.998590 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:22:21.998596 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 29 12:22:21.998601 kernel: ACPI: Dynamic OEM Table Load: Jan 29 12:22:21.998606 kernel: ACPI: SSDT 0xFFFF9B8F01601000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 29 12:22:21.998613 kernel: ACPI: Dynamic OEM Table Load: Jan 29 12:22:21.998618 kernel: ACPI: SSDT 0xFFFF9B8F015FF800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 29 12:22:21.998623 kernel: ACPI: Dynamic OEM Table Load: Jan 29 12:22:21.998629 kernel: ACPI: SSDT 0xFFFF9B8F015E4700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 29 12:22:21.998634 kernel: ACPI: Dynamic OEM Table Load: Jan 29 12:22:21.998639 kernel: ACPI: SSDT 0xFFFF9B8F015F8000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 29 12:22:21.998644 kernel: ACPI: Dynamic OEM Table Load: Jan 29 12:22:21.998650 kernel: ACPI: SSDT 0xFFFF9B8F01608000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 29 12:22:21.998655 kernel: ACPI: Dynamic OEM Table Load: Jan 29 12:22:21.998660 kernel: ACPI: SSDT 0xFFFF9B8F01606400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 29 12:22:21.998667 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 29 12:22:21.998672 kernel: ACPI: Interpreter enabled Jan 29 12:22:21.998677 kernel: ACPI: PM: (supports S0 S5) Jan 29 12:22:21.998683 kernel: ACPI: Using IOAPIC 
for interrupt routing Jan 29 12:22:21.998688 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 29 12:22:21.998693 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 29 12:22:21.998698 kernel: HEST: Table parsing has been initialized. Jan 29 12:22:21.998704 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 29 12:22:21.998709 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:22:21.998715 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:22:21.998721 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 29 12:22:21.998726 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 29 12:22:21.998732 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 29 12:22:21.998737 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 29 12:22:21.998743 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 29 12:22:21.998748 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 29 12:22:21.998753 kernel: ACPI: \_TZ_.FN00: New power resource Jan 29 12:22:21.998759 kernel: ACPI: \_TZ_.FN01: New power resource Jan 29 12:22:21.998765 kernel: ACPI: \_TZ_.FN02: New power resource Jan 29 12:22:21.998770 kernel: ACPI: \_TZ_.FN03: New power resource Jan 29 12:22:21.998776 kernel: ACPI: \_TZ_.FN04: New power resource Jan 29 12:22:21.998781 kernel: ACPI: \PIN_: New power resource Jan 29 12:22:21.998786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 29 12:22:21.998857 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:22:21.998911 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 29 12:22:21.998959 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 29 12:22:21.998969 kernel: PCI host bridge to bus 0000:00 Jan 29 12:22:21.999018 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 29 12:22:21.999062 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:22:21.999105 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:22:21.999147 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 29 12:22:21.999190 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 29 12:22:21.999232 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 29 12:22:21.999295 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 29 12:22:21.999352 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 29 12:22:21.999401 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 29 12:22:21.999454 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 29 12:22:21.999503 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 29 12:22:21.999559 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 29 12:22:21.999611 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 29 12:22:21.999664 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 29 12:22:21.999734 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 29 12:22:21.999808 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 29 12:22:21.999862 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 29 12:22:21.999911 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 29 12:22:21.999961 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 29 12:22:22.000013 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 29 12:22:22.000062 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 12:22:22.000114 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 29 12:22:22.000164 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 
12:22:22.000215 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 29 12:22:22.000266 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 29 12:22:22.000315 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 29 12:22:22.000374 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 29 12:22:22.000425 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 29 12:22:22.000473 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 29 12:22:22.000524 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 29 12:22:22.000578 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 29 12:22:22.000628 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 29 12:22:22.000679 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 29 12:22:22.000727 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 29 12:22:22.000775 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 29 12:22:22.000822 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 29 12:22:22.000870 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 29 12:22:22.000917 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 29 12:22:22.000969 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 29 12:22:22.001017 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 29 12:22:22.001071 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 29 12:22:22.001178 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.001234 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 29 12:22:22.001284 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.001336 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 29 12:22:22.001386 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.001438 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 
29 12:22:22.001489 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.001575 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 29 12:22:22.001627 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.001680 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 29 12:22:22.001729 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 12:22:22.001784 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 29 12:22:22.001836 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 29 12:22:22.001888 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 29 12:22:22.001936 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 29 12:22:22.001988 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 29 12:22:22.002036 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 29 12:22:22.002093 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 29 12:22:22.002143 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 29 12:22:22.002196 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 29 12:22:22.002246 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 29 12:22:22.002296 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 29 12:22:22.002347 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 29 12:22:22.002401 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 29 12:22:22.002452 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 29 12:22:22.002502 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 29 12:22:22.002558 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 29 12:22:22.002609 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 29 12:22:22.002659 kernel: 
pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 29 12:22:22.002710 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 12:22:22.002758 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 29 12:22:22.002807 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 12:22:22.002856 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 29 12:22:22.002912 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 29 12:22:22.002964 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 29 12:22:22.003015 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 29 12:22:22.003065 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 29 12:22:22.003115 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 29 12:22:22.003165 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.003215 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 29 12:22:22.003264 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 12:22:22.003315 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 29 12:22:22.003372 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 29 12:22:22.003423 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 29 12:22:22.003474 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 29 12:22:22.003524 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 29 12:22:22.003579 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 29 12:22:22.003630 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 29 12:22:22.003682 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 29 12:22:22.003732 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 12:22:22.003784 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 29 12:22:22.003834 kernel: pci 0000:00:1c.0: PCI 
bridge to [bus 05] Jan 29 12:22:22.003889 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 29 12:22:22.003940 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 29 12:22:22.003991 kernel: pci 0000:06:00.0: supports D1 D2 Jan 29 12:22:22.004042 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 12:22:22.004094 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 29 12:22:22.004143 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.004192 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.004244 kernel: pci_bus 0000:07: extended config space not accessible Jan 29 12:22:22.004302 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 29 12:22:22.004354 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 29 12:22:22.004407 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 29 12:22:22.004461 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 29 12:22:22.004514 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:22:22.004605 kernel: pci 0000:07:00.0: supports D1 D2 Jan 29 12:22:22.004657 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 12:22:22.004709 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 29 12:22:22.004759 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.004809 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.004819 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 29 12:22:22.004825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 29 12:22:22.004831 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 29 12:22:22.004837 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 29 12:22:22.004842 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 29 12:22:22.004848 kernel: ACPI: PCI: Interrupt link LNKF configured 
for IRQ 0 Jan 29 12:22:22.004854 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 29 12:22:22.004859 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 29 12:22:22.004865 kernel: iommu: Default domain type: Translated Jan 29 12:22:22.004872 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:22:22.004877 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:22:22.004883 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:22:22.004889 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 29 12:22:22.004894 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Jan 29 12:22:22.004900 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 29 12:22:22.004905 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 29 12:22:22.004911 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 29 12:22:22.004916 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 29 12:22:22.004969 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 29 12:22:22.005021 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 29 12:22:22.005074 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:22:22.005082 kernel: vgaarb: loaded Jan 29 12:22:22.005088 kernel: clocksource: Switched to clocksource tsc-early Jan 29 12:22:22.005094 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:22:22.005099 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:22:22.005105 kernel: pnp: PnP ACPI init Jan 29 12:22:22.005153 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 29 12:22:22.005206 kernel: pnp 00:02: [dma 0 disabled] Jan 29 12:22:22.005256 kernel: pnp 00:03: [dma 0 disabled] Jan 29 12:22:22.005305 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 29 12:22:22.005349 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 29 12:22:22.005397 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Jan 29 12:22:22.005445 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 29 12:22:22.005492 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 29 12:22:22.005540 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 29 12:22:22.005616 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 29 12:22:22.005664 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 29 12:22:22.005709 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 29 12:22:22.005754 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 29 12:22:22.005798 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 29 12:22:22.005849 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 29 12:22:22.005893 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 29 12:22:22.005938 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 29 12:22:22.005981 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 29 12:22:22.006026 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 29 12:22:22.006070 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 29 12:22:22.006116 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 29 12:22:22.006165 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 29 12:22:22.006174 kernel: pnp: PnP ACPI: found 10 devices Jan 29 12:22:22.006180 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:22:22.006186 kernel: NET: Registered PF_INET protocol family Jan 29 12:22:22.006191 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:22:22.006197 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 29 12:22:22.006203 kernel: Table-perturb 
hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:22:22.006209 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:22:22.006216 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 12:22:22.006222 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 29 12:22:22.006228 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 12:22:22.006233 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 12:22:22.006239 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:22:22.006245 kernel: NET: Registered PF_XDP protocol family Jan 29 12:22:22.006294 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 29 12:22:22.006342 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 29 12:22:22.006394 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 29 12:22:22.006446 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006495 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006565 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006629 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 12:22:22.006680 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 12:22:22.006729 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 29 12:22:22.006779 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 12:22:22.006830 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 29 12:22:22.006879 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 29 12:22:22.006927 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 12:22:22.006976 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 
29 12:22:22.007025 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 29 12:22:22.007076 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 12:22:22.007125 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 29 12:22:22.007174 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 29 12:22:22.007225 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 29 12:22:22.007273 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.007323 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.007372 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 29 12:22:22.007421 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 29 12:22:22.007470 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 29 12:22:22.007518 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 29 12:22:22.007598 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:22:22.007643 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:22:22.007685 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:22:22.007729 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 29 12:22:22.007771 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 29 12:22:22.007823 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 29 12:22:22.007871 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 12:22:22.007921 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 29 12:22:22.007966 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 29 12:22:22.008015 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 29 12:22:22.008060 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 29 12:22:22.008109 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 29 12:22:22.008157 
kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 29 12:22:22.008204 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 29 12:22:22.008251 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 29 12:22:22.008259 kernel: PCI: CLS 64 bytes, default 64 Jan 29 12:22:22.008265 kernel: DMAR: No ATSR found Jan 29 12:22:22.008271 kernel: DMAR: No SATC found Jan 29 12:22:22.008277 kernel: DMAR: dmar0: Using Queued invalidation Jan 29 12:22:22.008325 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 29 12:22:22.008378 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 29 12:22:22.008426 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 29 12:22:22.008475 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 29 12:22:22.008524 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 29 12:22:22.008608 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 29 12:22:22.008656 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 29 12:22:22.008704 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 29 12:22:22.008753 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 29 12:22:22.008801 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 29 12:22:22.008852 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 29 12:22:22.008900 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 29 12:22:22.008949 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 29 12:22:22.008997 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 29 12:22:22.009045 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 29 12:22:22.009093 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 29 12:22:22.009142 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 29 12:22:22.009190 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 29 12:22:22.009241 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 29 12:22:22.009290 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 29 12:22:22.009339 kernel: pci 0000:00:1f.5: Adding to iommu group 
14 Jan 29 12:22:22.009388 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 29 12:22:22.009438 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 29 12:22:22.009489 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 29 12:22:22.009540 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 29 12:22:22.009627 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 29 12:22:22.009680 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 29 12:22:22.009689 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 29 12:22:22.009695 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:22:22.009701 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 29 12:22:22.009706 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 29 12:22:22.009712 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 29 12:22:22.009718 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 29 12:22:22.009723 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 29 12:22:22.009776 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 29 12:22:22.009786 kernel: Initialise system trusted keyrings Jan 29 12:22:22.009792 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 29 12:22:22.009798 kernel: Key type asymmetric registered Jan 29 12:22:22.009803 kernel: Asymmetric key parser 'x509' registered Jan 29 12:22:22.009809 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:22:22.009815 kernel: io scheduler mq-deadline registered Jan 29 12:22:22.009820 kernel: io scheduler kyber registered Jan 29 12:22:22.009826 kernel: io scheduler bfq registered Jan 29 12:22:22.009874 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 29 12:22:22.009924 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 29 12:22:22.009972 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 
Jan 29 12:22:22.010022 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 29 12:22:22.010071 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 29 12:22:22.010119 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 29 12:22:22.010173 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 29 12:22:22.010183 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 29 12:22:22.010189 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 29 12:22:22.010195 kernel: pstore: Using crash dump compression: deflate Jan 29 12:22:22.010201 kernel: pstore: Registered erst as persistent store backend Jan 29 12:22:22.010207 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:22:22.010212 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:22:22.010218 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:22:22.010224 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 12:22:22.010230 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 29 12:22:22.010279 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 29 12:22:22.010287 kernel: i8042: PNP: No PS/2 controller found. 
Jan 29 12:22:22.010331 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 29 12:22:22.010376 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 29 12:22:22.010421 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-29T12:22:20 UTC (1738153340) Jan 29 12:22:22.010465 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 29 12:22:22.010473 kernel: intel_pstate: Intel P-state driver initializing Jan 29 12:22:22.010481 kernel: intel_pstate: Disabling energy efficiency optimization Jan 29 12:22:22.010487 kernel: intel_pstate: HWP enabled Jan 29 12:22:22.010492 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 29 12:22:22.010498 kernel: vesafb: scrolling: redraw Jan 29 12:22:22.010504 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 29 12:22:22.010510 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000d92ef0da, using 768k, total 768k Jan 29 12:22:22.010515 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 12:22:22.010521 kernel: fb0: VESA VGA frame buffer device Jan 29 12:22:22.010527 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:22:22.010533 kernel: Segment Routing with IPv6 Jan 29 12:22:22.010542 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:22:22.010548 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:22:22.010554 kernel: Key type dns_resolver registered Jan 29 12:22:22.010580 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 29 12:22:22.010585 kernel: IPI shorthand broadcast: enabled Jan 29 12:22:22.010607 kernel: sched_clock: Marking stable (2477424038, 1385639505)->(4406231711, -543168168) Jan 29 12:22:22.010613 kernel: registered taskstats version 1 Jan 29 12:22:22.010618 kernel: Loading compiled-in X.509 certificates Jan 29 12:22:22.010624 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:22:22.010631 kernel: Key type .fscrypt registered Jan 29 12:22:22.010636 kernel: Key type fscrypt-provisioning registered Jan 29 12:22:22.010642 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:22:22.010648 kernel: ima: No architecture policies found Jan 29 12:22:22.010653 kernel: clk: Disabling unused clocks Jan 29 12:22:22.010659 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:22:22.010665 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:22:22.010670 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:22:22.010677 kernel: Run /init as init process Jan 29 12:22:22.010683 kernel: with arguments: Jan 29 12:22:22.010689 kernel: /init Jan 29 12:22:22.010694 kernel: with environment: Jan 29 12:22:22.010700 kernel: HOME=/ Jan 29 12:22:22.010705 kernel: TERM=linux Jan 29 12:22:22.010711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:22:22.010718 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:22:22.010726 systemd[1]: Detected architecture x86-64. Jan 29 12:22:22.010732 systemd[1]: Running in initrd. Jan 29 12:22:22.010738 systemd[1]: No hostname configured, using default hostname. Jan 29 12:22:22.010743 systemd[1]: Hostname set to . 
Jan 29 12:22:22.010749 systemd[1]: Initializing machine ID from random generator. Jan 29 12:22:22.010755 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:22:22.010761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:22:22.010767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:22:22.010774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:22:22.010780 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:22:22.010786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:22:22.010792 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:22:22.010799 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:22:22.010805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:22:22.010811 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 29 12:22:22.010818 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 29 12:22:22.010823 kernel: clocksource: Switched to clocksource tsc Jan 29 12:22:22.010829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:22:22.010835 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:22:22.010841 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:22:22.010847 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:22:22.010853 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:22:22.010859 systemd[1]: Reached target timers.target - Timer Units. 
Jan 29 12:22:22.010866 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:22:22.010872 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:22:22.010878 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:22:22.010884 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:22:22.010890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:22:22.010896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:22:22.010902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:22:22.010908 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:22:22.010913 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:22:22.010921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:22:22.010927 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:22:22.010932 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:22:22.010938 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:22:22.010955 systemd-journald[264]: Collecting audit messages is disabled. Jan 29 12:22:22.010970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:22:22.010976 systemd-journald[264]: Journal started Jan 29 12:22:22.010990 systemd-journald[264]: Runtime Journal (/run/log/journal/df7db472609545f3897095c12bdb6813) is 8.0M, max 639.9M, 631.9M free. Jan 29 12:22:22.033447 systemd-modules-load[265]: Inserted module 'overlay' Jan 29 12:22:22.061647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:22:22.107571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 29 12:22:22.107588 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:22:22.126466 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:22:22.126568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:22:22.126655 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:22:22.127593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:22:22.144366 systemd-modules-load[265]: Inserted module 'br_netfilter' Jan 29 12:22:22.144540 kernel: Bridge firewalling registered Jan 29 12:22:22.144806 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:22:22.214102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:22:22.234352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:22.263920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:22:22.274928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:22:22.318782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:22:22.319222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:22:22.349743 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:22:22.358141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:22:22.359884 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:22:22.363945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:22:22.370765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 12:22:22.379297 systemd-resolved[295]: Positive Trust Anchors: Jan 29 12:22:22.379304 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:22:22.379329 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:22:22.380921 systemd-resolved[295]: Defaulting to hostname 'linux'. Jan 29 12:22:22.382851 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:22:22.404756 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:22:22.424742 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:22:22.534767 dracut-cmdline[307]: dracut-dracut-053 Jan 29 12:22:22.541834 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:22:22.738568 kernel: SCSI subsystem initialized Jan 29 12:22:22.761543 kernel: Loading iSCSI transport class v2.0-870. 
Jan 29 12:22:22.784574 kernel: iscsi: registered transport (tcp) Jan 29 12:22:22.816864 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:22:22.816882 kernel: QLogic iSCSI HBA Driver Jan 29 12:22:22.850800 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:22:22.872832 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:22:22.929582 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:22:22.929595 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:22:22.949432 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:22:23.009618 kernel: raid6: avx2x4 gen() 51398 MB/s Jan 29 12:22:23.041577 kernel: raid6: avx2x2 gen() 52407 MB/s Jan 29 12:22:23.078157 kernel: raid6: avx2x1 gen() 44460 MB/s Jan 29 12:22:23.078175 kernel: raid6: using algorithm avx2x2 gen() 52407 MB/s Jan 29 12:22:23.126149 kernel: raid6: .... xor() 30825 MB/s, rmw enabled Jan 29 12:22:23.126167 kernel: raid6: using avx2x2 recovery algorithm Jan 29 12:22:23.167570 kernel: xor: automatically using best checksumming function avx Jan 29 12:22:23.284602 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:22:23.289844 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:22:23.321811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:22:23.328592 systemd-udevd[493]: Using default interface naming scheme 'v255'. Jan 29 12:22:23.333683 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:22:23.365732 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:22:23.414849 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Jan 29 12:22:23.432243 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 29 12:22:23.456833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:22:23.516321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:22:23.561147 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 12:22:23.561163 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 12:22:23.531670 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:22:23.577542 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:22:23.577371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:22:23.610478 kernel: PTP clock support registered Jan 29 12:22:23.610499 kernel: ACPI: bus type USB registered Jan 29 12:22:23.610517 kernel: usbcore: registered new interface driver usbfs Jan 29 12:22:23.577528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:22:23.655646 kernel: usbcore: registered new interface driver hub Jan 29 12:22:23.655661 kernel: usbcore: registered new device driver usb Jan 29 12:22:23.655602 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:22:23.699895 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:22:23.699911 kernel: libata version 3.00 loaded. Jan 29 12:22:23.699919 kernel: AES CTR mode by8 optimization enabled Jan 29 12:22:23.699927 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 29 12:22:23.655630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:22:23.724966 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 29 12:22:23.655757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 12:22:24.395873 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 12:22:24.396047 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 29 12:22:24.396119 kernel: pps pps0: new PPS source ptp0 Jan 29 12:22:24.396183 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 29 12:22:24.396246 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 29 12:22:24.396313 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 12:22:24.396373 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 12:22:24.396433 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 29 12:22:24.396493 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:54 Jan 29 12:22:24.396611 kernel: ahci 0000:00:17.0: version 3.0 Jan 29 12:22:24.396676 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 29 12:22:24.396735 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 29 12:22:24.396794 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 29 12:22:24.396852 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 29 12:22:24.396912 kernel: hub 1-0:1.0: USB hub found Jan 29 12:22:24.396978 kernel: scsi host0: ahci Jan 29 12:22:24.397040 kernel: scsi host1: ahci Jan 29 12:22:24.397101 kernel: scsi host2: ahci Jan 29 12:22:24.397158 kernel: scsi host3: ahci Jan 29 12:22:24.397214 kernel: scsi host4: ahci Jan 29 12:22:24.397272 kernel: scsi host5: ahci Jan 29 12:22:24.397328 kernel: scsi host6: ahci Jan 29 12:22:24.397388 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 133 Jan 29 12:22:24.397397 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 133 Jan 29 12:22:24.397405 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 133 Jan 29 12:22:24.397412 kernel: ata4: SATA max UDMA/133 
abar m2048@0x95516000 port 0x95516280 irq 133 Jan 29 12:22:24.397419 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 133 Jan 29 12:22:24.397426 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 133 Jan 29 12:22:24.397433 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 133 Jan 29 12:22:24.397440 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 29 12:22:24.397505 kernel: pps pps1: new PPS source ptp1 Jan 29 12:22:24.397589 kernel: hub 1-0:1.0: 16 ports detected Jan 29 12:22:24.397668 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 29 12:22:24.397733 kernel: hub 2-0:1.0: USB hub found Jan 29 12:22:24.397796 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 12:22:24.397858 kernel: hub 2-0:1.0: 10 ports detected Jan 29 12:22:24.397915 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:55 Jan 29 12:22:24.397978 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 29 12:22:24.398047 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 29 12:22:24.398109 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.398118 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 29 12:22:24.398179 kernel: hub 1-14:1.0: USB hub found Jan 29 12:22:24.398243 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 12:22:24.398252 kernel: hub 1-14:1.0: 4 ports detected Jan 29 12:22:24.398312 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.398320 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 12:22:24.398328 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 12:22:24.398335 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:23.755691 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 12:22:24.455584 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 12:22:24.455600 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.455609 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 12:22:24.471573 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 12:22:24.471591 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 12:22:24.497309 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Jan 29 12:22:25.087809 kernel: ata1.00: Features: NCQ-prio Jan 29 12:22:25.087825 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 12:22:25.087915 kernel: ata2.00: Features: NCQ-prio Jan 29 12:22:25.087929 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 29 12:22:25.088051 kernel: ata1.00: configured for UDMA/133 Jan 29 12:22:25.088062 kernel: ata2.00: configured for UDMA/133 Jan 29 12:22:25.088071 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 12:22:25.088151 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 12:22:25.088224 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 29 12:22:25.088302 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 12:22:25.088315 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 29 12:22:25.088390 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.088400 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 12:22:25.088470 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 12:22:25.088542 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 29 12:22:25.088657 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 12:22:25.088727 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 29 12:22:25.088794 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 
29 12:22:25.088863 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 29 12:22:25.088932 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 12:22:25.088943 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 12:22:25.089008 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 12:22:25.089019 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 29 12:22:25.089086 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 12:22:25.089160 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 29 12:22:25.089227 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 29 12:22:25.089302 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 29 12:22:25.089371 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 12:22:25.089439 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 29 12:22:25.089507 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.089517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:22:25.089527 kernel: GPT:9289727 != 937703087 Jan 29 12:22:25.089538 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:22:25.089548 kernel: GPT:9289727 != 937703087 Jan 29 12:22:25.089559 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 29 12:22:25.089597 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:25.089606 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 29 12:22:25.089688 kernel: usbcore: registered new interface driver usbhid Jan 29 12:22:25.089698 kernel: usbhid: USB HID core driver Jan 29 12:22:25.089707 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (582) Jan 29 12:22:25.089717 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 12:22:25.089789 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (585) Jan 29 12:22:25.089801 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Jan 29 12:22:25.776590 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 29 12:22:25.776634 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 12:22:25.776920 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 29 12:22:25.777177 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 29 12:22:25.777205 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 29 12:22:25.777455 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.777514 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:25.777573 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.777617 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:25.777658 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:25.777700 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 12:22:25.778027 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 
12:22:25.778054 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 29 12:22:25.778284 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 12:22:24.509770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:22:25.813032 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 29 12:22:25.813118 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 29 12:22:24.561914 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:22:24.603708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:22:24.619511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:22:24.619548 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:22:24.644688 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:22:24.945800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:22:25.886682 disk-uuid[704]: Primary Header is updated. Jan 29 12:22:25.886682 disk-uuid[704]: Secondary Entries is updated. Jan 29 12:22:25.886682 disk-uuid[704]: Secondary Header is updated. Jan 29 12:22:24.945849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:25.080694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:22:25.213757 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:22:25.246966 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 29 12:22:25.283308 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 29 12:22:25.311742 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. 
Jan 29 12:22:25.333572 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 29 12:22:25.348319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 29 12:22:25.359733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:25.380613 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:22:25.397060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:22:25.428970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:22:26.481304 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 12:22:26.502325 disk-uuid[705]: The operation has completed successfully. Jan 29 12:22:26.510756 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 29 12:22:26.534498 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:22:26.534550 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:22:26.583810 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:22:26.609715 sh[745]: Success Jan 29 12:22:26.618623 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:22:26.655843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:22:26.675421 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:22:26.697891 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 12:22:26.739987 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:22:26.740007 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:22:26.762394 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:22:26.782412 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:22:26.801360 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:22:26.840583 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 12:22:26.842827 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:22:26.853036 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:22:26.865878 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:22:26.914558 kernel: BTRFS info (device sdb6): first mount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7 Jan 29 12:22:26.914609 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:22:26.914617 kernel: BTRFS info (device sdb6): using free space tree Jan 29 12:22:26.922073 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:22:26.992452 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 29 12:22:26.992463 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 29 12:22:27.016581 kernel: BTRFS info (device sdb6): last unmount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7 Jan 29 12:22:27.024155 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:22:27.043732 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:22:27.070586 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 29 12:22:27.094688 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:22:27.101853 ignition[833]: Ignition 2.19.0 Jan 29 12:22:27.101859 ignition[833]: Stage: fetch-offline Jan 29 12:22:27.104034 unknown[833]: fetched base config from "system" Jan 29 12:22:27.101877 ignition[833]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:22:27.104038 unknown[833]: fetched user config from "system" Jan 29 12:22:27.101883 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 12:22:27.104905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:22:27.101939 ignition[833]: parsed url from cmdline: "" Jan 29 12:22:27.136792 systemd-networkd[929]: lo: Link UP Jan 29 12:22:27.101941 ignition[833]: no config URL provided Jan 29 12:22:27.136794 systemd-networkd[929]: lo: Gained carrier Jan 29 12:22:27.101944 ignition[833]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:22:27.139836 systemd-networkd[929]: Enumeration completed Jan 29 12:22:27.101966 ignition[833]: parsing config with SHA512: e68022bdebc88343ee1a65c5fbcf48196ea4c9ef41629399b60951a22552c1e848ec03eafd5b5351a51bb888c3077b1b135b27bb0d5be38c0b0085898d8132d7 Jan 29 12:22:27.139900 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:22:27.104253 ignition[833]: fetch-offline: fetch-offline passed Jan 29 12:22:27.140688 systemd-networkd[929]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:22:27.104256 ignition[833]: POST message to Packet Timeline Jan 29 12:22:27.158821 systemd[1]: Reached target network.target - Network. Jan 29 12:22:27.104259 ignition[833]: POST Status error: resource requires networking Jan 29 12:22:27.168749 systemd-networkd[929]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 12:22:27.104295 ignition[833]: Ignition finished successfully
Jan 29 12:22:27.174685 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 12:22:27.214933 ignition[943]: Ignition 2.19.0
Jan 29 12:22:27.187766 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:22:27.214949 ignition[943]: Stage: kargs
Jan 29 12:22:27.198787 systemd-networkd[929]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:22:27.388664 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 29 12:22:27.215350 ignition[943]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:22:27.377956 systemd-networkd[929]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:22:27.215377 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 29 12:22:27.216997 ignition[943]: kargs: kargs passed
Jan 29 12:22:27.217005 ignition[943]: POST message to Packet Timeline
Jan 29 12:22:27.217028 ignition[943]: GET https://metadata.packet.net/metadata: attempt #1
Jan 29 12:22:27.218279 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41932->[::1]:53: read: connection refused
Jan 29 12:22:27.418652 ignition[943]: GET https://metadata.packet.net/metadata: attempt #2
Jan 29 12:22:27.418929 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37451->[::1]:53: read: connection refused
Jan 29 12:22:27.569664 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 29 12:22:27.570380 systemd-networkd[929]: eno1: Link UP
Jan 29 12:22:27.570591 systemd-networkd[929]: eno2: Link UP
Jan 29 12:22:27.570747 systemd-networkd[929]: enp1s0f0np0: Link UP
Jan 29 12:22:27.570919 systemd-networkd[929]: enp1s0f0np0: Gained carrier
Jan 29 12:22:27.583791 systemd-networkd[929]: enp1s0f1np1: Link UP
Jan 29 12:22:27.619727 systemd-networkd[929]: enp1s0f0np0: DHCPv4 address 139.178.70.85/31, gateway 139.178.70.84 acquired from 145.40.83.140
Jan 29 12:22:27.819349 ignition[943]: GET https://metadata.packet.net/metadata: attempt #3
Jan 29 12:22:27.820502 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34631->[::1]:53: read: connection refused
Jan 29 12:22:28.432238 systemd-networkd[929]: enp1s0f1np1: Gained carrier
Jan 29 12:22:28.621035 ignition[943]: GET https://metadata.packet.net/metadata: attempt #4
Jan 29 12:22:28.622179 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35481->[::1]:53: read: connection refused
Jan 29 12:22:28.752160 systemd-networkd[929]: enp1s0f0np0: Gained IPv6LL
Jan 29 12:22:30.160157 systemd-networkd[929]: enp1s0f1np1: Gained IPv6LL
Jan 29 12:22:30.223782 ignition[943]: GET https://metadata.packet.net/metadata: attempt #5
Jan 29 12:22:30.224814 ignition[943]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40879->[::1]:53: read: connection refused
Jan 29 12:22:33.427575 ignition[943]: GET https://metadata.packet.net/metadata: attempt #6
Jan 29 12:22:35.349860 ignition[943]: GET result: OK
Jan 29 12:22:35.717435 ignition[943]: Ignition finished successfully
Jan 29 12:22:35.721429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:22:35.756141 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:22:35.784445 ignition[961]: Ignition 2.19.0
Jan 29 12:22:35.784457 ignition[961]: Stage: disks
Jan 29 12:22:35.784743 ignition[961]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:22:35.784760 ignition[961]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 29 12:22:35.786154 ignition[961]: disks: disks passed
Jan 29 12:22:35.786161 ignition[961]: POST message to Packet Timeline
Jan 29 12:22:35.786181 ignition[961]: GET https://metadata.packet.net/metadata: attempt #1
Jan 29 12:22:37.165893 ignition[961]: GET result: OK
Jan 29 12:22:37.981378 ignition[961]: Ignition finished successfully
Jan 29 12:22:37.984844 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:22:38.000882 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:22:38.018822 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:22:38.039896 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:22:38.060883 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:22:38.080876 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:22:38.110016 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:22:38.134530 systemd-fsck[981]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 12:22:38.144950 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:22:38.145518 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:22:38.271356 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:22:38.286790 kernel: EXT4-fs (sdb9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 12:22:38.271616 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:22:38.308797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:22:38.317975 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:22:38.442837 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (990)
Jan 29 12:22:38.442853 kernel: BTRFS info (device sdb6): first mount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7
Jan 29 12:22:38.442862 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:22:38.442869 kernel: BTRFS info (device sdb6): using free space tree
Jan 29 12:22:38.442876 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 29 12:22:38.442883 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 29 12:22:38.340248 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 12:22:38.474677 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 29 12:22:38.485657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:22:38.526765 coreos-metadata[1008]: Jan 29 12:22:38.500 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 29 12:22:38.546728 coreos-metadata[992]: Jan 29 12:22:38.500 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 29 12:22:38.485676 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:22:38.509765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:22:38.534884 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:22:38.570073 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:22:38.609609 initrd-setup-root[1022]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:22:38.619613 initrd-setup-root[1029]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:22:38.629656 initrd-setup-root[1036]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:22:38.639653 initrd-setup-root[1043]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:22:38.654849 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:22:38.673803 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:22:38.700292 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:22:38.717750 kernel: BTRFS info (device sdb6): last unmount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7
Jan 29 12:22:38.710392 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:22:38.725739 ignition[1110]: INFO : Ignition 2.19.0
Jan 29 12:22:38.725739 ignition[1110]: INFO : Stage: mount
Jan 29 12:22:38.725739 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:22:38.725739 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 29 12:22:38.725739 ignition[1110]: INFO : mount: mount passed
Jan 29 12:22:38.725739 ignition[1110]: INFO : POST message to Packet Timeline
Jan 29 12:22:38.725739 ignition[1110]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 29 12:22:38.736795 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:22:39.625157 coreos-metadata[992]: Jan 29 12:22:39.625 INFO Fetch successful
Jan 29 12:22:39.656605 ignition[1110]: INFO : GET result: OK
Jan 29 12:22:39.682260 coreos-metadata[992]: Jan 29 12:22:39.682 INFO wrote hostname ci-4081.3.0-a-eb3371d08a to /sysroot/etc/hostname
Jan 29 12:22:39.683441 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 12:22:40.139037 ignition[1110]: INFO : Ignition finished successfully
Jan 29 12:22:40.142119 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:22:40.304629 coreos-metadata[1008]: Jan 29 12:22:40.304 INFO Fetch successful
Jan 29 12:22:40.382065 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 29 12:22:40.382130 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 29 12:22:40.419767 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:22:40.430084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:22:40.484549 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1134)
Jan 29 12:22:40.484594 kernel: BTRFS info (device sdb6): first mount of filesystem 9d466b86-c2df-4708-b519-b57ad5c10cf7
Jan 29 12:22:40.514560 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:22:40.532939 kernel: BTRFS info (device sdb6): using free space tree
Jan 29 12:22:40.572574 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Jan 29 12:22:40.572590 kernel: BTRFS info (device sdb6): auto enabling async discard
Jan 29 12:22:40.585941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:22:40.614135 ignition[1151]: INFO : Ignition 2.19.0
Jan 29 12:22:40.614135 ignition[1151]: INFO : Stage: files
Jan 29 12:22:40.629809 ignition[1151]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:22:40.629809 ignition[1151]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 29 12:22:40.629809 ignition[1151]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:22:40.629809 ignition[1151]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:22:40.629809 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 12:22:40.618232 unknown[1151]: wrote ssh authorized keys file for user: core
Jan 29 12:22:40.764725 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 12:22:40.847583 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:22:40.847583 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:40.880848 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 12:22:41.322897 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 12:22:41.467343 ignition[1151]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 12:22:41.467343 ignition[1151]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:22:41.497862 ignition[1151]: INFO : files: files passed
Jan 29 12:22:41.497862 ignition[1151]: INFO : POST message to Packet Timeline
Jan 29 12:22:41.497862 ignition[1151]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 29 12:22:43.275573 ignition[1151]: INFO : GET result: OK
Jan 29 12:22:43.646053 ignition[1151]: INFO : Ignition finished successfully
Jan 29 12:22:43.648719 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:22:43.678884 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:22:43.689224 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:22:43.699020 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:22:43.699078 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:22:43.741323 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:22:43.758038 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:22:43.788811 initrd-setup-root-after-ignition[1190]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:22:43.788811 initrd-setup-root-after-ignition[1190]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:22:43.802899 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:22:43.790895 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:22:43.888376 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:22:43.888525 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:22:43.909216 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:22:43.930804 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:22:43.950985 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:22:43.960911 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:22:44.041455 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:22:44.070885 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:22:44.076208 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:22:44.103882 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:22:44.125041 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:22:44.135417 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:22:44.135852 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:22:44.181035 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:22:44.191266 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:22:44.201438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:22:44.227268 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:22:44.249270 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:22:44.259438 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:22:44.278435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:22:44.295477 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:22:44.327284 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:22:44.337417 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:22:44.362146 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:22:44.362587 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:22:44.387269 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:22:44.397445 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:22:44.427114 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:22:44.427574 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:22:44.449139 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:22:44.449565 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:22:44.487933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:22:44.488376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:22:44.509475 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:22:44.527126 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:22:44.530752 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:22:44.549275 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:22:44.557531 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:22:44.574388 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:22:44.574729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:22:44.604276 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:22:44.604608 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:22:44.622340 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:22:44.622774 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:22:44.641339 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:22:44.641749 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:22:44.660337 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 12:22:44.761651 ignition[1215]: INFO : Ignition 2.19.0
Jan 29 12:22:44.761651 ignition[1215]: INFO : Stage: umount
Jan 29 12:22:44.761651 ignition[1215]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:22:44.761651 ignition[1215]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 29 12:22:44.761651 ignition[1215]: INFO : umount: umount passed
Jan 29 12:22:44.761651 ignition[1215]: INFO : POST message to Packet Timeline
Jan 29 12:22:44.761651 ignition[1215]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 29 12:22:44.660767 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 12:22:44.688659 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:22:44.724688 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:22:44.724859 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:22:44.760802 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:22:44.769614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:22:44.769766 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:22:44.793962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:22:44.794069 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:22:44.857635 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:22:44.859679 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:22:44.859932 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:22:44.874744 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:22:44.874998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:22:45.843985 ignition[1215]: INFO : GET result: OK
Jan 29 12:22:46.187034 ignition[1215]: INFO : Ignition finished successfully
Jan 29 12:22:46.190256 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:22:46.190567 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:22:46.207916 systemd[1]: Stopped target network.target - Network.
Jan 29 12:22:46.222785 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:22:46.223054 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:22:46.240972 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:22:46.241114 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:22:46.259042 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:22:46.259200 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:22:46.267208 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:22:46.267368 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:22:46.294032 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:22:46.294201 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:22:46.312397 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:22:46.327693 systemd-networkd[929]: enp1s0f0np0: DHCPv6 lease lost
Jan 29 12:22:46.330024 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:22:46.341766 systemd-networkd[929]: enp1s0f1np1: DHCPv6 lease lost
Jan 29 12:22:46.348507 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:22:46.348846 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:22:46.368247 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:22:46.368672 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:22:46.388616 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:22:46.388735 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:22:46.421752 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:22:46.445693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:22:46.445735 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:22:46.464866 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:22:46.464960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:22:46.484956 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:22:46.485141 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:22:46.502945 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:22:46.503125 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:22:46.522209 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:22:46.543923 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:22:46.544336 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:22:46.584675 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:22:46.584822 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:22:46.600988 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:22:46.601089 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:22:46.621902 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:22:46.622044 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:22:46.660737 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:22:46.660992 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:22:46.690976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:22:46.691220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:22:46.749676 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:22:46.772727 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:22:46.772977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:22:46.976717 systemd-journald[264]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:22:46.793845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:22:46.794013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:22:46.815903 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:22:46.816176 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:22:46.849882 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:22:46.850210 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:22:46.865869 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:22:46.898631 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:22:46.920923 systemd[1]: Switching root.
Jan 29 12:22:47.060739 systemd-journald[264]: Journal stopped
Jan 29 12:22:49.673977 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 12:22:49.673992 kernel: SELinux: policy capability open_perms=1
Jan 29 12:22:49.674000 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 12:22:49.674006 kernel: SELinux: policy capability always_check_network=0
Jan 29 12:22:49.674012 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 12:22:49.674017 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 12:22:49.674023 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 12:22:49.674028 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 12:22:49.674034 kernel: audit: type=1403 audit(1738153367.271:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 12:22:49.674040 systemd[1]: Successfully loaded SELinux policy in 156.381ms.
Jan 29 12:22:49.674048 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.935ms.
Jan 29 12:22:49.674055 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:22:49.674061 systemd[1]: Detected architecture x86-64.
Jan 29 12:22:49.674067 systemd[1]: Detected first boot.
Jan 29 12:22:49.674073 systemd[1]: Hostname set to .
Jan 29 12:22:49.674083 systemd[1]: Initializing machine ID from random generator.
Jan 29 12:22:49.674089 zram_generator::config[1268]: No configuration found.
Jan 29 12:22:49.674096 systemd[1]: Populated /etc with preset unit settings.
Jan 29 12:22:49.674102 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 12:22:49.674108 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 12:22:49.674114 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 12:22:49.674121 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 12:22:49.674128 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 12:22:49.674134 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 12:22:49.674141 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 12:22:49.674148 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 12:22:49.674154 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 12:22:49.674161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 12:22:49.674167 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:22:49.674174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:22:49.674181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:22:49.674187 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 12:22:49.674193 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 12:22:49.674200 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 12:22:49.674206 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:22:49.674213 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jan 29 12:22:49.674219 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:22:49.674227 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 12:22:49.674233 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:22:49.674240 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:22:49.674248 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:22:49.674255 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:22:49.674261 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:22:49.674268 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:22:49.674275 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:22:49.674282 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:22:49.674289 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:22:49.674295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:22:49.674302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:22:49.674309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:22:49.674317 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:22:49.674324 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:22:49.674330 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:22:49.674337 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:22:49.674344 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:49.674351 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:22:49.674357 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:22:49.674365 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 29 12:22:49.674372 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:22:49.674379 systemd[1]: Reached target machines.target - Containers. Jan 29 12:22:49.674387 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:22:49.674394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:22:49.674400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:22:49.674407 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:22:49.674414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:22:49.674421 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:22:49.674428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:22:49.674435 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:22:49.674441 kernel: ACPI: bus type drm_connector registered Jan 29 12:22:49.674447 kernel: fuse: init (API version 7.39) Jan 29 12:22:49.674453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:22:49.674460 kernel: loop: module loaded Jan 29 12:22:49.674466 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:22:49.674473 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:22:49.674481 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:22:49.674488 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:22:49.674495 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 29 12:22:49.674501 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:22:49.674516 systemd-journald[1371]: Collecting audit messages is disabled. Jan 29 12:22:49.674533 systemd-journald[1371]: Journal started Jan 29 12:22:49.674550 systemd-journald[1371]: Runtime Journal (/run/log/journal/655f26c1e8ea427dbcdcd7fe733ccaa2) is 8.0M, max 639.9M, 631.9M free. Jan 29 12:22:47.777272 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:22:47.793269 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Jan 29 12:22:47.793607 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:22:49.703582 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:22:49.738680 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:22:49.772603 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:22:49.805557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:22:49.838754 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:22:49.838783 systemd[1]: Stopped verity-setup.service. Jan 29 12:22:49.899581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:49.920582 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:22:49.931114 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:22:49.940800 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:22:49.950805 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:22:49.960786 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:22:49.970773 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 29 12:22:49.980778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:22:49.990898 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:22:50.001970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:22:50.013245 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:22:50.013486 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:22:50.026495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:22:50.026928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:22:50.039470 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:22:50.039912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:22:50.051477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:22:50.051892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:22:50.064483 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:22:50.064901 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:22:50.076465 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:22:50.076871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:22:50.088470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:22:50.100445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:22:50.113439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:22:50.126445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:22:50.163574 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 29 12:22:50.185815 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:22:50.196328 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:22:50.205739 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:22:50.205758 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:22:50.216441 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:22:50.239827 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:22:50.252605 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:22:50.262808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:22:50.263615 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:22:50.274227 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:22:50.284670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:22:50.285275 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:22:50.290677 systemd-journald[1371]: Time spent on flushing to /var/log/journal/655f26c1e8ea427dbcdcd7fe733ccaa2 is 14.476ms for 1372 entries. Jan 29 12:22:50.290677 systemd-journald[1371]: System Journal (/var/log/journal/655f26c1e8ea427dbcdcd7fe733ccaa2) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:22:50.329027 systemd-journald[1371]: Received client request to flush runtime journal. Jan 29 12:22:50.302694 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 29 12:22:50.303328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:22:50.322391 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:22:50.334401 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:22:50.350403 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:22:50.358558 kernel: loop0: detected capacity change from 0 to 8 Jan 29 12:22:50.359492 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:22:50.383572 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:22:50.393749 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:22:50.404775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:22:50.415778 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:22:50.432759 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:22:50.443578 kernel: loop1: detected capacity change from 0 to 205544 Jan 29 12:22:50.453762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:22:50.463759 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:22:50.476429 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:22:50.511547 kernel: loop2: detected capacity change from 0 to 142488 Jan 29 12:22:50.513808 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:22:50.525335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:22:50.537147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 29 12:22:50.537599 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:22:50.549099 udevadm[1407]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:22:50.555786 systemd-tmpfiles[1421]: ACLs are not supported, ignoring. Jan 29 12:22:50.555796 systemd-tmpfiles[1421]: ACLs are not supported, ignoring. Jan 29 12:22:50.558484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:22:50.603567 kernel: loop3: detected capacity change from 0 to 140768 Jan 29 12:22:50.663382 ldconfig[1397]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:22:50.664611 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:22:50.682563 kernel: loop4: detected capacity change from 0 to 8 Jan 29 12:22:50.702573 kernel: loop5: detected capacity change from 0 to 205544 Jan 29 12:22:50.736877 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:22:50.746592 kernel: loop6: detected capacity change from 0 to 142488 Jan 29 12:22:50.776623 kernel: loop7: detected capacity change from 0 to 140768 Jan 29 12:22:50.777659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:22:50.790021 (sd-merge)[1427]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 29 12:22:50.790224 systemd-udevd[1430]: Using default interface naming scheme 'v255'. Jan 29 12:22:50.790258 (sd-merge)[1427]: Merged extensions into '/usr'. Jan 29 12:22:50.792396 systemd[1]: Reloading requested from client PID 1403 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:22:50.792402 systemd[1]: Reloading... Jan 29 12:22:50.828547 zram_generator::config[1517]: No configuration found. 
Jan 29 12:22:50.828602 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 29 12:22:50.828617 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1449) Jan 29 12:22:50.885547 kernel: ACPI: button: Sleep Button [SLPB] Jan 29 12:22:50.908842 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 12:22:50.940549 kernel: IPMI message handler: version 39.2 Jan 29 12:22:50.949546 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:22:50.949595 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:22:50.957743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:22:50.977600 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 29 12:22:51.043150 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 29 12:22:51.043307 kernel: ipmi device interface Jan 29 12:22:51.043327 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jan 29 12:22:51.033171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 29 12:22:51.058158 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jan 29 12:22:51.058323 systemd[1]: Reloading finished in 265 ms. 
Jan 29 12:22:51.058541 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 29 12:22:51.058651 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 29 12:22:51.126545 kernel: iTCO_vendor_support: vendor-support=0 Jan 29 12:22:51.126573 kernel: ipmi_si: IPMI System Interface driver Jan 29 12:22:51.187359 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 29 12:22:51.204292 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 29 12:22:51.204308 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 29 12:22:51.204327 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 29 12:22:51.273928 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 29 12:22:51.274018 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 29 12:22:51.274097 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 29 12:22:51.274111 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 29 12:22:51.321097 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jan 29 12:22:51.331359 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jan 29 12:22:51.357572 kernel: intel_rapl_common: Found RAPL domain package Jan 29 12:22:51.357601 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 29 12:22:51.357692 kernel: intel_rapl_common: Found RAPL domain core Jan 29 12:22:51.357711 kernel: intel_rapl_common: Found RAPL domain dram Jan 29 12:22:51.433623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:22:51.456137 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 29 12:22:51.466566 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jan 29 12:22:51.504668 systemd[1]: Starting ensure-sysext.service... Jan 29 12:22:51.512143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:22:51.528904 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:22:51.539125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:22:51.539695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:22:51.541128 systemd[1]: Reloading requested from client PID 1607 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:22:51.541135 systemd[1]: Reloading... Jan 29 12:22:51.566543 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 29 12:22:51.573548 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 29 12:22:51.573622 zram_generator::config[1640]: No configuration found. Jan 29 12:22:51.606483 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:22:51.606743 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:22:51.607270 systemd-tmpfiles[1611]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:22:51.607446 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. Jan 29 12:22:51.607483 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. Jan 29 12:22:51.609192 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:22:51.609195 systemd-tmpfiles[1611]: Skipping /boot Jan 29 12:22:51.613478 systemd-tmpfiles[1611]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 29 12:22:51.613481 systemd-tmpfiles[1611]: Skipping /boot Jan 29 12:22:51.647312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:22:51.701949 systemd[1]: Reloading finished in 160 ms. Jan 29 12:22:51.715012 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:22:51.741739 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:22:51.753739 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:22:51.764704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:22:51.795685 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:22:51.807645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:22:51.814618 augenrules[1721]: No rules Jan 29 12:22:51.820300 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:22:51.848053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:22:51.854810 lvm[1726]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:22:51.859696 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:22:51.871189 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:22:51.883423 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:22:51.894183 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:22:51.904766 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 29 12:22:51.915996 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:22:51.926902 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:22:51.938879 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:22:51.939995 systemd-networkd[1609]: lo: Link UP Jan 29 12:22:51.939998 systemd-networkd[1609]: lo: Gained carrier Jan 29 12:22:51.942358 systemd-networkd[1609]: bond0: netdev ready Jan 29 12:22:51.943259 systemd-networkd[1609]: Enumeration completed Jan 29 12:22:51.944513 systemd-networkd[1609]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:d5:7c.network. Jan 29 12:22:51.950812 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:22:51.964616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:22:51.974714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:51.974830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:22:51.976259 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:22:51.988271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:22:51.990148 lvm[1745]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:22:51.998250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:22:52.004576 systemd-resolved[1728]: Positive Trust Anchors: Jan 29 12:22:52.004583 systemd-resolved[1728]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:22:52.004609 systemd-resolved[1728]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:22:52.007221 systemd-resolved[1728]: Using system hostname 'ci-4081.3.0-a-eb3371d08a'. Jan 29 12:22:52.009454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:22:52.018802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:22:52.019517 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:22:52.031470 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:22:52.040638 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:22:52.040733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:52.041818 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:22:52.052986 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:22:52.064023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 12:22:52.064130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:22:52.075127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:22:52.075254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:22:52.088381 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:22:52.088599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:22:52.105927 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:22:52.115648 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 29 12:22:52.140349 systemd-networkd[1609]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:d5:7d.network. Jan 29 12:22:52.140549 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jan 29 12:22:52.140649 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:52.140899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:22:52.165783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:22:52.177246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:22:52.195812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:22:52.205668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:22:52.205794 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 29 12:22:52.205883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:52.206826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:22:52.206926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:22:52.219096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:22:52.219209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:22:52.231315 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:22:52.231485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:22:52.256295 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:52.256416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:22:52.265902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:22:52.276364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:22:52.287314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:22:52.304457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:22:52.308582 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 29 12:22:52.325728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:22:52.325848 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 29 12:22:52.325925 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:22:52.326819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:22:52.326925 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:22:52.330591 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jan 29 12:22:52.330581 systemd-networkd[1609]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 29 12:22:52.332158 systemd-networkd[1609]: enp1s0f0np0: Link UP Jan 29 12:22:52.332402 systemd-networkd[1609]: enp1s0f0np0: Gained carrier Jan 29 12:22:52.350952 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:22:52.353546 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 29 12:22:52.357292 systemd-networkd[1609]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:d5:7c.network. Jan 29 12:22:52.357502 systemd-networkd[1609]: enp1s0f1np1: Link UP Jan 29 12:22:52.357728 systemd-networkd[1609]: enp1s0f1np1: Gained carrier Jan 29 12:22:52.364041 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:22:52.364149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:22:52.367850 systemd-networkd[1609]: bond0: Link UP Jan 29 12:22:52.368097 systemd-networkd[1609]: bond0: Gained carrier Jan 29 12:22:52.374956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:22:52.375063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:22:52.386837 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:22:52.386907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:22:52.397550 systemd[1]: Finished ensure-sysext.service. 
Jan 29 12:22:52.407031 systemd[1]: Reached target network.target - Network. Jan 29 12:22:52.415639 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:22:52.426641 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:22:52.426671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:22:52.439727 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:22:52.463414 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 29 12:22:52.463432 kernel: bond0: active interface up! Jan 29 12:22:52.494724 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:22:52.505704 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:22:52.515680 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:22:52.526629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:22:52.537621 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:22:52.548605 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:22:52.548621 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:22:52.556604 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:22:52.566707 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:22:52.576668 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:22:52.596658 systemd[1]: Reached target timers.target - Timer Units. 
Jan 29 12:22:52.597541 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 29 12:22:52.605832 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:22:52.616297 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:22:52.626183 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:22:52.635908 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:22:52.645664 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:22:52.655625 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:22:52.663638 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:22:52.663654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:22:52.671652 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:22:52.682303 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:22:52.692230 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:22:52.701156 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:22:52.704331 coreos-metadata[1779]: Jan 29 12:22:52.704 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 12:22:52.710642 dbus-daemon[1780]: [system] SELinux support is enabled Jan 29 12:22:52.711297 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:22:52.713053 jq[1783]: false Jan 29 12:22:52.720616 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:22:52.721220 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 29 12:22:52.728419 extend-filesystems[1785]: Found loop4 Jan 29 12:22:52.728419 extend-filesystems[1785]: Found loop5 Jan 29 12:22:52.784709 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Jan 29 12:22:52.784731 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1446) Jan 29 12:22:52.784741 extend-filesystems[1785]: Found loop6 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found loop7 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sda Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb1 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb2 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb3 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found usr Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb4 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb6 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb7 Jan 29 12:22:52.784741 extend-filesystems[1785]: Found sdb9 Jan 29 12:22:52.784741 extend-filesystems[1785]: Checking size of /dev/sdb9 Jan 29 12:22:52.784741 extend-filesystems[1785]: Resized partition /dev/sdb9 Jan 29 12:22:52.731372 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:22:52.935780 extend-filesystems[1795]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:22:52.802647 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:22:52.808233 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:22:52.824024 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:22:52.861678 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... 
Jan 29 12:22:52.953919 sshd_keygen[1808]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:22:52.882972 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:22:52.954031 update_engine[1810]: I20250129 12:22:52.899930 1810 main.cc:92] Flatcar Update Engine starting Jan 29 12:22:52.954031 update_engine[1810]: I20250129 12:22:52.900681 1810 update_check_scheduler.cc:74] Next update check in 3m40s Jan 29 12:22:52.883400 systemd-logind[1805]: Watching system buttons on /dev/input/event3 (Power Button) Jan 29 12:22:52.954259 jq[1811]: true Jan 29 12:22:52.883410 systemd-logind[1805]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 12:22:52.883420 systemd-logind[1805]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 29 12:22:52.883552 systemd-logind[1805]: New seat seat0. Jan 29 12:22:52.892651 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:22:52.905259 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:22:52.927850 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:22:52.946911 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:22:52.972742 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:22:52.972839 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:22:52.973013 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:22:52.973103 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:22:52.983083 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:22:52.983169 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 12:22:52.993767 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:22:53.013401 jq[1823]: true Jan 29 12:22:53.014452 (ntainerd)[1825]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:22:53.017818 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 12:22:53.018771 tar[1820]: linux-amd64/helm Jan 29 12:22:53.022094 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:22:53.031848 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 29 12:22:53.031962 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 29 12:22:53.052783 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:22:53.060650 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:22:53.060780 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:22:53.070541 bash[1852]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:22:53.071704 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:22:53.071816 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:22:53.098752 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:22:53.110528 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:22:53.118741 locksmithd[1860]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:22:53.121859 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 29 12:22:53.121971 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:22:53.142807 systemd[1]: Starting sshkeys.service... Jan 29 12:22:53.150352 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:22:53.162502 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:22:53.174481 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:22:53.185959 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:22:53.192903 containerd[1825]: time="2025-01-29T12:22:53.192829041Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:22:53.197206 coreos-metadata[1874]: Jan 29 12:22:53.197 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 12:22:53.198557 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:22:53.206128 containerd[1825]: time="2025-01-29T12:22:53.206080129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:22:53.206941 containerd[1825]: time="2025-01-29T12:22:53.206898743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:22:53.206941 containerd[1825]: time="2025-01-29T12:22:53.206915114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:22:53.206941 containerd[1825]: time="2025-01-29T12:22:53.206924627Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 29 12:22:53.207050 containerd[1825]: time="2025-01-29T12:22:53.207014587Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:22:53.207050 containerd[1825]: time="2025-01-29T12:22:53.207025039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207087 containerd[1825]: time="2025-01-29T12:22:53.207059066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207087 containerd[1825]: time="2025-01-29T12:22:53.207067869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207193 containerd[1825]: time="2025-01-29T12:22:53.207159187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207193 containerd[1825]: time="2025-01-29T12:22:53.207168372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207193 containerd[1825]: time="2025-01-29T12:22:53.207175982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207193 containerd[1825]: time="2025-01-29T12:22:53.207181568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207251 containerd[1825]: time="2025-01-29T12:22:53.207223113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207358 containerd[1825]: time="2025-01-29T12:22:53.207350179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207411 containerd[1825]: time="2025-01-29T12:22:53.207402392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:22:53.207428 containerd[1825]: time="2025-01-29T12:22:53.207411120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:22:53.207463 containerd[1825]: time="2025-01-29T12:22:53.207455480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:22:53.207489 containerd[1825]: time="2025-01-29T12:22:53.207482349Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:22:53.207523 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 29 12:22:53.216808 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:22:53.221654 containerd[1825]: time="2025-01-29T12:22:53.221638049Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:22:53.221682 containerd[1825]: time="2025-01-29T12:22:53.221669188Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:22:53.221682 containerd[1825]: time="2025-01-29T12:22:53.221679652Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:22:53.221710 containerd[1825]: time="2025-01-29T12:22:53.221689191Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 12:22:53.221710 containerd[1825]: time="2025-01-29T12:22:53.221697362Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:22:53.221784 containerd[1825]: time="2025-01-29T12:22:53.221775939Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:22:53.221905 containerd[1825]: time="2025-01-29T12:22:53.221897405Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:22:53.221960 containerd[1825]: time="2025-01-29T12:22:53.221952999Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:22:53.221980 containerd[1825]: time="2025-01-29T12:22:53.221963812Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:22:53.221980 containerd[1825]: time="2025-01-29T12:22:53.221971255Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:22:53.222010 containerd[1825]: time="2025-01-29T12:22:53.221980284Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:22:53.222010 containerd[1825]: time="2025-01-29T12:22:53.221988041Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:22:53.222010 containerd[1825]: time="2025-01-29T12:22:53.221994875Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:22:53.222010 containerd[1825]: time="2025-01-29T12:22:53.222002856Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222011116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222018498Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222026088Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222032729Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222044155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222052167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222058983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222067 containerd[1825]: time="2025-01-29T12:22:53.222066697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222073884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222081172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222087614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222094861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222101790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222114781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222121889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222129178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222135928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222143926Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222154743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222161806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222174 containerd[1825]: time="2025-01-29T12:22:53.222167720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222191856Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222201914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222208254Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222215343Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222220771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222227463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222235439Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:22:53.222355 containerd[1825]: time="2025-01-29T12:22:53.222241692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:22:53.222467 containerd[1825]: time="2025-01-29T12:22:53.222397347Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:22:53.222467 containerd[1825]: time="2025-01-29T12:22:53.222431986Z" level=info msg="Connect containerd service" Jan 29 12:22:53.222467 containerd[1825]: time="2025-01-29T12:22:53.222449901Z" level=info msg="using legacy CRI server" Jan 29 12:22:53.222467 containerd[1825]: time="2025-01-29T12:22:53.222454436Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:22:53.222590 containerd[1825]: time="2025-01-29T12:22:53.222506062Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:22:53.222810 containerd[1825]: time="2025-01-29T12:22:53.222799655Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:22:53.222952 containerd[1825]: time="2025-01-29T12:22:53.222923459Z" level=info msg="Start subscribing containerd event" Jan 29 12:22:53.222974 containerd[1825]: time="2025-01-29T12:22:53.222957572Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 29 12:22:53.222974 containerd[1825]: time="2025-01-29T12:22:53.222966941Z" level=info msg="Start recovering state" Jan 29 12:22:53.223003 containerd[1825]: time="2025-01-29T12:22:53.222981688Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:22:53.223021 containerd[1825]: time="2025-01-29T12:22:53.223010624Z" level=info msg="Start event monitor" Jan 29 12:22:53.223042 containerd[1825]: time="2025-01-29T12:22:53.223022598Z" level=info msg="Start snapshots syncer" Jan 29 12:22:53.223042 containerd[1825]: time="2025-01-29T12:22:53.223030699Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:22:53.223042 containerd[1825]: time="2025-01-29T12:22:53.223036646Z" level=info msg="Start streaming server" Jan 29 12:22:53.223091 containerd[1825]: time="2025-01-29T12:22:53.223071775Z" level=info msg="containerd successfully booted in 0.030888s" Jan 29 12:22:53.225901 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:22:53.287677 tar[1820]: linux-amd64/LICENSE Jan 29 12:22:53.287723 tar[1820]: linux-amd64/README.md Jan 29 12:22:53.295543 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jan 29 12:22:53.318184 extend-filesystems[1795]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jan 29 12:22:53.318184 extend-filesystems[1795]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 29 12:22:53.318184 extend-filesystems[1795]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Jan 29 12:22:53.359577 extend-filesystems[1785]: Resized filesystem in /dev/sdb9 Jan 29 12:22:53.318675 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:22:53.318770 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:22:53.367815 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 29 12:22:53.455630 systemd-networkd[1609]: bond0: Gained IPv6LL Jan 29 12:22:53.456821 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:22:53.468320 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:22:53.486748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:22:53.497282 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:22:53.515653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:22:54.155218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:22:54.166132 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:22:54.585263 kubelet[1915]: E0129 12:22:54.585156 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:22:54.586310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:22:54.586386 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:22:55.023462 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 29 12:22:55.023645 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jan 29 12:22:55.837277 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:22:55.853884 systemd[1]: Started sshd@0-139.178.70.85:22-139.178.89.65:35844.service - OpenSSH per-connection server daemon (139.178.89.65:35844). 
Jan 29 12:22:55.898344 sshd[1938]: Accepted publickey for core from 139.178.89.65 port 35844 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:22:55.899808 sshd[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:22:55.905215 systemd-logind[1805]: New session 1 of user core. Jan 29 12:22:55.906146 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:22:55.924881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:22:55.938982 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:22:55.967682 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:22:55.999106 (systemd)[1942]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:22:56.110927 systemd[1942]: Queued start job for default target default.target. Jan 29 12:22:56.127040 systemd[1942]: Created slice app.slice - User Application Slice. Jan 29 12:22:56.127081 systemd[1942]: Reached target paths.target - Paths. Jan 29 12:22:56.127106 systemd[1942]: Reached target timers.target - Timers. Jan 29 12:22:56.128795 systemd[1942]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:22:56.137592 systemd[1942]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:22:56.137619 systemd[1942]: Reached target sockets.target - Sockets. Jan 29 12:22:56.137627 systemd[1942]: Reached target basic.target - Basic System. Jan 29 12:22:56.137648 systemd[1942]: Reached target default.target - Main User Target. Jan 29 12:22:56.137663 systemd[1942]: Startup finished in 121ms. Jan 29 12:22:56.137772 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:22:56.149701 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 29 12:22:56.224111 systemd[1]: Started sshd@1-139.178.70.85:22-139.178.89.65:35858.service - OpenSSH per-connection server daemon (139.178.89.65:35858). Jan 29 12:22:56.263739 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 35858 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:22:56.264446 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:22:56.267049 systemd-logind[1805]: New session 2 of user core. Jan 29 12:22:56.275751 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:22:56.337822 sshd[1954]: pam_unix(sshd:session): session closed for user core Jan 29 12:22:56.358023 systemd[1]: sshd@1-139.178.70.85:22-139.178.89.65:35858.service: Deactivated successfully. Jan 29 12:22:56.358714 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:22:56.359322 systemd-logind[1805]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:22:56.360031 systemd[1]: Started sshd@2-139.178.70.85:22-139.178.89.65:35868.service - OpenSSH per-connection server daemon (139.178.89.65:35868). Jan 29 12:22:56.372464 systemd-logind[1805]: Removed session 2. Jan 29 12:22:56.402473 sshd[1961]: Accepted publickey for core from 139.178.89.65 port 35868 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:22:56.403670 sshd[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:22:56.408237 systemd-logind[1805]: New session 3 of user core. Jan 29 12:22:56.420003 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:22:56.489727 sshd[1961]: pam_unix(sshd:session): session closed for user core Jan 29 12:22:56.491168 systemd[1]: sshd@2-139.178.70.85:22-139.178.89.65:35868.service: Deactivated successfully. Jan 29 12:22:56.492028 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:22:56.492778 systemd-logind[1805]: Session 3 logged out. Waiting for processes to exit. 
Jan 29 12:22:56.493409 systemd-logind[1805]: Removed session 3. Jan 29 12:22:57.814837 systemd-timesyncd[1774]: Contacted time server 69.89.207.99:123 (0.flatcar.pool.ntp.org). Jan 29 12:22:57.814995 systemd-timesyncd[1774]: Initial clock synchronization to Wed 2025-01-29 12:22:57.770643 UTC. Jan 29 12:22:58.305730 login[1885]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:22:58.306609 login[1890]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:22:58.308406 systemd-logind[1805]: New session 4 of user core. Jan 29 12:22:58.323762 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:22:58.325476 systemd-logind[1805]: New session 5 of user core. Jan 29 12:22:58.327092 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:23:00.020146 coreos-metadata[1874]: Jan 29 12:23:00.020 INFO Fetch successful Jan 29 12:23:00.099222 unknown[1874]: wrote ssh authorized keys file for user: core Jan 29 12:23:00.119592 update-ssh-keys[1993]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:23:00.119923 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:23:00.120744 systemd[1]: Finished sshkeys.service. Jan 29 12:23:00.521242 coreos-metadata[1779]: Jan 29 12:23:00.521 INFO Fetch successful Jan 29 12:23:00.614761 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:23:00.615825 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 29 12:23:00.929149 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 29 12:23:00.931772 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:23:00.932402 systemd[1]: Startup finished in 2.664s (kernel) + 26.272s (initrd) + 13.815s (userspace) = 42.752s. Jan 29 12:23:04.838086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 29 12:23:04.851853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:23:05.070426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:23:05.074106 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:23:05.097011 kubelet[2012]: E0129 12:23:05.096893 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:23:05.098914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:23:05.098991 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:23:06.482563 systemd[1]: Started sshd@3-139.178.70.85:22-139.178.89.65:50498.service - OpenSSH per-connection server daemon (139.178.89.65:50498).
Jan 29 12:23:06.515945 sshd[2031]: Accepted publickey for core from 139.178.89.65 port 50498 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o
Jan 29 12:23:06.516631 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:23:06.519147 systemd-logind[1805]: New session 6 of user core.
Jan 29 12:23:06.530791 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 12:23:06.582490 sshd[2031]: pam_unix(sshd:session): session closed for user core
Jan 29 12:23:06.603240 systemd[1]: sshd@3-139.178.70.85:22-139.178.89.65:50498.service: Deactivated successfully.
Jan 29 12:23:06.603983 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 12:23:06.604655 systemd-logind[1805]: Session 6 logged out. Waiting for processes to exit.
Jan 29 12:23:06.605343 systemd[1]: Started sshd@4-139.178.70.85:22-139.178.89.65:50500.service - OpenSSH per-connection server daemon (139.178.89.65:50500).
Jan 29 12:23:06.605826 systemd-logind[1805]: Removed session 6.
Jan 29 12:23:06.636993 sshd[2038]: Accepted publickey for core from 139.178.89.65 port 50500 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o
Jan 29 12:23:06.637719 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:23:06.640378 systemd-logind[1805]: New session 7 of user core.
Jan 29 12:23:06.640959 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 12:23:06.688619 sshd[2038]: pam_unix(sshd:session): session closed for user core
Jan 29 12:23:06.702146 systemd[1]: sshd@4-139.178.70.85:22-139.178.89.65:50500.service: Deactivated successfully.
Jan 29 12:23:06.704059 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 12:23:06.705391 systemd-logind[1805]: Session 7 logged out. Waiting for processes to exit.
Jan 29 12:23:06.705973 systemd[1]: Started sshd@5-139.178.70.85:22-139.178.89.65:50512.service - OpenSSH per-connection server daemon (139.178.89.65:50512).
Jan 29 12:23:06.706397 systemd-logind[1805]: Removed session 7.
Jan 29 12:23:06.737391 sshd[2045]: Accepted publickey for core from 139.178.89.65 port 50512 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o
Jan 29 12:23:06.738076 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:23:06.740581 systemd-logind[1805]: New session 8 of user core.
Jan 29 12:23:06.757790 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 12:23:06.809811 sshd[2045]: pam_unix(sshd:session): session closed for user core
Jan 29 12:23:06.822223 systemd[1]: sshd@5-139.178.70.85:22-139.178.89.65:50512.service: Deactivated successfully.
Jan 29 12:23:06.823005 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 12:23:06.823765 systemd-logind[1805]: Session 8 logged out. Waiting for processes to exit.
Jan 29 12:23:06.824630 systemd[1]: Started sshd@6-139.178.70.85:22-139.178.89.65:50514.service - OpenSSH per-connection server daemon (139.178.89.65:50514).
Jan 29 12:23:06.825141 systemd-logind[1805]: Removed session 8.
Jan 29 12:23:06.857400 sshd[2052]: Accepted publickey for core from 139.178.89.65 port 50514 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o
Jan 29 12:23:06.858085 sshd[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:23:06.860513 systemd-logind[1805]: New session 9 of user core.
Jan 29 12:23:06.877847 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 12:23:06.941306 sudo[2055]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 12:23:06.941458 sudo[2055]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:23:06.951164 sudo[2055]: pam_unix(sudo:session): session closed for user root
Jan 29 12:23:06.952156 sshd[2052]: pam_unix(sshd:session): session closed for user core
Jan 29 12:23:06.963637 systemd[1]: sshd@6-139.178.70.85:22-139.178.89.65:50514.service: Deactivated successfully.
Jan 29 12:23:06.964561 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 12:23:06.965347 systemd-logind[1805]: Session 9 logged out. Waiting for processes to exit.
Jan 29 12:23:06.966223 systemd[1]: Started sshd@7-139.178.70.85:22-139.178.89.65:50522.service - OpenSSH per-connection server daemon (139.178.89.65:50522).
Jan 29 12:23:06.966860 systemd-logind[1805]: Removed session 9.
Jan 29 12:23:07.011271 sshd[2060]: Accepted publickey for core from 139.178.89.65 port 50522 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o
Jan 29 12:23:07.012295 sshd[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:23:07.015667 systemd-logind[1805]: New session 10 of user core.
Jan 29 12:23:07.028020 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:23:07.092778 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 12:23:07.092928 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:23:07.094955 sudo[2064]: pam_unix(sudo:session): session closed for user root
Jan 29 12:23:07.097587 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 29 12:23:07.097739 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:23:07.119957 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 29 12:23:07.121431 auditctl[2067]: No rules
Jan 29 12:23:07.121712 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 12:23:07.121858 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 29 12:23:07.123688 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 12:23:07.153130 augenrules[2085]: No rules
Jan 29 12:23:07.153465 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 12:23:07.154137 sudo[2063]: pam_unix(sudo:session): session closed for user root
Jan 29 12:23:07.154975 sshd[2060]: pam_unix(sshd:session): session closed for user core
Jan 29 12:23:07.168960 systemd[1]: sshd@7-139.178.70.85:22-139.178.89.65:50522.service: Deactivated successfully.
Jan 29 12:23:07.172966 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:23:07.176405 systemd-logind[1805]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:23:07.191379 systemd[1]: Started sshd@8-139.178.70.85:22-139.178.89.65:50534.service - OpenSSH per-connection server daemon (139.178.89.65:50534).
Jan 29 12:23:07.194788 systemd-logind[1805]: Removed session 10.
Jan 29 12:23:07.254910 sshd[2093]: Accepted publickey for core from 139.178.89.65 port 50534 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o
Jan 29 12:23:07.255799 sshd[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:23:07.258801 systemd-logind[1805]: New session 11 of user core.
Jan 29 12:23:07.268769 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:23:07.319243 sudo[2096]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 12:23:07.319397 sudo[2096]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:23:07.652751 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 12:23:07.652822 (dockerd)[2123]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 12:23:07.909826 dockerd[2123]: time="2025-01-29T12:23:07.909733675Z" level=info msg="Starting up"
Jan 29 12:23:08.000940 dockerd[2123]: time="2025-01-29T12:23:08.000895050Z" level=info msg="Loading containers: start."
Jan 29 12:23:08.077581 kernel: Initializing XFRM netlink socket
Jan 29 12:23:08.138810 systemd-networkd[1609]: docker0: Link UP
Jan 29 12:23:08.171441 dockerd[2123]: time="2025-01-29T12:23:08.171400717Z" level=info msg="Loading containers: done."
Jan 29 12:23:08.179790 dockerd[2123]: time="2025-01-29T12:23:08.179743462Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 12:23:08.179790 dockerd[2123]: time="2025-01-29T12:23:08.179788783Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 29 12:23:08.179882 dockerd[2123]: time="2025-01-29T12:23:08.179840672Z" level=info msg="Daemon has completed initialization"
Jan 29 12:23:08.180225 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1517671139-merged.mount: Deactivated successfully.
Jan 29 12:23:08.193475 dockerd[2123]: time="2025-01-29T12:23:08.193420534Z" level=info msg="API listen on /run/docker.sock"
Jan 29 12:23:08.193556 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 12:23:08.931678 containerd[1825]: time="2025-01-29T12:23:08.931655831Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 29 12:23:09.550138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215699423.mount: Deactivated successfully.
Jan 29 12:23:10.442272 containerd[1825]: time="2025-01-29T12:23:10.442245811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:10.442486 containerd[1825]: time="2025-01-29T12:23:10.442451143Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721"
Jan 29 12:23:10.442822 containerd[1825]: time="2025-01-29T12:23:10.442810396Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:10.444334 containerd[1825]: time="2025-01-29T12:23:10.444319373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:10.444952 containerd[1825]: time="2025-01-29T12:23:10.444912420Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.513233241s"
Jan 29 12:23:10.444952 containerd[1825]: time="2025-01-29T12:23:10.444927711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\""
Jan 29 12:23:10.446297 containerd[1825]: time="2025-01-29T12:23:10.446285355Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 29 12:23:11.579828 containerd[1825]: time="2025-01-29T12:23:11.579799477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:11.580051 containerd[1825]: time="2025-01-29T12:23:11.580036715Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143"
Jan 29 12:23:11.580378 containerd[1825]: time="2025-01-29T12:23:11.580368577Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:11.581851 containerd[1825]: time="2025-01-29T12:23:11.581839454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:11.582901 containerd[1825]: time="2025-01-29T12:23:11.582889047Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.136587972s"
Jan 29 12:23:11.582927 containerd[1825]: time="2025-01-29T12:23:11.582904905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\""
Jan 29 12:23:11.583152 containerd[1825]: time="2025-01-29T12:23:11.583141348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 29 12:23:12.720207 containerd[1825]: time="2025-01-29T12:23:12.720181942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:12.720443 containerd[1825]: time="2025-01-29T12:23:12.720427603Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053"
Jan 29 12:23:12.720795 containerd[1825]: time="2025-01-29T12:23:12.720782929Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:12.722347 containerd[1825]: time="2025-01-29T12:23:12.722308450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:12.722966 containerd[1825]: time="2025-01-29T12:23:12.722929323Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.139771023s"
Jan 29 12:23:12.722966 containerd[1825]: time="2025-01-29T12:23:12.722946428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\""
Jan 29 12:23:12.723272 containerd[1825]: time="2025-01-29T12:23:12.723218703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 12:23:13.461245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557665637.mount: Deactivated successfully.
Jan 29 12:23:13.643780 containerd[1825]: time="2025-01-29T12:23:13.643755328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:13.643992 containerd[1825]: time="2025-01-29T12:23:13.643972417Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128"
Jan 29 12:23:13.644340 containerd[1825]: time="2025-01-29T12:23:13.644329084Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:13.645206 containerd[1825]: time="2025-01-29T12:23:13.645193880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:13.645606 containerd[1825]: time="2025-01-29T12:23:13.645594573Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 922.360136ms"
Jan 29 12:23:13.645632 containerd[1825]: time="2025-01-29T12:23:13.645610983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\""
Jan 29 12:23:13.645837 containerd[1825]: time="2025-01-29T12:23:13.645828010Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 12:23:14.134182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866670929.mount: Deactivated successfully.
Jan 29 12:23:14.624361 containerd[1825]: time="2025-01-29T12:23:14.624336293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:14.624618 containerd[1825]: time="2025-01-29T12:23:14.624543196Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 29 12:23:14.624959 containerd[1825]: time="2025-01-29T12:23:14.624947943Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:14.626550 containerd[1825]: time="2025-01-29T12:23:14.626530076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:14.627228 containerd[1825]: time="2025-01-29T12:23:14.627209916Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 981.366866ms"
Jan 29 12:23:14.627273 containerd[1825]: time="2025-01-29T12:23:14.627229980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 29 12:23:14.627465 containerd[1825]: time="2025-01-29T12:23:14.627453873Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 12:23:15.110301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 12:23:15.113034 containerd[1825]: time="2025-01-29T12:23:15.113016424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:15.113280 containerd[1825]: time="2025-01-29T12:23:15.113258732Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 29 12:23:15.113640 containerd[1825]: time="2025-01-29T12:23:15.113626296Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:15.114772 containerd[1825]: time="2025-01-29T12:23:15.114758093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:15.115244 containerd[1825]: time="2025-01-29T12:23:15.115229839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.75854ms"
Jan 29 12:23:15.115272 containerd[1825]: time="2025-01-29T12:23:15.115249700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 29 12:23:15.115597 containerd[1825]: time="2025-01-29T12:23:15.115586446Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 12:23:15.129854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:23:15.130586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985531024.mount: Deactivated successfully.
Jan 29 12:23:15.362446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:23:15.364608 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:23:15.383409 kubelet[2417]: E0129 12:23:15.383357 2417 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:23:15.384435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:23:15.384520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:23:15.651712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961680267.mount: Deactivated successfully.
Jan 29 12:23:16.726387 containerd[1825]: time="2025-01-29T12:23:16.726334858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:16.726600 containerd[1825]: time="2025-01-29T12:23:16.726501050Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Jan 29 12:23:16.727003 containerd[1825]: time="2025-01-29T12:23:16.726962460Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:16.728740 containerd[1825]: time="2025-01-29T12:23:16.728694026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:23:16.729420 containerd[1825]: time="2025-01-29T12:23:16.729378658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.613778725s"
Jan 29 12:23:16.729420 containerd[1825]: time="2025-01-29T12:23:16.729395561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 29 12:23:18.679672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:23:18.694873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:23:18.707527 systemd[1]: Reloading requested from client PID 2538 ('systemctl') (unit session-11.scope)...
Jan 29 12:23:18.707534 systemd[1]: Reloading...
Jan 29 12:23:18.782599 zram_generator::config[2577]: No configuration found.
Jan 29 12:23:18.849613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:23:18.909125 systemd[1]: Reloading finished in 201 ms.
Jan 29 12:23:18.940144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:23:18.941092 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:23:18.942467 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 12:23:18.942590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:23:18.943400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:23:19.146387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:23:19.150498 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:23:19.171020 kubelet[2646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:23:19.171020 kubelet[2646]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:23:19.171020 kubelet[2646]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:23:19.171236 kubelet[2646]: I0129 12:23:19.171022 2646 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:23:19.546119 kubelet[2646]: I0129 12:23:19.546073 2646 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 12:23:19.546119 kubelet[2646]: I0129 12:23:19.546087 2646 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:23:19.546256 kubelet[2646]: I0129 12:23:19.546218 2646 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 12:23:19.561794 kubelet[2646]: I0129 12:23:19.561758 2646 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:23:19.562275 kubelet[2646]: E0129 12:23:19.562223 2646 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.85:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:23:19.567663 kubelet[2646]: E0129 12:23:19.567621 2646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 12:23:19.567663 kubelet[2646]: I0129 12:23:19.567636 2646 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 12:23:19.578028 kubelet[2646]: I0129 12:23:19.577994 2646 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:23:19.579195 kubelet[2646]: I0129 12:23:19.579164 2646 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 12:23:19.579287 kubelet[2646]: I0129 12:23:19.579238 2646 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:23:19.579400 kubelet[2646]: I0129 12:23:19.579264 2646 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-eb3371d08a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 12:23:19.579400 kubelet[2646]: I0129 12:23:19.579383 2646 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:23:19.579472 kubelet[2646]: I0129 12:23:19.579402 2646 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 12:23:19.579472 kubelet[2646]: I0129 12:23:19.579448 2646 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:23:19.581002 kubelet[2646]: I0129 12:23:19.580968 2646 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 12:23:19.582876 kubelet[2646]: I0129 12:23:19.582822 2646 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:23:19.582876 kubelet[2646]: I0129 12:23:19.582845 2646 kubelet.go:314] "Adding apiserver pod source"
Jan 29 12:23:19.583022 kubelet[2646]: I0129 12:23:19.582993 2646 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:23:19.585655 kubelet[2646]: W0129 12:23:19.585544 2646 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.85:6443: connect: connection refused
Jan 29 12:23:19.585655 kubelet[2646]: E0129 12:23:19.585636 2646 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.85:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:23:19.586503 kubelet[2646]: W0129 12:23:19.586453 2646 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eb3371d08a&limit=500&resourceVersion=0": dial tcp 139.178.70.85:6443: connect: connection refused
Jan 29 12:23:19.586503 kubelet[2646]: E0129 12:23:19.586483 2646 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eb3371d08a&limit=500&resourceVersion=0\": dial tcp 139.178.70.85:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:23:19.588111 kubelet[2646]: I0129 12:23:19.588067 2646 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 12:23:19.590085 kubelet[2646]: I0129 12:23:19.590046 2646 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:23:19.590111 kubelet[2646]: W0129 12:23:19.590092 2646 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 12:23:19.590451 kubelet[2646]: I0129 12:23:19.590388 2646 server.go:1269] "Started kubelet"
Jan 29 12:23:19.590554 kubelet[2646]: I0129 12:23:19.590497 2646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:23:19.590554 kubelet[2646]: I0129 12:23:19.590529 2646 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:23:19.590659 kubelet[2646]: I0129 12:23:19.590646 2646 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:23:19.591370 kubelet[2646]: I0129 12:23:19.591346 2646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:23:19.591370 kubelet[2646]: I0129 12:23:19.591354 2646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 12:23:19.591430 kubelet[2646]: I0129 12:23:19.591391 2646 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 12:23:19.591430 kubelet[2646]: I0129 12:23:19.591420 2646 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 12:23:19.591430 kubelet[2646]: E0129 12:23:19.591408 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found"
Jan 29 12:23:19.591520 kubelet[2646]: I0129 12:23:19.591455 2646 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:23:19.591597 kubelet[2646]: E0129 12:23:19.591570 2646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eb3371d08a?timeout=10s\": dial tcp 139.178.70.85:6443: connect: connection refused" interval="200ms"
Jan 29 12:23:19.591637 kubelet[2646]: W0129 12:23:19.591599 2646 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.85:6443: connect: connection refused
Jan 29 12:23:19.591674 kubelet[2646]: E0129 12:23:19.591634 2646 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.85:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:23:19.591711 kubelet[2646]: I0129 12:23:19.591681 2646 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 12:23:19.591743 kubelet[2646]: I0129 12:23:19.591710 2646 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:23:19.591773 kubelet[2646]: E0129 12:23:19.591745 2646 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:23:19.591773 kubelet[2646]: I0129 12:23:19.591763 2646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:23:19.592239 kubelet[2646]: I0129 12:23:19.592231 2646 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:23:19.616304 kubelet[2646]: E0129 12:23:19.614101 2646 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.85:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-eb3371d08a.181f294ecbb8d64a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-eb3371d08a,UID:ci-4081.3.0-a-eb3371d08a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-eb3371d08a,},FirstTimestamp:2025-01-29 12:23:19.590377034 +0000 UTC m=+0.437988244,LastTimestamp:2025-01-29 12:23:19.590377034 +0000 UTC m=+0.437988244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-eb3371d08a,}" Jan 29 12:23:19.620247 kubelet[2646]: I0129 12:23:19.620226 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:23:19.620886 kubelet[2646]: I0129 12:23:19.620871 2646 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:23:19.620886 kubelet[2646]: I0129 12:23:19.620888 2646 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:23:19.620964 kubelet[2646]: I0129 12:23:19.620900 2646 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:23:19.620964 kubelet[2646]: E0129 12:23:19.620926 2646 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:23:19.622307 kubelet[2646]: W0129 12:23:19.622279 2646 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.85:6443: connect: connection refused Jan 29 12:23:19.622342 kubelet[2646]: E0129 12:23:19.622315 2646 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.85:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:23:19.692729 kubelet[2646]: E0129 12:23:19.692615 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:19.721236 kubelet[2646]: E0129 12:23:19.721121 2646 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:23:19.759364 kubelet[2646]: I0129 12:23:19.759267 2646 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:23:19.759364 kubelet[2646]: I0129 12:23:19.759308 2646 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:23:19.759364 kubelet[2646]: I0129 12:23:19.759352 2646 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:23:19.761259 kubelet[2646]: I0129 12:23:19.761215 
2646 policy_none.go:49] "None policy: Start" Jan 29 12:23:19.761907 kubelet[2646]: I0129 12:23:19.761862 2646 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:23:19.761907 kubelet[2646]: I0129 12:23:19.761890 2646 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:23:19.766770 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:23:19.781371 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:23:19.783438 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:23:19.792382 kubelet[2646]: E0129 12:23:19.792336 2646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eb3371d08a?timeout=10s\": dial tcp 139.178.70.85:6443: connect: connection refused" interval="400ms" Jan 29 12:23:19.793479 kubelet[2646]: E0129 12:23:19.793463 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:19.794108 kubelet[2646]: I0129 12:23:19.794068 2646 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:23:19.794201 kubelet[2646]: I0129 12:23:19.794192 2646 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:23:19.794258 kubelet[2646]: I0129 12:23:19.794201 2646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:23:19.794319 kubelet[2646]: I0129 12:23:19.794308 2646 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:23:19.794827 kubelet[2646]: E0129 12:23:19.794815 2646 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-eb3371d08a\" 
not found" Jan 29 12:23:19.899587 kubelet[2646]: I0129 12:23:19.899342 2646 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:19.900346 kubelet[2646]: E0129 12:23:19.900240 2646 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.85:6443/api/v1/nodes\": dial tcp 139.178.70.85:6443: connect: connection refused" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:19.927651 systemd[1]: Created slice kubepods-burstable-podd6a741472c805d323fec691ee055b1f8.slice - libcontainer container kubepods-burstable-podd6a741472c805d323fec691ee055b1f8.slice. Jan 29 12:23:19.962930 systemd[1]: Created slice kubepods-burstable-pod2cbb01ddd525c95890aba532bc65f29b.slice - libcontainer container kubepods-burstable-pod2cbb01ddd525c95890aba532bc65f29b.slice. Jan 29 12:23:19.984959 systemd[1]: Created slice kubepods-burstable-pod2124dfa180d1327c58dc42d00ab77fdc.slice - libcontainer container kubepods-burstable-pod2124dfa180d1327c58dc42d00ab77fdc.slice. 
Jan 29 12:23:20.094944 kubelet[2646]: I0129 12:23:20.094794 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.094944 kubelet[2646]: I0129 12:23:20.094916 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.095321 kubelet[2646]: I0129 12:23:20.094998 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6a741472c805d323fec691ee055b1f8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-eb3371d08a\" (UID: \"d6a741472c805d323fec691ee055b1f8\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.095321 kubelet[2646]: I0129 12:23:20.095049 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cbb01ddd525c95890aba532bc65f29b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-eb3371d08a\" (UID: \"2cbb01ddd525c95890aba532bc65f29b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.095321 kubelet[2646]: I0129 12:23:20.095100 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cbb01ddd525c95890aba532bc65f29b-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.0-a-eb3371d08a\" (UID: \"2cbb01ddd525c95890aba532bc65f29b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.095321 kubelet[2646]: I0129 12:23:20.095147 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cbb01ddd525c95890aba532bc65f29b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-eb3371d08a\" (UID: \"2cbb01ddd525c95890aba532bc65f29b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.095321 kubelet[2646]: I0129 12:23:20.095195 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.095847 kubelet[2646]: I0129 12:23:20.095239 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.095847 kubelet[2646]: I0129 12:23:20.095330 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.105368 kubelet[2646]: I0129 12:23:20.105304 2646 kubelet_node_status.go:72] "Attempting to 
register node" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.106108 kubelet[2646]: E0129 12:23:20.105984 2646 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.85:6443/api/v1/nodes\": dial tcp 139.178.70.85:6443: connect: connection refused" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.193321 kubelet[2646]: E0129 12:23:20.193072 2646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eb3371d08a?timeout=10s\": dial tcp 139.178.70.85:6443: connect: connection refused" interval="800ms" Jan 29 12:23:20.257358 containerd[1825]: time="2025-01-29T12:23:20.257226386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-eb3371d08a,Uid:d6a741472c805d323fec691ee055b1f8,Namespace:kube-system,Attempt:0,}" Jan 29 12:23:20.279834 containerd[1825]: time="2025-01-29T12:23:20.279774857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-eb3371d08a,Uid:2cbb01ddd525c95890aba532bc65f29b,Namespace:kube-system,Attempt:0,}" Jan 29 12:23:20.290326 containerd[1825]: time="2025-01-29T12:23:20.290266766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-eb3371d08a,Uid:2124dfa180d1327c58dc42d00ab77fdc,Namespace:kube-system,Attempt:0,}" Jan 29 12:23:20.507376 kubelet[2646]: I0129 12:23:20.507286 2646 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.507486 kubelet[2646]: E0129 12:23:20.507473 2646 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.85:6443/api/v1/nodes\": dial tcp 139.178.70.85:6443: connect: connection refused" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:20.758927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount212008289.mount: Deactivated successfully. 
Jan 29 12:23:20.777871 containerd[1825]: time="2025-01-29T12:23:20.777812288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:23:20.778405 containerd[1825]: time="2025-01-29T12:23:20.778315658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:23:20.778708 containerd[1825]: time="2025-01-29T12:23:20.778692587Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:23:20.779106 containerd[1825]: time="2025-01-29T12:23:20.779091972Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:23:20.779575 containerd[1825]: time="2025-01-29T12:23:20.779541707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:23:20.779799 containerd[1825]: time="2025-01-29T12:23:20.779785285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 12:23:20.780079 containerd[1825]: time="2025-01-29T12:23:20.780064024Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:23:20.781519 containerd[1825]: time="2025-01-29T12:23:20.781503087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:23:20.782580 
containerd[1825]: time="2025-01-29T12:23:20.782546479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.165275ms" Jan 29 12:23:20.783021 containerd[1825]: time="2025-01-29T12:23:20.783006858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.182685ms" Jan 29 12:23:20.783403 containerd[1825]: time="2025-01-29T12:23:20.783388881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.094342ms" Jan 29 12:23:20.799544 kubelet[2646]: W0129 12:23:20.799509 2646 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eb3371d08a&limit=500&resourceVersion=0": dial tcp 139.178.70.85:6443: connect: connection refused Jan 29 12:23:20.799622 kubelet[2646]: E0129 12:23:20.799556 2646 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-eb3371d08a&limit=500&resourceVersion=0\": dial tcp 139.178.70.85:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:23:20.864016 
containerd[1825]: time="2025-01-29T12:23:20.863962860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:20.864016 containerd[1825]: time="2025-01-29T12:23:20.863995250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:20.864016 containerd[1825]: time="2025-01-29T12:23:20.864002724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:20.864016 containerd[1825]: time="2025-01-29T12:23:20.863790249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:20.864016 containerd[1825]: time="2025-01-29T12:23:20.864006263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:20.864016 containerd[1825]: time="2025-01-29T12:23:20.864015624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:20.864198 containerd[1825]: time="2025-01-29T12:23:20.864048503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:20.864198 containerd[1825]: time="2025-01-29T12:23:20.864059498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:20.864198 containerd[1825]: time="2025-01-29T12:23:20.864162560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:20.864198 containerd[1825]: time="2025-01-29T12:23:20.864189821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:20.864275 containerd[1825]: time="2025-01-29T12:23:20.864200132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:20.864297 containerd[1825]: time="2025-01-29T12:23:20.864281642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:20.887891 systemd[1]: Started cri-containerd-0b05fc67e523f9d5c09a17594973a6e5c9fb143f36ad02d42384e853abbe916f.scope - libcontainer container 0b05fc67e523f9d5c09a17594973a6e5c9fb143f36ad02d42384e853abbe916f. Jan 29 12:23:20.888578 systemd[1]: Started cri-containerd-6ecd19dde15bbd93d54435809b80bf94456c68958735d445765b1a86911df71f.scope - libcontainer container 6ecd19dde15bbd93d54435809b80bf94456c68958735d445765b1a86911df71f. Jan 29 12:23:20.889275 systemd[1]: Started cri-containerd-78c55e8fecc80961e777be58e26e68452443674859575747bec4d3e13c16f286.scope - libcontainer container 78c55e8fecc80961e777be58e26e68452443674859575747bec4d3e13c16f286. 
Jan 29 12:23:20.910866 containerd[1825]: time="2025-01-29T12:23:20.910844780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-eb3371d08a,Uid:2124dfa180d1327c58dc42d00ab77fdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b05fc67e523f9d5c09a17594973a6e5c9fb143f36ad02d42384e853abbe916f\"" Jan 29 12:23:20.910943 containerd[1825]: time="2025-01-29T12:23:20.910892877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-eb3371d08a,Uid:d6a741472c805d323fec691ee055b1f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ecd19dde15bbd93d54435809b80bf94456c68958735d445765b1a86911df71f\"" Jan 29 12:23:20.912785 containerd[1825]: time="2025-01-29T12:23:20.912763615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-eb3371d08a,Uid:2cbb01ddd525c95890aba532bc65f29b,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c55e8fecc80961e777be58e26e68452443674859575747bec4d3e13c16f286\"" Jan 29 12:23:20.913525 containerd[1825]: time="2025-01-29T12:23:20.913506657Z" level=info msg="CreateContainer within sandbox \"6ecd19dde15bbd93d54435809b80bf94456c68958735d445765b1a86911df71f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:23:20.913591 containerd[1825]: time="2025-01-29T12:23:20.913580242Z" level=info msg="CreateContainer within sandbox \"0b05fc67e523f9d5c09a17594973a6e5c9fb143f36ad02d42384e853abbe916f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:23:20.913627 containerd[1825]: time="2025-01-29T12:23:20.913614417Z" level=info msg="CreateContainer within sandbox \"78c55e8fecc80961e777be58e26e68452443674859575747bec4d3e13c16f286\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:23:20.920344 containerd[1825]: time="2025-01-29T12:23:20.920290783Z" level=info msg="CreateContainer within sandbox 
\"6ecd19dde15bbd93d54435809b80bf94456c68958735d445765b1a86911df71f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b395c24aef381f4df86c6e68755422197e4c0e4a450392d8360fa31b5862fa5d\"" Jan 29 12:23:20.920605 containerd[1825]: time="2025-01-29T12:23:20.920593228Z" level=info msg="StartContainer for \"b395c24aef381f4df86c6e68755422197e4c0e4a450392d8360fa31b5862fa5d\"" Jan 29 12:23:20.921141 containerd[1825]: time="2025-01-29T12:23:20.921125928Z" level=info msg="CreateContainer within sandbox \"0b05fc67e523f9d5c09a17594973a6e5c9fb143f36ad02d42384e853abbe916f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9e27038ae76d944af57c331512a59a216eeafae038b179b0c37ceb551a90d86c\"" Jan 29 12:23:20.921282 containerd[1825]: time="2025-01-29T12:23:20.921272465Z" level=info msg="StartContainer for \"9e27038ae76d944af57c331512a59a216eeafae038b179b0c37ceb551a90d86c\"" Jan 29 12:23:20.921528 containerd[1825]: time="2025-01-29T12:23:20.921515445Z" level=info msg="CreateContainer within sandbox \"78c55e8fecc80961e777be58e26e68452443674859575747bec4d3e13c16f286\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3523884dc8bf5d9e904d1c73348965431b1ff8a2c3c9d59b240a04b00df42e6d\"" Jan 29 12:23:20.921671 containerd[1825]: time="2025-01-29T12:23:20.921661696Z" level=info msg="StartContainer for \"3523884dc8bf5d9e904d1c73348965431b1ff8a2c3c9d59b240a04b00df42e6d\"" Jan 29 12:23:20.953832 systemd[1]: Started cri-containerd-3523884dc8bf5d9e904d1c73348965431b1ff8a2c3c9d59b240a04b00df42e6d.scope - libcontainer container 3523884dc8bf5d9e904d1c73348965431b1ff8a2c3c9d59b240a04b00df42e6d. Jan 29 12:23:20.954430 systemd[1]: Started cri-containerd-9e27038ae76d944af57c331512a59a216eeafae038b179b0c37ceb551a90d86c.scope - libcontainer container 9e27038ae76d944af57c331512a59a216eeafae038b179b0c37ceb551a90d86c. 
Jan 29 12:23:20.955013 systemd[1]: Started cri-containerd-b395c24aef381f4df86c6e68755422197e4c0e4a450392d8360fa31b5862fa5d.scope - libcontainer container b395c24aef381f4df86c6e68755422197e4c0e4a450392d8360fa31b5862fa5d. Jan 29 12:23:20.980254 containerd[1825]: time="2025-01-29T12:23:20.980199675Z" level=info msg="StartContainer for \"3523884dc8bf5d9e904d1c73348965431b1ff8a2c3c9d59b240a04b00df42e6d\" returns successfully" Jan 29 12:23:20.990052 containerd[1825]: time="2025-01-29T12:23:20.990026733Z" level=info msg="StartContainer for \"9e27038ae76d944af57c331512a59a216eeafae038b179b0c37ceb551a90d86c\" returns successfully" Jan 29 12:23:20.990137 containerd[1825]: time="2025-01-29T12:23:20.990026713Z" level=info msg="StartContainer for \"b395c24aef381f4df86c6e68755422197e4c0e4a450392d8360fa31b5862fa5d\" returns successfully" Jan 29 12:23:20.993710 kubelet[2646]: E0129 12:23:20.993657 2646 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-eb3371d08a?timeout=10s\": dial tcp 139.178.70.85:6443: connect: connection refused" interval="1.6s" Jan 29 12:23:21.309055 kubelet[2646]: I0129 12:23:21.309037 2646 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:21.620831 kubelet[2646]: I0129 12:23:21.620768 2646 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:21.620831 kubelet[2646]: E0129 12:23:21.620793 2646 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-a-eb3371d08a\": node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:21.625843 kubelet[2646]: E0129 12:23:21.625827 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:21.726726 kubelet[2646]: E0129 12:23:21.726643 2646 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:21.827204 kubelet[2646]: E0129 12:23:21.827122 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:21.928331 kubelet[2646]: E0129 12:23:21.928134 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.029103 kubelet[2646]: E0129 12:23:22.029026 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.129821 kubelet[2646]: E0129 12:23:22.129748 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.231096 kubelet[2646]: E0129 12:23:22.230910 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.332128 kubelet[2646]: E0129 12:23:22.332051 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.433209 kubelet[2646]: E0129 12:23:22.433137 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.534214 kubelet[2646]: E0129 12:23:22.534040 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.634881 kubelet[2646]: E0129 12:23:22.634786 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.735745 kubelet[2646]: E0129 12:23:22.735681 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.835932 
kubelet[2646]: E0129 12:23:22.835840 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:22.936802 kubelet[2646]: E0129 12:23:22.936697 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.037672 kubelet[2646]: E0129 12:23:23.037623 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.138824 kubelet[2646]: E0129 12:23:23.138651 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.239779 kubelet[2646]: E0129 12:23:23.239680 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.340701 kubelet[2646]: E0129 12:23:23.340586 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.441981 kubelet[2646]: E0129 12:23:23.441783 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.543041 kubelet[2646]: E0129 12:23:23.542913 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.643868 kubelet[2646]: E0129 12:23:23.643795 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.744772 kubelet[2646]: E0129 12:23:23.744530 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:23.844841 kubelet[2646]: E0129 12:23:23.844718 2646 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:24.124068 systemd[1]: Reloading requested from client PID 2963 ('systemctl') (unit session-11.scope)... Jan 29 12:23:24.124101 systemd[1]: Reloading... Jan 29 12:23:24.191597 zram_generator::config[3002]: No configuration found. Jan 29 12:23:24.278627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:23:24.344918 systemd[1]: Reloading finished in 219 ms. Jan 29 12:23:24.393422 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:23:24.399436 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:23:24.399545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:23:24.416719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:23:24.677340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:23:24.679855 (kubelet)[3066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:23:24.698540 kubelet[3066]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:23:24.698540 kubelet[3066]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:23:24.698540 kubelet[3066]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:23:24.698766 kubelet[3066]: I0129 12:23:24.698532 3066 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:23:24.701925 kubelet[3066]: I0129 12:23:24.701909 3066 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 12:23:24.701925 kubelet[3066]: I0129 12:23:24.701922 3066 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:23:24.702063 kubelet[3066]: I0129 12:23:24.702057 3066 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 12:23:24.702874 kubelet[3066]: I0129 12:23:24.702864 3066 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:23:24.704055 kubelet[3066]: I0129 12:23:24.704048 3066 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:23:24.705513 kubelet[3066]: E0129 12:23:24.705500 3066 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 12:23:24.705513 kubelet[3066]: I0129 12:23:24.705513 3066 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 12:23:24.712516 kubelet[3066]: I0129 12:23:24.712472 3066 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:23:24.712598 kubelet[3066]: I0129 12:23:24.712531 3066 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 12:23:24.712598 kubelet[3066]: I0129 12:23:24.712589 3066 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:23:24.712722 kubelet[3066]: I0129 12:23:24.712603 3066 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-eb3371d08a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 12:23:24.712722 kubelet[3066]: I0129 12:23:24.712702 3066 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:23:24.712722 kubelet[3066]: I0129 12:23:24.712708 3066 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 12:23:24.712806 kubelet[3066]: I0129 12:23:24.712726 3066 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:23:24.712806 kubelet[3066]: I0129 12:23:24.712779 3066 kubelet.go:408] "Attempting to sync node with API server" Jan 29 12:23:24.712806 kubelet[3066]: I0129 12:23:24.712794 3066 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:23:24.712857 kubelet[3066]: I0129 12:23:24.712810 3066 kubelet.go:314] "Adding apiserver pod source" Jan 29 12:23:24.712857 kubelet[3066]: I0129 12:23:24.712818 3066 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:23:24.713155 kubelet[3066]: I0129 12:23:24.713142 3066 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:23:24.713406 kubelet[3066]: I0129 12:23:24.713376 3066 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:23:24.713630 kubelet[3066]: I0129 12:23:24.713597 3066 server.go:1269] "Started kubelet" Jan 29 12:23:24.713665 kubelet[3066]: I0129 12:23:24.713628 3066 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:23:24.713665 kubelet[3066]: I0129 12:23:24.713632 3066 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:23:24.713758 kubelet[3066]: I0129 12:23:24.713747 3066 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:23:24.714368 kubelet[3066]: I0129 12:23:24.714360 3066 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 
29 12:23:24.714408 kubelet[3066]: I0129 12:23:24.714390 3066 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 12:23:24.714408 kubelet[3066]: I0129 12:23:24.714402 3066 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:23:24.714468 kubelet[3066]: I0129 12:23:24.714423 3066 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:23:24.714498 kubelet[3066]: E0129 12:23:24.714467 3066 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-eb3371d08a\" not found" Jan 29 12:23:24.714525 kubelet[3066]: I0129 12:23:24.714496 3066 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:23:24.714525 kubelet[3066]: I0129 12:23:24.714518 3066 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:23:24.714683 kubelet[3066]: I0129 12:23:24.714674 3066 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:23:24.714749 kubelet[3066]: I0129 12:23:24.714736 3066 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:23:24.714783 kubelet[3066]: E0129 12:23:24.714738 3066 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:23:24.716456 kubelet[3066]: I0129 12:23:24.716445 3066 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:23:24.719655 kubelet[3066]: I0129 12:23:24.719631 3066 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:23:24.720187 kubelet[3066]: I0129 12:23:24.720177 3066 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:23:24.720246 kubelet[3066]: I0129 12:23:24.720196 3066 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:23:24.720246 kubelet[3066]: I0129 12:23:24.720210 3066 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 12:23:24.720246 kubelet[3066]: E0129 12:23:24.720232 3066 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:23:24.730270 kubelet[3066]: I0129 12:23:24.730256 3066 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:23:24.730270 kubelet[3066]: I0129 12:23:24.730265 3066 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:23:24.730270 kubelet[3066]: I0129 12:23:24.730275 3066 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:23:24.730381 kubelet[3066]: I0129 12:23:24.730361 3066 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:23:24.730381 kubelet[3066]: I0129 12:23:24.730368 3066 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:23:24.730381 kubelet[3066]: I0129 12:23:24.730379 3066 policy_none.go:49] "None policy: Start" Jan 29 12:23:24.730658 kubelet[3066]: I0129 12:23:24.730652 3066 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:23:24.730688 kubelet[3066]: I0129 12:23:24.730663 3066 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:23:24.730737 kubelet[3066]: I0129 12:23:24.730731 3066 state_mem.go:75] "Updated machine memory state" Jan 29 12:23:24.732761 kubelet[3066]: I0129 12:23:24.732723 3066 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:23:24.732844 kubelet[3066]: I0129 12:23:24.732809 3066 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:23:24.732844 kubelet[3066]: I0129 12:23:24.732817 3066 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:23:24.732920 kubelet[3066]: I0129 12:23:24.732908 3066 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:23:24.834863 kubelet[3066]: W0129 12:23:24.834780 3066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:23:24.834863 kubelet[3066]: W0129 12:23:24.834829 3066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:23:24.834863 kubelet[3066]: W0129 12:23:24.834843 3066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:23:24.840406 kubelet[3066]: I0129 12:23:24.840346 3066 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:24.849594 kubelet[3066]: I0129 12:23:24.849515 3066 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:24.849823 kubelet[3066]: I0129 12:23:24.849737 3066 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.015960 kubelet[3066]: I0129 12:23:25.015746 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6a741472c805d323fec691ee055b1f8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-eb3371d08a\" (UID: \"d6a741472c805d323fec691ee055b1f8\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.015960 kubelet[3066]: I0129 12:23:25.015830 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2cbb01ddd525c95890aba532bc65f29b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-eb3371d08a\" (UID: \"2cbb01ddd525c95890aba532bc65f29b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.015960 kubelet[3066]: I0129 12:23:25.015893 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cbb01ddd525c95890aba532bc65f29b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-eb3371d08a\" (UID: \"2cbb01ddd525c95890aba532bc65f29b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.016607 kubelet[3066]: I0129 12:23:25.016030 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.016607 kubelet[3066]: I0129 12:23:25.016158 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.016607 kubelet[3066]: I0129 12:23:25.016247 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.016607 kubelet[3066]: 
I0129 12:23:25.016332 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.016607 kubelet[3066]: I0129 12:23:25.016418 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cbb01ddd525c95890aba532bc65f29b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-eb3371d08a\" (UID: \"2cbb01ddd525c95890aba532bc65f29b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.017152 kubelet[3066]: I0129 12:23:25.016512 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2124dfa180d1327c58dc42d00ab77fdc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" (UID: \"2124dfa180d1327c58dc42d00ab77fdc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.713880 kubelet[3066]: I0129 12:23:25.713850 3066 apiserver.go:52] "Watching apiserver" Jan 29 12:23:25.726454 kubelet[3066]: W0129 12:23:25.726432 3066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:23:25.726454 kubelet[3066]: W0129 12:23:25.726443 3066 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:23:25.726598 kubelet[3066]: E0129 12:23:25.726487 3066 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-eb3371d08a\" 
already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.726598 kubelet[3066]: E0129 12:23:25.726560 3066 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-eb3371d08a\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:25.745323 kubelet[3066]: I0129 12:23:25.745275 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-eb3371d08a" podStartSLOduration=1.745261176 podStartE2EDuration="1.745261176s" podCreationTimestamp="2025-01-29 12:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:23:25.740346724 +0000 UTC m=+1.058803152" watchObservedRunningTime="2025-01-29 12:23:25.745261176 +0000 UTC m=+1.063717600" Jan 29 12:23:25.749659 kubelet[3066]: I0129 12:23:25.749630 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-eb3371d08a" podStartSLOduration=1.74962146 podStartE2EDuration="1.74962146s" podCreationTimestamp="2025-01-29 12:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:23:25.745378026 +0000 UTC m=+1.063834455" watchObservedRunningTime="2025-01-29 12:23:25.74962146 +0000 UTC m=+1.068077889" Jan 29 12:23:25.749747 kubelet[3066]: I0129 12:23:25.749715 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-eb3371d08a" podStartSLOduration=1.74971093 podStartE2EDuration="1.74971093s" podCreationTimestamp="2025-01-29 12:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:23:25.749697949 +0000 UTC m=+1.068154379" 
watchObservedRunningTime="2025-01-29 12:23:25.74971093 +0000 UTC m=+1.068167355" Jan 29 12:23:25.814779 kubelet[3066]: I0129 12:23:25.814757 3066 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:23:28.209620 kubelet[3066]: I0129 12:23:28.209492 3066 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:23:28.210445 containerd[1825]: time="2025-01-29T12:23:28.210307691Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:23:28.211110 kubelet[3066]: I0129 12:23:28.210769 3066 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:23:28.304363 systemd[1]: Created slice kubepods-besteffort-podc5e4cc4c_46a2_4ced_ac4f_6a9111549cfa.slice - libcontainer container kubepods-besteffort-podc5e4cc4c_46a2_4ced_ac4f_6a9111549cfa.slice. Jan 29 12:23:28.337320 kubelet[3066]: I0129 12:23:28.337277 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa-kube-proxy\") pod \"kube-proxy-7zdv6\" (UID: \"c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa\") " pod="kube-system/kube-proxy-7zdv6" Jan 29 12:23:28.337320 kubelet[3066]: I0129 12:23:28.337326 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa-lib-modules\") pod \"kube-proxy-7zdv6\" (UID: \"c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa\") " pod="kube-system/kube-proxy-7zdv6" Jan 29 12:23:28.337557 kubelet[3066]: I0129 12:23:28.337359 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mlx\" (UniqueName: \"kubernetes.io/projected/c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa-kube-api-access-x6mlx\") pod 
\"kube-proxy-7zdv6\" (UID: \"c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa\") " pod="kube-system/kube-proxy-7zdv6" Jan 29 12:23:28.337557 kubelet[3066]: I0129 12:23:28.337389 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa-xtables-lock\") pod \"kube-proxy-7zdv6\" (UID: \"c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa\") " pod="kube-system/kube-proxy-7zdv6" Jan 29 12:23:28.451341 kubelet[3066]: E0129 12:23:28.451242 3066 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 12:23:28.451341 kubelet[3066]: E0129 12:23:28.451300 3066 projected.go:194] Error preparing data for projected volume kube-api-access-x6mlx for pod kube-system/kube-proxy-7zdv6: configmap "kube-root-ca.crt" not found Jan 29 12:23:28.451731 kubelet[3066]: E0129 12:23:28.451438 3066 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa-kube-api-access-x6mlx podName:c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa nodeName:}" failed. No retries permitted until 2025-01-29 12:23:28.951381849 +0000 UTC m=+4.269838351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x6mlx" (UniqueName: "kubernetes.io/projected/c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa-kube-api-access-x6mlx") pod "kube-proxy-7zdv6" (UID: "c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa") : configmap "kube-root-ca.crt" not found Jan 29 12:23:28.752776 sudo[2096]: pam_unix(sudo:session): session closed for user root Jan 29 12:23:28.753708 sshd[2093]: pam_unix(sshd:session): session closed for user core Jan 29 12:23:28.755284 systemd[1]: sshd@8-139.178.70.85:22-139.178.89.65:50534.service: Deactivated successfully. Jan 29 12:23:28.756136 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 29 12:23:28.756215 systemd[1]: session-11.scope: Consumed 3.376s CPU time, 166.9M memory peak, 0B memory swap peak. Jan 29 12:23:28.756817 systemd-logind[1805]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:23:28.757374 systemd-logind[1805]: Removed session 11. Jan 29 12:23:29.222590 containerd[1825]: time="2025-01-29T12:23:29.222480857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zdv6,Uid:c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa,Namespace:kube-system,Attempt:0,}" Jan 29 12:23:29.233646 containerd[1825]: time="2025-01-29T12:23:29.233602754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:29.233903 containerd[1825]: time="2025-01-29T12:23:29.233840072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:29.233903 containerd[1825]: time="2025-01-29T12:23:29.233866783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:29.234053 containerd[1825]: time="2025-01-29T12:23:29.233942493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:29.260712 systemd[1]: Started cri-containerd-a2da5a4ff5cf544903c01b68231004090d55a9afdcf0c766250564ab4d0b1fef.scope - libcontainer container a2da5a4ff5cf544903c01b68231004090d55a9afdcf0c766250564ab4d0b1fef. 
Jan 29 12:23:29.275483 containerd[1825]: time="2025-01-29T12:23:29.275451098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zdv6,Uid:c5e4cc4c-46a2-4ced-ac4f-6a9111549cfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2da5a4ff5cf544903c01b68231004090d55a9afdcf0c766250564ab4d0b1fef\"" Jan 29 12:23:29.277398 containerd[1825]: time="2025-01-29T12:23:29.277370309Z" level=info msg="CreateContainer within sandbox \"a2da5a4ff5cf544903c01b68231004090d55a9afdcf0c766250564ab4d0b1fef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:23:29.279992 systemd[1]: Created slice kubepods-besteffort-poda8476bb6_f83e_4a3c_9941_c4d553c41caa.slice - libcontainer container kubepods-besteffort-poda8476bb6_f83e_4a3c_9941_c4d553c41caa.slice. Jan 29 12:23:29.285518 containerd[1825]: time="2025-01-29T12:23:29.285498931Z" level=info msg="CreateContainer within sandbox \"a2da5a4ff5cf544903c01b68231004090d55a9afdcf0c766250564ab4d0b1fef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc4297c4f2f70323b185f42049429f9ff8ad762cf232ce820b28c4adc38854b2\"" Jan 29 12:23:29.285843 containerd[1825]: time="2025-01-29T12:23:29.285830066Z" level=info msg="StartContainer for \"cc4297c4f2f70323b185f42049429f9ff8ad762cf232ce820b28c4adc38854b2\"" Jan 29 12:23:29.303752 systemd[1]: Started cri-containerd-cc4297c4f2f70323b185f42049429f9ff8ad762cf232ce820b28c4adc38854b2.scope - libcontainer container cc4297c4f2f70323b185f42049429f9ff8ad762cf232ce820b28c4adc38854b2. 
Jan 29 12:23:29.316058 containerd[1825]: time="2025-01-29T12:23:29.316031016Z" level=info msg="StartContainer for \"cc4297c4f2f70323b185f42049429f9ff8ad762cf232ce820b28c4adc38854b2\" returns successfully" Jan 29 12:23:29.344546 kubelet[3066]: I0129 12:23:29.344520 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdtmv\" (UniqueName: \"kubernetes.io/projected/a8476bb6-f83e-4a3c-9941-c4d553c41caa-kube-api-access-sdtmv\") pod \"tigera-operator-76c4976dd7-h2qmx\" (UID: \"a8476bb6-f83e-4a3c-9941-c4d553c41caa\") " pod="tigera-operator/tigera-operator-76c4976dd7-h2qmx" Jan 29 12:23:29.344546 kubelet[3066]: I0129 12:23:29.344551 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a8476bb6-f83e-4a3c-9941-c4d553c41caa-var-lib-calico\") pod \"tigera-operator-76c4976dd7-h2qmx\" (UID: \"a8476bb6-f83e-4a3c-9941-c4d553c41caa\") " pod="tigera-operator/tigera-operator-76c4976dd7-h2qmx" Jan 29 12:23:29.582414 containerd[1825]: time="2025-01-29T12:23:29.582375432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-h2qmx,Uid:a8476bb6-f83e-4a3c-9941-c4d553c41caa,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:23:29.593543 containerd[1825]: time="2025-01-29T12:23:29.593484729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:29.593744 containerd[1825]: time="2025-01-29T12:23:29.593701528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:29.593744 containerd[1825]: time="2025-01-29T12:23:29.593711837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:29.593791 containerd[1825]: time="2025-01-29T12:23:29.593752494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:29.616744 systemd[1]: Started cri-containerd-440b4720411eb8c23134dce4e7585808076fcf149b4f9a5ef07ad94bcdf52dd4.scope - libcontainer container 440b4720411eb8c23134dce4e7585808076fcf149b4f9a5ef07ad94bcdf52dd4. Jan 29 12:23:29.643143 containerd[1825]: time="2025-01-29T12:23:29.643115683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-h2qmx,Uid:a8476bb6-f83e-4a3c-9941-c4d553c41caa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"440b4720411eb8c23134dce4e7585808076fcf149b4f9a5ef07ad94bcdf52dd4\"" Jan 29 12:23:29.644031 containerd[1825]: time="2025-01-29T12:23:29.644016071Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:23:31.011789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771990342.mount: Deactivated successfully. 
Jan 29 12:23:31.218252 containerd[1825]: time="2025-01-29T12:23:31.218200433Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:31.218461 containerd[1825]: time="2025-01-29T12:23:31.218402674Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 12:23:31.218747 containerd[1825]: time="2025-01-29T12:23:31.218707801Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:31.220122 containerd[1825]: time="2025-01-29T12:23:31.220077042Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:31.220406 containerd[1825]: time="2025-01-29T12:23:31.220362280Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.576326137s" Jan 29 12:23:31.220406 containerd[1825]: time="2025-01-29T12:23:31.220379142Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 12:23:31.221353 containerd[1825]: time="2025-01-29T12:23:31.221311850Z" level=info msg="CreateContainer within sandbox \"440b4720411eb8c23134dce4e7585808076fcf149b4f9a5ef07ad94bcdf52dd4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:23:31.225000 containerd[1825]: time="2025-01-29T12:23:31.224957918Z" level=info msg="CreateContainer within sandbox 
\"440b4720411eb8c23134dce4e7585808076fcf149b4f9a5ef07ad94bcdf52dd4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"11d7b0ab702ef1ba1e7a8f3b5ee6a7fe33fdec769e00ee8f598169ff10cc9c93\"" Jan 29 12:23:31.225265 containerd[1825]: time="2025-01-29T12:23:31.225213121Z" level=info msg="StartContainer for \"11d7b0ab702ef1ba1e7a8f3b5ee6a7fe33fdec769e00ee8f598169ff10cc9c93\"" Jan 29 12:23:31.243831 systemd[1]: Started cri-containerd-11d7b0ab702ef1ba1e7a8f3b5ee6a7fe33fdec769e00ee8f598169ff10cc9c93.scope - libcontainer container 11d7b0ab702ef1ba1e7a8f3b5ee6a7fe33fdec769e00ee8f598169ff10cc9c93. Jan 29 12:23:31.258448 containerd[1825]: time="2025-01-29T12:23:31.258426553Z" level=info msg="StartContainer for \"11d7b0ab702ef1ba1e7a8f3b5ee6a7fe33fdec769e00ee8f598169ff10cc9c93\" returns successfully" Jan 29 12:23:31.756609 kubelet[3066]: I0129 12:23:31.756516 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7zdv6" podStartSLOduration=3.756505902 podStartE2EDuration="3.756505902s" podCreationTimestamp="2025-01-29 12:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:23:29.754733513 +0000 UTC m=+5.073190012" watchObservedRunningTime="2025-01-29 12:23:31.756505902 +0000 UTC m=+7.074962324" Jan 29 12:23:31.756937 kubelet[3066]: I0129 12:23:31.756615 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-h2qmx" podStartSLOduration=1.179615624 podStartE2EDuration="2.756611967s" podCreationTimestamp="2025-01-29 12:23:29 +0000 UTC" firstStartedPulling="2025-01-29 12:23:29.643760338 +0000 UTC m=+4.962216762" lastFinishedPulling="2025-01-29 12:23:31.220756683 +0000 UTC m=+6.539213105" observedRunningTime="2025-01-29 12:23:31.756502197 +0000 UTC m=+7.074958623" watchObservedRunningTime="2025-01-29 12:23:31.756611967 +0000 UTC m=+7.075068388" 
Jan 29 12:23:34.074779 systemd[1]: Created slice kubepods-besteffort-pod6d6a09e4_bd7e_4e0e_ae53_0730797826f3.slice - libcontainer container kubepods-besteffort-pod6d6a09e4_bd7e_4e0e_ae53_0730797826f3.slice. Jan 29 12:23:34.076116 kubelet[3066]: I0129 12:23:34.076080 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d6a09e4-bd7e-4e0e-ae53-0730797826f3-typha-certs\") pod \"calico-typha-858894b7b7-8lgvl\" (UID: \"6d6a09e4-bd7e-4e0e-ae53-0730797826f3\") " pod="calico-system/calico-typha-858894b7b7-8lgvl" Jan 29 12:23:34.076520 kubelet[3066]: I0129 12:23:34.076140 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d6a09e4-bd7e-4e0e-ae53-0730797826f3-tigera-ca-bundle\") pod \"calico-typha-858894b7b7-8lgvl\" (UID: \"6d6a09e4-bd7e-4e0e-ae53-0730797826f3\") " pod="calico-system/calico-typha-858894b7b7-8lgvl" Jan 29 12:23:34.076520 kubelet[3066]: I0129 12:23:34.076180 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmsv\" (UniqueName: \"kubernetes.io/projected/6d6a09e4-bd7e-4e0e-ae53-0730797826f3-kube-api-access-qbmsv\") pod \"calico-typha-858894b7b7-8lgvl\" (UID: \"6d6a09e4-bd7e-4e0e-ae53-0730797826f3\") " pod="calico-system/calico-typha-858894b7b7-8lgvl" Jan 29 12:23:34.096211 systemd[1]: Created slice kubepods-besteffort-pod2b4105bd_d2a1_41f6_b1a5_194563fe74e3.slice - libcontainer container kubepods-besteffort-pod2b4105bd_d2a1_41f6_b1a5_194563fe74e3.slice. 
Jan 29 12:23:34.176869 kubelet[3066]: I0129 12:23:34.176794 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-policysync\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.176869 kubelet[3066]: I0129 12:23:34.176856 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-var-run-calico\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177078 kubelet[3066]: I0129 12:23:34.176899 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-xtables-lock\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177078 kubelet[3066]: I0129 12:23:34.176973 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-cni-log-dir\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177078 kubelet[3066]: I0129 12:23:34.177033 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-flexvol-driver-host\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177393 kubelet[3066]: I0129 12:23:34.177094 3066 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nqjv\" (UniqueName: \"kubernetes.io/projected/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-kube-api-access-2nqjv\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177393 kubelet[3066]: I0129 12:23:34.177144 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-tigera-ca-bundle\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177393 kubelet[3066]: I0129 12:23:34.177192 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-lib-modules\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177393 kubelet[3066]: I0129 12:23:34.177235 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-node-certs\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177393 kubelet[3066]: I0129 12:23:34.177278 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-var-lib-calico\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177754 kubelet[3066]: I0129 12:23:34.177319 3066 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-cni-bin-dir\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.177754 kubelet[3066]: I0129 12:23:34.177390 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2b4105bd-d2a1-41f6-b1a5-194563fe74e3-cni-net-dir\") pod \"calico-node-hzrm9\" (UID: \"2b4105bd-d2a1-41f6-b1a5-194563fe74e3\") " pod="calico-system/calico-node-hzrm9" Jan 29 12:23:34.211280 kubelet[3066]: E0129 12:23:34.211199 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whf72" podUID="262d59b9-71cb-45de-b97d-563bbb9e2a62" Jan 29 12:23:34.278142 kubelet[3066]: I0129 12:23:34.278104 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7xc\" (UniqueName: \"kubernetes.io/projected/262d59b9-71cb-45de-b97d-563bbb9e2a62-kube-api-access-ks7xc\") pod \"csi-node-driver-whf72\" (UID: \"262d59b9-71cb-45de-b97d-563bbb9e2a62\") " pod="calico-system/csi-node-driver-whf72" Jan 29 12:23:34.278293 kubelet[3066]: I0129 12:23:34.278192 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/262d59b9-71cb-45de-b97d-563bbb9e2a62-varrun\") pod \"csi-node-driver-whf72\" (UID: \"262d59b9-71cb-45de-b97d-563bbb9e2a62\") " pod="calico-system/csi-node-driver-whf72" Jan 29 12:23:34.278293 kubelet[3066]: I0129 12:23:34.278222 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/262d59b9-71cb-45de-b97d-563bbb9e2a62-socket-dir\") pod \"csi-node-driver-whf72\" (UID: \"262d59b9-71cb-45de-b97d-563bbb9e2a62\") " pod="calico-system/csi-node-driver-whf72" Jan 29 12:23:34.278545 kubelet[3066]: I0129 12:23:34.278519 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262d59b9-71cb-45de-b97d-563bbb9e2a62-kubelet-dir\") pod \"csi-node-driver-whf72\" (UID: \"262d59b9-71cb-45de-b97d-563bbb9e2a62\") " pod="calico-system/csi-node-driver-whf72" Jan 29 12:23:34.278862 kubelet[3066]: E0129 12:23:34.278843 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.278937 kubelet[3066]: W0129 12:23:34.278863 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.278937 kubelet[3066]: E0129 12:23:34.278881 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.279116 kubelet[3066]: E0129 12:23:34.279103 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.279158 kubelet[3066]: W0129 12:23:34.279116 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.279158 kubelet[3066]: E0129 12:23:34.279131 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.279307 kubelet[3066]: E0129 12:23:34.279298 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.279348 kubelet[3066]: W0129 12:23:34.279307 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.279348 kubelet[3066]: E0129 12:23:34.279318 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.279348 kubelet[3066]: I0129 12:23:34.279339 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/262d59b9-71cb-45de-b97d-563bbb9e2a62-registration-dir\") pod \"csi-node-driver-whf72\" (UID: \"262d59b9-71cb-45de-b97d-563bbb9e2a62\") " pod="calico-system/csi-node-driver-whf72" Jan 29 12:23:34.279519 kubelet[3066]: E0129 12:23:34.279504 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.279571 kubelet[3066]: W0129 12:23:34.279523 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.279571 kubelet[3066]: E0129 12:23:34.279558 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.279832 kubelet[3066]: E0129 12:23:34.279818 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.279871 kubelet[3066]: W0129 12:23:34.279833 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.279871 kubelet[3066]: E0129 12:23:34.279845 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.280095 kubelet[3066]: E0129 12:23:34.280063 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.280095 kubelet[3066]: W0129 12:23:34.280075 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.280095 kubelet[3066]: E0129 12:23:34.280089 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.280978 kubelet[3066]: E0129 12:23:34.280936 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.280978 kubelet[3066]: W0129 12:23:34.280949 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.280978 kubelet[3066]: E0129 12:23:34.280963 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.285748 kubelet[3066]: E0129 12:23:34.285732 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.285748 kubelet[3066]: W0129 12:23:34.285746 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.285862 kubelet[3066]: E0129 12:23:34.285759 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.379469 containerd[1825]: time="2025-01-29T12:23:34.379269571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-858894b7b7-8lgvl,Uid:6d6a09e4-bd7e-4e0e-ae53-0730797826f3,Namespace:calico-system,Attempt:0,}" Jan 29 12:23:34.380804 kubelet[3066]: E0129 12:23:34.380793 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.380804 kubelet[3066]: W0129 12:23:34.380804 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.380875 kubelet[3066]: E0129 12:23:34.380815 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.380961 kubelet[3066]: E0129 12:23:34.380955 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.380982 kubelet[3066]: W0129 12:23:34.380962 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.380982 kubelet[3066]: E0129 12:23:34.380969 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.381085 kubelet[3066]: E0129 12:23:34.381079 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381112 kubelet[3066]: W0129 12:23:34.381085 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381112 kubelet[3066]: E0129 12:23:34.381094 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.381206 kubelet[3066]: E0129 12:23:34.381199 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381229 kubelet[3066]: W0129 12:23:34.381208 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381229 kubelet[3066]: E0129 12:23:34.381218 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.381310 kubelet[3066]: E0129 12:23:34.381304 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381330 kubelet[3066]: W0129 12:23:34.381311 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381330 kubelet[3066]: E0129 12:23:34.381320 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.381416 kubelet[3066]: E0129 12:23:34.381411 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381438 kubelet[3066]: W0129 12:23:34.381417 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381438 kubelet[3066]: E0129 12:23:34.381426 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.381540 kubelet[3066]: E0129 12:23:34.381532 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381569 kubelet[3066]: W0129 12:23:34.381541 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381569 kubelet[3066]: E0129 12:23:34.381550 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.381651 kubelet[3066]: E0129 12:23:34.381646 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381674 kubelet[3066]: W0129 12:23:34.381651 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381674 kubelet[3066]: E0129 12:23:34.381658 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.381762 kubelet[3066]: E0129 12:23:34.381756 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381781 kubelet[3066]: W0129 12:23:34.381763 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381781 kubelet[3066]: E0129 12:23:34.381771 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.381865 kubelet[3066]: E0129 12:23:34.381860 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.381896 kubelet[3066]: W0129 12:23:34.381865 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.381896 kubelet[3066]: E0129 12:23:34.381871 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.381983 kubelet[3066]: E0129 12:23:34.381974 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382018 kubelet[3066]: W0129 12:23:34.381984 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382018 kubelet[3066]: E0129 12:23:34.381995 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.382093 kubelet[3066]: E0129 12:23:34.382087 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382093 kubelet[3066]: W0129 12:23:34.382092 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382153 kubelet[3066]: E0129 12:23:34.382101 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.382211 kubelet[3066]: E0129 12:23:34.382205 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382211 kubelet[3066]: W0129 12:23:34.382210 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382267 kubelet[3066]: E0129 12:23:34.382221 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.382301 kubelet[3066]: E0129 12:23:34.382294 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382336 kubelet[3066]: W0129 12:23:34.382301 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382336 kubelet[3066]: E0129 12:23:34.382326 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.382387 kubelet[3066]: E0129 12:23:34.382384 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382421 kubelet[3066]: W0129 12:23:34.382389 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382421 kubelet[3066]: E0129 12:23:34.382398 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.382489 kubelet[3066]: E0129 12:23:34.382483 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382489 kubelet[3066]: W0129 12:23:34.382488 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382554 kubelet[3066]: E0129 12:23:34.382497 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.382622 kubelet[3066]: E0129 12:23:34.382616 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382622 kubelet[3066]: W0129 12:23:34.382621 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382685 kubelet[3066]: E0129 12:23:34.382630 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.382729 kubelet[3066]: E0129 12:23:34.382723 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382729 kubelet[3066]: W0129 12:23:34.382729 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382790 kubelet[3066]: E0129 12:23:34.382737 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.382836 kubelet[3066]: E0129 12:23:34.382829 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382836 kubelet[3066]: W0129 12:23:34.382835 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.382893 kubelet[3066]: E0129 12:23:34.382850 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.382939 kubelet[3066]: E0129 12:23:34.382933 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.382939 kubelet[3066]: W0129 12:23:34.382938 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.383005 kubelet[3066]: E0129 12:23:34.382947 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.383138 kubelet[3066]: E0129 12:23:34.383131 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.383138 kubelet[3066]: W0129 12:23:34.383137 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.383203 kubelet[3066]: E0129 12:23:34.383145 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.383250 kubelet[3066]: E0129 12:23:34.383244 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.383285 kubelet[3066]: W0129 12:23:34.383250 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.383285 kubelet[3066]: E0129 12:23:34.383259 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.383351 kubelet[3066]: E0129 12:23:34.383345 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.383386 kubelet[3066]: W0129 12:23:34.383350 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.383386 kubelet[3066]: E0129 12:23:34.383358 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.383447 kubelet[3066]: E0129 12:23:34.383441 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.383447 kubelet[3066]: W0129 12:23:34.383447 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.383513 kubelet[3066]: E0129 12:23:34.383453 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:34.383567 kubelet[3066]: E0129 12:23:34.383561 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.383567 kubelet[3066]: W0129 12:23:34.383567 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.383627 kubelet[3066]: E0129 12:23:34.383573 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.388053 kubelet[3066]: E0129 12:23:34.388043 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:34.388053 kubelet[3066]: W0129 12:23:34.388050 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:34.388117 kubelet[3066]: E0129 12:23:34.388060 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:34.398358 containerd[1825]: time="2025-01-29T12:23:34.398311763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:34.398358 containerd[1825]: time="2025-01-29T12:23:34.398345950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:34.398358 containerd[1825]: time="2025-01-29T12:23:34.398354201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:34.398493 containerd[1825]: time="2025-01-29T12:23:34.398416031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:34.398528 containerd[1825]: time="2025-01-29T12:23:34.398512493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzrm9,Uid:2b4105bd-d2a1-41f6-b1a5-194563fe74e3,Namespace:calico-system,Attempt:0,}" Jan 29 12:23:34.407326 containerd[1825]: time="2025-01-29T12:23:34.407285812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:34.407510 containerd[1825]: time="2025-01-29T12:23:34.407495843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:34.407548 containerd[1825]: time="2025-01-29T12:23:34.407509014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:34.407576 containerd[1825]: time="2025-01-29T12:23:34.407560061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:34.415749 systemd[1]: Started cri-containerd-6ca359aa80a2c4e8197dd6ce4fa64986d42b551d729ab235151d46a0557c5db1.scope - libcontainer container 6ca359aa80a2c4e8197dd6ce4fa64986d42b551d729ab235151d46a0557c5db1. Jan 29 12:23:34.417413 systemd[1]: Started cri-containerd-d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89.scope - libcontainer container d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89. 
Jan 29 12:23:34.427878 containerd[1825]: time="2025-01-29T12:23:34.427855666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzrm9,Uid:2b4105bd-d2a1-41f6-b1a5-194563fe74e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89\"" Jan 29 12:23:34.428658 containerd[1825]: time="2025-01-29T12:23:34.428642706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 12:23:34.439343 containerd[1825]: time="2025-01-29T12:23:34.439321888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-858894b7b7-8lgvl,Uid:6d6a09e4-bd7e-4e0e-ae53-0730797826f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ca359aa80a2c4e8197dd6ce4fa64986d42b551d729ab235151d46a0557c5db1\"" Jan 29 12:23:35.721210 kubelet[3066]: E0129 12:23:35.721085 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whf72" podUID="262d59b9-71cb-45de-b97d-563bbb9e2a62" Jan 29 12:23:35.985829 kubelet[3066]: E0129 12:23:35.985725 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.985829 kubelet[3066]: W0129 12:23:35.985748 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.985829 kubelet[3066]: E0129 12:23:35.985766 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:35.986009 kubelet[3066]: E0129 12:23:35.985985 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.986009 kubelet[3066]: W0129 12:23:35.986001 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.986117 kubelet[3066]: E0129 12:23:35.986017 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:35.986255 kubelet[3066]: E0129 12:23:35.986237 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.986255 kubelet[3066]: W0129 12:23:35.986249 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.986346 kubelet[3066]: E0129 12:23:35.986260 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:35.986457 kubelet[3066]: E0129 12:23:35.986445 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.986500 kubelet[3066]: W0129 12:23:35.986456 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.986500 kubelet[3066]: E0129 12:23:35.986467 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:35.986727 kubelet[3066]: E0129 12:23:35.986683 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.986727 kubelet[3066]: W0129 12:23:35.986698 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.986727 kubelet[3066]: E0129 12:23:35.986711 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:35.986889 kubelet[3066]: E0129 12:23:35.986879 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.986889 kubelet[3066]: W0129 12:23:35.986888 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.986973 kubelet[3066]: E0129 12:23:35.986899 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:35.987093 kubelet[3066]: E0129 12:23:35.987053 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.987093 kubelet[3066]: W0129 12:23:35.987063 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.987093 kubelet[3066]: E0129 12:23:35.987072 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:35.987270 kubelet[3066]: E0129 12:23:35.987240 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.987270 kubelet[3066]: W0129 12:23:35.987250 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.987270 kubelet[3066]: E0129 12:23:35.987261 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:35.987453 kubelet[3066]: E0129 12:23:35.987442 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.987501 kubelet[3066]: W0129 12:23:35.987453 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.987501 kubelet[3066]: E0129 12:23:35.987464 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:35.987711 kubelet[3066]: E0129 12:23:35.987666 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.987711 kubelet[3066]: W0129 12:23:35.987680 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.987711 kubelet[3066]: E0129 12:23:35.987691 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:35.987898 kubelet[3066]: E0129 12:23:35.987874 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.987898 kubelet[3066]: W0129 12:23:35.987884 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.987898 kubelet[3066]: E0129 12:23:35.987895 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:35.988172 kubelet[3066]: E0129 12:23:35.988128 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.988172 kubelet[3066]: W0129 12:23:35.988139 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.988172 kubelet[3066]: E0129 12:23:35.988150 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:35.988359 kubelet[3066]: E0129 12:23:35.988328 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.988359 kubelet[3066]: W0129 12:23:35.988339 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.988359 kubelet[3066]: E0129 12:23:35.988350 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:23:35.988614 kubelet[3066]: E0129 12:23:35.988596 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.988614 kubelet[3066]: W0129 12:23:35.988614 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.988731 kubelet[3066]: E0129 12:23:35.988628 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:35.988824 kubelet[3066]: E0129 12:23:35.988811 3066 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:23:35.988873 kubelet[3066]: W0129 12:23:35.988823 3066 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:23:35.988873 kubelet[3066]: E0129 12:23:35.988835 3066 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:23:36.520874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325505207.mount: Deactivated successfully. 
Jan 29 12:23:36.590828 containerd[1825]: time="2025-01-29T12:23:36.590779398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:36.591182 containerd[1825]: time="2025-01-29T12:23:36.591142118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 12:23:36.591517 containerd[1825]: time="2025-01-29T12:23:36.591480484Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:36.608586 containerd[1825]: time="2025-01-29T12:23:36.608568387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:36.608919 containerd[1825]: time="2025-01-29T12:23:36.608877866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.180215502s" Jan 29 12:23:36.608919 containerd[1825]: time="2025-01-29T12:23:36.608892681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 12:23:36.609341 containerd[1825]: time="2025-01-29T12:23:36.609302018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 12:23:36.609764 containerd[1825]: time="2025-01-29T12:23:36.609751999Z" level=info msg="CreateContainer within 
sandbox \"d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:23:36.614534 containerd[1825]: time="2025-01-29T12:23:36.614516184Z" level=info msg="CreateContainer within sandbox \"d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41\"" Jan 29 12:23:36.614849 containerd[1825]: time="2025-01-29T12:23:36.614809547Z" level=info msg="StartContainer for \"b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41\"" Jan 29 12:23:36.638847 systemd[1]: Started cri-containerd-b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41.scope - libcontainer container b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41. Jan 29 12:23:36.651078 containerd[1825]: time="2025-01-29T12:23:36.651057247Z" level=info msg="StartContainer for \"b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41\" returns successfully" Jan 29 12:23:36.656043 systemd[1]: cri-containerd-b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41.scope: Deactivated successfully. Jan 29 12:23:36.877870 containerd[1825]: time="2025-01-29T12:23:36.877830915Z" level=info msg="shim disconnected" id=b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41 namespace=k8s.io Jan 29 12:23:36.877870 containerd[1825]: time="2025-01-29T12:23:36.877865625Z" level=warning msg="cleaning up after shim disconnected" id=b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41 namespace=k8s.io Jan 29 12:23:36.877870 containerd[1825]: time="2025-01-29T12:23:36.877872007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:23:37.511332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8839eb2bfe0474fc562a8041363d9700fc50ff780ae538561cb72a7b1136a41-rootfs.mount: Deactivated successfully. 
Jan 29 12:23:37.721560 kubelet[3066]: E0129 12:23:37.721423 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whf72" podUID="262d59b9-71cb-45de-b97d-563bbb9e2a62" Jan 29 12:23:38.182097 update_engine[1810]: I20250129 12:23:38.182060 1810 update_attempter.cc:509] Updating boot flags... Jan 29 12:23:38.211551 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3782) Jan 29 12:23:38.239605 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3784) Jan 29 12:23:38.260548 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3784) Jan 29 12:23:38.509257 containerd[1825]: time="2025-01-29T12:23:38.509190883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:38.509573 containerd[1825]: time="2025-01-29T12:23:38.509414007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 12:23:38.509747 containerd[1825]: time="2025-01-29T12:23:38.509732676Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:38.510779 containerd[1825]: time="2025-01-29T12:23:38.510766874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:38.511168 containerd[1825]: time="2025-01-29T12:23:38.511155353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.901839163s" Jan 29 12:23:38.511192 containerd[1825]: time="2025-01-29T12:23:38.511172229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 12:23:38.511646 containerd[1825]: time="2025-01-29T12:23:38.511632152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:23:38.514493 containerd[1825]: time="2025-01-29T12:23:38.514475926Z" level=info msg="CreateContainer within sandbox \"6ca359aa80a2c4e8197dd6ce4fa64986d42b551d729ab235151d46a0557c5db1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 12:23:38.518928 containerd[1825]: time="2025-01-29T12:23:38.518886606Z" level=info msg="CreateContainer within sandbox \"6ca359aa80a2c4e8197dd6ce4fa64986d42b551d729ab235151d46a0557c5db1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"caeb3ecd90ce8198cfa4f9d5fd6cfe73f93670e93546d1304781d059b656537e\"" Jan 29 12:23:38.519131 containerd[1825]: time="2025-01-29T12:23:38.519115241Z" level=info msg="StartContainer for \"caeb3ecd90ce8198cfa4f9d5fd6cfe73f93670e93546d1304781d059b656537e\"" Jan 29 12:23:38.547709 systemd[1]: Started cri-containerd-caeb3ecd90ce8198cfa4f9d5fd6cfe73f93670e93546d1304781d059b656537e.scope - libcontainer container caeb3ecd90ce8198cfa4f9d5fd6cfe73f93670e93546d1304781d059b656537e. 
Jan 29 12:23:38.570913 containerd[1825]: time="2025-01-29T12:23:38.570889077Z" level=info msg="StartContainer for \"caeb3ecd90ce8198cfa4f9d5fd6cfe73f93670e93546d1304781d059b656537e\" returns successfully" Jan 29 12:23:38.781742 kubelet[3066]: I0129 12:23:38.781654 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-858894b7b7-8lgvl" podStartSLOduration=0.709909777 podStartE2EDuration="4.78163949s" podCreationTimestamp="2025-01-29 12:23:34 +0000 UTC" firstStartedPulling="2025-01-29 12:23:34.439812792 +0000 UTC m=+9.758269211" lastFinishedPulling="2025-01-29 12:23:38.511542503 +0000 UTC m=+13.829998924" observedRunningTime="2025-01-29 12:23:38.781493744 +0000 UTC m=+14.099950173" watchObservedRunningTime="2025-01-29 12:23:38.78163949 +0000 UTC m=+14.100095919" Jan 29 12:23:39.721902 kubelet[3066]: E0129 12:23:39.721780 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whf72" podUID="262d59b9-71cb-45de-b97d-563bbb9e2a62" Jan 29 12:23:39.773712 kubelet[3066]: I0129 12:23:39.773661 3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:23:41.720893 kubelet[3066]: E0129 12:23:41.720870 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whf72" podUID="262d59b9-71cb-45de-b97d-563bbb9e2a62" Jan 29 12:23:41.997041 containerd[1825]: time="2025-01-29T12:23:41.996956957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:41.997229 containerd[1825]: 
time="2025-01-29T12:23:41.997074394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:23:41.997461 containerd[1825]: time="2025-01-29T12:23:41.997418716Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:41.998874 containerd[1825]: time="2025-01-29T12:23:41.998833161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:41.999162 containerd[1825]: time="2025-01-29T12:23:41.999121904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.487471194s" Jan 29 12:23:41.999162 containerd[1825]: time="2025-01-29T12:23:41.999137146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:23:42.000223 containerd[1825]: time="2025-01-29T12:23:42.000179860Z" level=info msg="CreateContainer within sandbox \"d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:23:42.004544 containerd[1825]: time="2025-01-29T12:23:42.004522976Z" level=info msg="CreateContainer within sandbox \"d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f\"" Jan 29 12:23:42.004747 
containerd[1825]: time="2025-01-29T12:23:42.004733500Z" level=info msg="StartContainer for \"95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f\"" Jan 29 12:23:42.035006 systemd[1]: Started cri-containerd-95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f.scope - libcontainer container 95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f. Jan 29 12:23:42.096014 containerd[1825]: time="2025-01-29T12:23:42.095977230Z" level=info msg="StartContainer for \"95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f\" returns successfully" Jan 29 12:23:42.679052 systemd[1]: cri-containerd-95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f.scope: Deactivated successfully. Jan 29 12:23:42.689627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f-rootfs.mount: Deactivated successfully. Jan 29 12:23:42.733259 kubelet[3066]: I0129 12:23:42.733158 3066 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 12:23:42.789080 systemd[1]: Created slice kubepods-besteffort-pod47e07d70_9f81_457c_b809_e3461b33a8b3.slice - libcontainer container kubepods-besteffort-pod47e07d70_9f81_457c_b809_e3461b33a8b3.slice. Jan 29 12:23:42.795451 systemd[1]: Created slice kubepods-burstable-pod2246e19b_2c9c_4027_a711_4fb712a9ea9e.slice - libcontainer container kubepods-burstable-pod2246e19b_2c9c_4027_a711_4fb712a9ea9e.slice. Jan 29 12:23:42.801360 systemd[1]: Created slice kubepods-burstable-podbe14fd9c_84ab_4c38_a4a3_407c709a8f57.slice - libcontainer container kubepods-burstable-podbe14fd9c_84ab_4c38_a4a3_407c709a8f57.slice. Jan 29 12:23:42.806001 systemd[1]: Created slice kubepods-besteffort-pode1e7c23d_d84d_4ad1_b510_8659281971e0.slice - libcontainer container kubepods-besteffort-pode1e7c23d_d84d_4ad1_b510_8659281971e0.slice. 
Jan 29 12:23:42.810114 systemd[1]: Created slice kubepods-besteffort-pod82e8b816_5e3d_47c1_8fd2_31578a33896d.slice - libcontainer container kubepods-besteffort-pod82e8b816_5e3d_47c1_8fd2_31578a33896d.slice. Jan 29 12:23:42.942444 kubelet[3066]: I0129 12:23:42.942192 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhvs\" (UniqueName: \"kubernetes.io/projected/be14fd9c-84ab-4c38-a4a3-407c709a8f57-kube-api-access-mnhvs\") pod \"coredns-6f6b679f8f-lq4x8\" (UID: \"be14fd9c-84ab-4c38-a4a3-407c709a8f57\") " pod="kube-system/coredns-6f6b679f8f-lq4x8" Jan 29 12:23:42.942444 kubelet[3066]: I0129 12:23:42.942304 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2246e19b-2c9c-4027-a711-4fb712a9ea9e-config-volume\") pod \"coredns-6f6b679f8f-hhhb4\" (UID: \"2246e19b-2c9c-4027-a711-4fb712a9ea9e\") " pod="kube-system/coredns-6f6b679f8f-hhhb4" Jan 29 12:23:42.942444 kubelet[3066]: I0129 12:23:42.942364 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx7tl\" (UniqueName: \"kubernetes.io/projected/2246e19b-2c9c-4027-a711-4fb712a9ea9e-kube-api-access-bx7tl\") pod \"coredns-6f6b679f8f-hhhb4\" (UID: \"2246e19b-2c9c-4027-a711-4fb712a9ea9e\") " pod="kube-system/coredns-6f6b679f8f-hhhb4" Jan 29 12:23:42.942444 kubelet[3066]: I0129 12:23:42.942413 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be14fd9c-84ab-4c38-a4a3-407c709a8f57-config-volume\") pod \"coredns-6f6b679f8f-lq4x8\" (UID: \"be14fd9c-84ab-4c38-a4a3-407c709a8f57\") " pod="kube-system/coredns-6f6b679f8f-lq4x8" Jan 29 12:23:42.943304 kubelet[3066]: I0129 12:23:42.942463 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z22t6\" (UniqueName: \"kubernetes.io/projected/e1e7c23d-d84d-4ad1-b510-8659281971e0-kube-api-access-z22t6\") pod \"calico-apiserver-c5d654d57-hvmzk\" (UID: \"e1e7c23d-d84d-4ad1-b510-8659281971e0\") " pod="calico-apiserver/calico-apiserver-c5d654d57-hvmzk" Jan 29 12:23:42.943304 kubelet[3066]: I0129 12:23:42.942606 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/82e8b816-5e3d-47c1-8fd2-31578a33896d-calico-apiserver-certs\") pod \"calico-apiserver-c5d654d57-6z9pz\" (UID: \"82e8b816-5e3d-47c1-8fd2-31578a33896d\") " pod="calico-apiserver/calico-apiserver-c5d654d57-6z9pz" Jan 29 12:23:42.943304 kubelet[3066]: I0129 12:23:42.942702 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1e7c23d-d84d-4ad1-b510-8659281971e0-calico-apiserver-certs\") pod \"calico-apiserver-c5d654d57-hvmzk\" (UID: \"e1e7c23d-d84d-4ad1-b510-8659281971e0\") " pod="calico-apiserver/calico-apiserver-c5d654d57-hvmzk" Jan 29 12:23:42.943304 kubelet[3066]: I0129 12:23:42.942763 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmx8h\" (UniqueName: \"kubernetes.io/projected/47e07d70-9f81-457c-b809-e3461b33a8b3-kube-api-access-kmx8h\") pod \"calico-kube-controllers-bb5789b9c-zwg2m\" (UID: \"47e07d70-9f81-457c-b809-e3461b33a8b3\") " pod="calico-system/calico-kube-controllers-bb5789b9c-zwg2m" Jan 29 12:23:42.943304 kubelet[3066]: I0129 12:23:42.942818 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr7ww\" (UniqueName: \"kubernetes.io/projected/82e8b816-5e3d-47c1-8fd2-31578a33896d-kube-api-access-dr7ww\") pod \"calico-apiserver-c5d654d57-6z9pz\" (UID: \"82e8b816-5e3d-47c1-8fd2-31578a33896d\") " 
pod="calico-apiserver/calico-apiserver-c5d654d57-6z9pz" Jan 29 12:23:42.943913 kubelet[3066]: I0129 12:23:42.942867 3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47e07d70-9f81-457c-b809-e3461b33a8b3-tigera-ca-bundle\") pod \"calico-kube-controllers-bb5789b9c-zwg2m\" (UID: \"47e07d70-9f81-457c-b809-e3461b33a8b3\") " pod="calico-system/calico-kube-controllers-bb5789b9c-zwg2m" Jan 29 12:23:43.105037 containerd[1825]: time="2025-01-29T12:23:43.104997637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lq4x8,Uid:be14fd9c-84ab-4c38-a4a3-407c709a8f57,Namespace:kube-system,Attempt:0,}" Jan 29 12:23:43.108606 containerd[1825]: time="2025-01-29T12:23:43.108547580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-hvmzk,Uid:e1e7c23d-d84d-4ad1-b510-8659281971e0,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:23:43.113169 containerd[1825]: time="2025-01-29T12:23:43.113106385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-6z9pz,Uid:82e8b816-5e3d-47c1-8fd2-31578a33896d,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:23:43.359862 containerd[1825]: time="2025-01-29T12:23:43.359819938Z" level=info msg="shim disconnected" id=95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f namespace=k8s.io Jan 29 12:23:43.359862 containerd[1825]: time="2025-01-29T12:23:43.359858189Z" level=warning msg="cleaning up after shim disconnected" id=95eb28b4e3b62f201f58aa4f3e073dd73c023a4d34f49a31ee762fe7fe035e7f namespace=k8s.io Jan 29 12:23:43.359862 containerd[1825]: time="2025-01-29T12:23:43.359863941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:23:43.377453 containerd[1825]: time="2025-01-29T12:23:43.377424555Z" level=error msg="Failed to destroy network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377582 containerd[1825]: time="2025-01-29T12:23:43.377484768Z" level=error msg="Failed to destroy network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377582 containerd[1825]: time="2025-01-29T12:23:43.377505618Z" level=error msg="Failed to destroy network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377636 containerd[1825]: time="2025-01-29T12:23:43.377619637Z" level=error msg="encountered an error cleaning up failed sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377657 containerd[1825]: time="2025-01-29T12:23:43.377637218Z" level=error msg="encountered an error cleaning up failed sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377657 containerd[1825]: time="2025-01-29T12:23:43.377649257Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-lq4x8,Uid:be14fd9c-84ab-4c38-a4a3-407c709a8f57,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377695 containerd[1825]: time="2025-01-29T12:23:43.377661361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-6z9pz,Uid:82e8b816-5e3d-47c1-8fd2-31578a33896d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377695 containerd[1825]: time="2025-01-29T12:23:43.377679819Z" level=error msg="encountered an error cleaning up failed sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377827 containerd[1825]: time="2025-01-29T12:23:43.377703037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-hvmzk,Uid:e1e7c23d-d84d-4ad1-b510-8659281971e0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
12:23:43.377907 kubelet[3066]: E0129 12:23:43.377860 3066 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.377936 kubelet[3066]: E0129 12:23:43.377921 3066 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lq4x8" Jan 29 12:23:43.377961 kubelet[3066]: E0129 12:23:43.377933 3066 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lq4x8" Jan 29 12:23:43.377961 kubelet[3066]: E0129 12:23:43.377864 3066 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.378025 kubelet[3066]: E0129 12:23:43.377869 3066 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.378025 kubelet[3066]: E0129 12:23:43.377967 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lq4x8_kube-system(be14fd9c-84ab-4c38-a4a3-407c709a8f57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lq4x8_kube-system(be14fd9c-84ab-4c38-a4a3-407c709a8f57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lq4x8" podUID="be14fd9c-84ab-4c38-a4a3-407c709a8f57" Jan 29 12:23:43.378025 kubelet[3066]: E0129 12:23:43.377972 3066 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5d654d57-hvmzk" Jan 29 12:23:43.378128 kubelet[3066]: E0129 12:23:43.377985 3066 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5d654d57-6z9pz" Jan 29 12:23:43.378128 kubelet[3066]: E0129 12:23:43.377991 3066 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5d654d57-hvmzk" Jan 29 12:23:43.378128 kubelet[3066]: E0129 12:23:43.378001 3066 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c5d654d57-6z9pz" Jan 29 12:23:43.378196 kubelet[3066]: E0129 12:23:43.378005 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5d654d57-hvmzk_calico-apiserver(e1e7c23d-d84d-4ad1-b510-8659281971e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5d654d57-hvmzk_calico-apiserver(e1e7c23d-d84d-4ad1-b510-8659281971e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5d654d57-hvmzk" podUID="e1e7c23d-d84d-4ad1-b510-8659281971e0" Jan 29 12:23:43.378196 kubelet[3066]: 
E0129 12:23:43.378028 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c5d654d57-6z9pz_calico-apiserver(82e8b816-5e3d-47c1-8fd2-31578a33896d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c5d654d57-6z9pz_calico-apiserver(82e8b816-5e3d-47c1-8fd2-31578a33896d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5d654d57-6z9pz" podUID="82e8b816-5e3d-47c1-8fd2-31578a33896d" Jan 29 12:23:43.393289 containerd[1825]: time="2025-01-29T12:23:43.393237535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bb5789b9c-zwg2m,Uid:47e07d70-9f81-457c-b809-e3461b33a8b3,Namespace:calico-system,Attempt:0,}" Jan 29 12:23:43.398782 containerd[1825]: time="2025-01-29T12:23:43.398751929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhb4,Uid:2246e19b-2c9c-4027-a711-4fb712a9ea9e,Namespace:kube-system,Attempt:0,}" Jan 29 12:23:43.422776 containerd[1825]: time="2025-01-29T12:23:43.422747994Z" level=error msg="Failed to destroy network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.422990 containerd[1825]: time="2025-01-29T12:23:43.422951496Z" level=error msg="encountered an error cleaning up failed sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.422990 containerd[1825]: time="2025-01-29T12:23:43.422979503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bb5789b9c-zwg2m,Uid:47e07d70-9f81-457c-b809-e3461b33a8b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.423126 kubelet[3066]: E0129 12:23:43.423105 3066 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.423166 kubelet[3066]: E0129 12:23:43.423140 3066 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bb5789b9c-zwg2m" Jan 29 12:23:43.423166 kubelet[3066]: E0129 12:23:43.423152 3066 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bb5789b9c-zwg2m" Jan 29 12:23:43.423210 kubelet[3066]: E0129 12:23:43.423176 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bb5789b9c-zwg2m_calico-system(47e07d70-9f81-457c-b809-e3461b33a8b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bb5789b9c-zwg2m_calico-system(47e07d70-9f81-457c-b809-e3461b33a8b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bb5789b9c-zwg2m" podUID="47e07d70-9f81-457c-b809-e3461b33a8b3" Jan 29 12:23:43.427018 containerd[1825]: time="2025-01-29T12:23:43.426999795Z" level=error msg="Failed to destroy network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.427172 containerd[1825]: time="2025-01-29T12:23:43.427160546Z" level=error msg="encountered an error cleaning up failed sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.427199 containerd[1825]: time="2025-01-29T12:23:43.427185007Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhb4,Uid:2246e19b-2c9c-4027-a711-4fb712a9ea9e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.427323 kubelet[3066]: E0129 12:23:43.427303 3066 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.427359 kubelet[3066]: E0129 12:23:43.427341 3066 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hhhb4" Jan 29 12:23:43.427359 kubelet[3066]: E0129 12:23:43.427353 3066 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hhhb4" Jan 29 12:23:43.427398 kubelet[3066]: E0129 12:23:43.427376 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-6f6b679f8f-hhhb4_kube-system(2246e19b-2c9c-4027-a711-4fb712a9ea9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hhhb4_kube-system(2246e19b-2c9c-4027-a711-4fb712a9ea9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hhhb4" podUID="2246e19b-2c9c-4027-a711-4fb712a9ea9e" Jan 29 12:23:43.736213 systemd[1]: Created slice kubepods-besteffort-pod262d59b9_71cb_45de_b97d_563bbb9e2a62.slice - libcontainer container kubepods-besteffort-pod262d59b9_71cb_45de_b97d_563bbb9e2a62.slice. Jan 29 12:23:43.742290 containerd[1825]: time="2025-01-29T12:23:43.742178008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whf72,Uid:262d59b9-71cb-45de-b97d-563bbb9e2a62,Namespace:calico-system,Attempt:0,}" Jan 29 12:23:43.772633 containerd[1825]: time="2025-01-29T12:23:43.772580578Z" level=error msg="Failed to destroy network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.772782 containerd[1825]: time="2025-01-29T12:23:43.772769054Z" level=error msg="encountered an error cleaning up failed sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.772819 containerd[1825]: 
time="2025-01-29T12:23:43.772804818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whf72,Uid:262d59b9-71cb-45de-b97d-563bbb9e2a62,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.772996 kubelet[3066]: E0129 12:23:43.772948 3066 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.772996 kubelet[3066]: E0129 12:23:43.772986 3066 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whf72" Jan 29 12:23:43.773210 kubelet[3066]: E0129 12:23:43.773000 3066 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whf72" Jan 29 12:23:43.773210 kubelet[3066]: E0129 12:23:43.773029 3066 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-whf72_calico-system(262d59b9-71cb-45de-b97d-563bbb9e2a62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-whf72_calico-system(262d59b9-71cb-45de-b97d-563bbb9e2a62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whf72" podUID="262d59b9-71cb-45de-b97d-563bbb9e2a62" Jan 29 12:23:43.783895 kubelet[3066]: I0129 12:23:43.783841 3066 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:23:43.784227 containerd[1825]: time="2025-01-29T12:23:43.784213458Z" level=info msg="StopPodSandbox for \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\"" Jan 29 12:23:43.784289 kubelet[3066]: I0129 12:23:43.784280 3066 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:23:43.784370 containerd[1825]: time="2025-01-29T12:23:43.784358883Z" level=info msg="Ensure that sandbox b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc in task-service has been cleanup successfully" Jan 29 12:23:43.784561 containerd[1825]: time="2025-01-29T12:23:43.784545533Z" level=info msg="StopPodSandbox for \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\"" Jan 29 12:23:43.784665 containerd[1825]: time="2025-01-29T12:23:43.784650478Z" level=info msg="Ensure that sandbox a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb in task-service has been cleanup successfully" Jan 29 
12:23:43.784846 kubelet[3066]: I0129 12:23:43.784833 3066 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:23:43.785092 containerd[1825]: time="2025-01-29T12:23:43.785078384Z" level=info msg="StopPodSandbox for \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\"" Jan 29 12:23:43.785188 containerd[1825]: time="2025-01-29T12:23:43.785172588Z" level=info msg="Ensure that sandbox 18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843 in task-service has been cleanup successfully" Jan 29 12:23:43.786547 kubelet[3066]: I0129 12:23:43.786519 3066 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:23:43.786672 containerd[1825]: time="2025-01-29T12:23:43.786566532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:23:43.786876 containerd[1825]: time="2025-01-29T12:23:43.786860466Z" level=info msg="StopPodSandbox for \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\"" Jan 29 12:23:43.787148 containerd[1825]: time="2025-01-29T12:23:43.786993615Z" level=info msg="Ensure that sandbox 56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba in task-service has been cleanup successfully" Jan 29 12:23:43.787257 kubelet[3066]: I0129 12:23:43.787239 3066 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:23:43.787703 containerd[1825]: time="2025-01-29T12:23:43.787681668Z" level=info msg="StopPodSandbox for \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\"" Jan 29 12:23:43.787849 containerd[1825]: time="2025-01-29T12:23:43.787832409Z" level=info msg="Ensure that sandbox 652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e in task-service has been 
cleanup successfully" Jan 29 12:23:43.787985 kubelet[3066]: I0129 12:23:43.787973 3066 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:23:43.788354 containerd[1825]: time="2025-01-29T12:23:43.788325739Z" level=info msg="StopPodSandbox for \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\"" Jan 29 12:23:43.788549 containerd[1825]: time="2025-01-29T12:23:43.788522756Z" level=info msg="Ensure that sandbox bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8 in task-service has been cleanup successfully" Jan 29 12:23:43.805045 containerd[1825]: time="2025-01-29T12:23:43.805012958Z" level=error msg="StopPodSandbox for \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\" failed" error="failed to destroy network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.805209 kubelet[3066]: E0129 12:23:43.805183 3066 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:23:43.805269 kubelet[3066]: E0129 12:23:43.805227 3066 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc"} Jan 29 12:23:43.805312 kubelet[3066]: E0129 12:23:43.805277 3066 kuberuntime_manager.go:1077] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"262d59b9-71cb-45de-b97d-563bbb9e2a62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:23:43.805312 kubelet[3066]: E0129 12:23:43.805296 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"262d59b9-71cb-45de-b97d-563bbb9e2a62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whf72" podUID="262d59b9-71cb-45de-b97d-563bbb9e2a62" Jan 29 12:23:43.805459 containerd[1825]: time="2025-01-29T12:23:43.805441447Z" level=error msg="StopPodSandbox for \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\" failed" error="failed to destroy network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.805518 containerd[1825]: time="2025-01-29T12:23:43.805500578Z" level=error msg="StopPodSandbox for \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\" failed" error="failed to destroy network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.805560 kubelet[3066]: E0129 12:23:43.805534 3066 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:23:43.805587 kubelet[3066]: E0129 12:23:43.805568 3066 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb"} Jan 29 12:23:43.805609 kubelet[3066]: E0129 12:23:43.805588 3066 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1e7c23d-d84d-4ad1-b510-8659281971e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:23:43.805609 kubelet[3066]: E0129 12:23:43.805593 3066 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:23:43.805676 kubelet[3066]: E0129 12:23:43.805603 
3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1e7c23d-d84d-4ad1-b510-8659281971e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5d654d57-hvmzk" podUID="e1e7c23d-d84d-4ad1-b510-8659281971e0" Jan 29 12:23:43.805676 kubelet[3066]: E0129 12:23:43.805618 3066 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843"} Jan 29 12:23:43.805676 kubelet[3066]: E0129 12:23:43.805642 3066 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be14fd9c-84ab-4c38-a4a3-407c709a8f57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:23:43.805676 kubelet[3066]: E0129 12:23:43.805656 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be14fd9c-84ab-4c38-a4a3-407c709a8f57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lq4x8" 
podUID="be14fd9c-84ab-4c38-a4a3-407c709a8f57" Jan 29 12:23:43.806867 containerd[1825]: time="2025-01-29T12:23:43.806844993Z" level=error msg="StopPodSandbox for \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\" failed" error="failed to destroy network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.806946 kubelet[3066]: E0129 12:23:43.806929 3066 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:23:43.806978 kubelet[3066]: E0129 12:23:43.806952 3066 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba"} Jan 29 12:23:43.806978 kubelet[3066]: E0129 12:23:43.806972 3066 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2246e19b-2c9c-4027-a711-4fb712a9ea9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:23:43.807048 kubelet[3066]: E0129 12:23:43.806987 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"2246e19b-2c9c-4027-a711-4fb712a9ea9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hhhb4" podUID="2246e19b-2c9c-4027-a711-4fb712a9ea9e" Jan 29 12:23:43.809110 containerd[1825]: time="2025-01-29T12:23:43.809088229Z" level=error msg="StopPodSandbox for \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\" failed" error="failed to destroy network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.809248 kubelet[3066]: E0129 12:23:43.809206 3066 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:23:43.809248 kubelet[3066]: E0129 12:23:43.809228 3066 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e"} Jan 29 12:23:43.809304 kubelet[3066]: E0129 12:23:43.809246 3066 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47e07d70-9f81-457c-b809-e3461b33a8b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:23:43.809304 kubelet[3066]: E0129 12:23:43.809261 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47e07d70-9f81-457c-b809-e3461b33a8b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bb5789b9c-zwg2m" podUID="47e07d70-9f81-457c-b809-e3461b33a8b3" Jan 29 12:23:43.810274 containerd[1825]: time="2025-01-29T12:23:43.810219690Z" level=error msg="StopPodSandbox for \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\" failed" error="failed to destroy network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:23:43.810344 kubelet[3066]: E0129 12:23:43.810329 3066 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:23:43.810381 
kubelet[3066]: E0129 12:23:43.810349 3066 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8"} Jan 29 12:23:43.810381 kubelet[3066]: E0129 12:23:43.810368 3066 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82e8b816-5e3d-47c1-8fd2-31578a33896d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:23:43.810450 kubelet[3066]: E0129 12:23:43.810382 3066 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82e8b816-5e3d-47c1-8fd2-31578a33896d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c5d654d57-6z9pz" podUID="82e8b816-5e3d-47c1-8fd2-31578a33896d" Jan 29 12:23:44.063442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843-shm.mount: Deactivated successfully. Jan 29 12:23:48.970503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519001226.mount: Deactivated successfully. 
Jan 29 12:23:48.992418 containerd[1825]: time="2025-01-29T12:23:48.992397152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:48.992596 containerd[1825]: time="2025-01-29T12:23:48.992543815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:23:48.993017 containerd[1825]: time="2025-01-29T12:23:48.993004000Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:48.993862 containerd[1825]: time="2025-01-29T12:23:48.993850823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:48.994300 containerd[1825]: time="2025-01-29T12:23:48.994255751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.207663288s" Jan 29 12:23:48.994300 containerd[1825]: time="2025-01-29T12:23:48.994277809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:23:48.997603 containerd[1825]: time="2025-01-29T12:23:48.997586294Z" level=info msg="CreateContainer within sandbox \"d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:23:49.002673 containerd[1825]: time="2025-01-29T12:23:49.002657673Z" level=info 
msg="CreateContainer within sandbox \"d179814cfb02b1e14880847e4eae5b907eed727100d765259195aa7090facd89\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f6fa18a30e55cbc3d6080c0a4fa704f41cccfc8ce433d7f887f37baa9190d659\"" Jan 29 12:23:49.002943 containerd[1825]: time="2025-01-29T12:23:49.002928749Z" level=info msg="StartContainer for \"f6fa18a30e55cbc3d6080c0a4fa704f41cccfc8ce433d7f887f37baa9190d659\"" Jan 29 12:23:49.022723 systemd[1]: Started cri-containerd-f6fa18a30e55cbc3d6080c0a4fa704f41cccfc8ce433d7f887f37baa9190d659.scope - libcontainer container f6fa18a30e55cbc3d6080c0a4fa704f41cccfc8ce433d7f887f37baa9190d659. Jan 29 12:23:49.037118 containerd[1825]: time="2025-01-29T12:23:49.037095384Z" level=info msg="StartContainer for \"f6fa18a30e55cbc3d6080c0a4fa704f41cccfc8ce433d7f887f37baa9190d659\" returns successfully" Jan 29 12:23:49.095782 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:23:49.095840 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 12:23:49.823397 kubelet[3066]: I0129 12:23:49.823344 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hzrm9" podStartSLOduration=1.257160678 podStartE2EDuration="15.823330233s" podCreationTimestamp="2025-01-29 12:23:34 +0000 UTC" firstStartedPulling="2025-01-29 12:23:34.428478079 +0000 UTC m=+9.746934506" lastFinishedPulling="2025-01-29 12:23:48.994647641 +0000 UTC m=+24.313104061" observedRunningTime="2025-01-29 12:23:49.823244587 +0000 UTC m=+25.141701008" watchObservedRunningTime="2025-01-29 12:23:49.823330233 +0000 UTC m=+25.141786654" Jan 29 12:23:50.602950 kubelet[3066]: I0129 12:23:50.602830 3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:23:51.367607 kernel: bpftool[4761]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:23:51.512906 systemd-networkd[1609]: vxlan.calico: Link UP Jan 29 12:23:51.512910 systemd-networkd[1609]: vxlan.calico: Gained carrier Jan 29 12:23:53.296728 systemd-networkd[1609]: vxlan.calico: Gained IPv6LL Jan 29 12:23:54.723427 containerd[1825]: time="2025-01-29T12:23:54.723334439Z" level=info msg="StopPodSandbox for \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\"" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.762 [INFO][4884] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.762 [INFO][4884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" iface="eth0" netns="/var/run/netns/cni-95e37f5b-2152-5999-a6d1-32e04ea62330" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.762 [INFO][4884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" iface="eth0" netns="/var/run/netns/cni-95e37f5b-2152-5999-a6d1-32e04ea62330" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.762 [INFO][4884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" iface="eth0" netns="/var/run/netns/cni-95e37f5b-2152-5999-a6d1-32e04ea62330" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.762 [INFO][4884] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.762 [INFO][4884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.772 [INFO][4899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.772 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.772 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.776 [WARNING][4899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.776 [INFO][4899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.777 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:54.779477 containerd[1825]: 2025-01-29 12:23:54.778 [INFO][4884] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:23:54.779768 containerd[1825]: time="2025-01-29T12:23:54.779548240Z" level=info msg="TearDown network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\" successfully" Jan 29 12:23:54.779768 containerd[1825]: time="2025-01-29T12:23:54.779567756Z" level=info msg="StopPodSandbox for \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\" returns successfully" Jan 29 12:23:54.780027 containerd[1825]: time="2025-01-29T12:23:54.780014901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-hvmzk,Uid:e1e7c23d-d84d-4ad1-b510-8659281971e0,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:23:54.781046 systemd[1]: run-netns-cni\x2d95e37f5b\x2d2152\x2d5999\x2da6d1\x2d32e04ea62330.mount: Deactivated successfully. 
Jan 29 12:23:54.843281 systemd-networkd[1609]: calia11af41b6e2: Link UP Jan 29 12:23:54.843404 systemd-networkd[1609]: calia11af41b6e2: Gained carrier Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.803 [INFO][4911] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0 calico-apiserver-c5d654d57- calico-apiserver e1e7c23d-d84d-4ad1-b510-8659281971e0 724 0 2025-01-29 12:23:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5d654d57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-eb3371d08a calico-apiserver-c5d654d57-hvmzk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia11af41b6e2 [] []}} ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.803 [INFO][4911] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.819 [INFO][4934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" HandleID="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.848618 
containerd[1825]: 2025-01-29 12:23:54.824 [INFO][4934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" HandleID="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000432160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-eb3371d08a", "pod":"calico-apiserver-c5d654d57-hvmzk", "timestamp":"2025-01-29 12:23:54.819252334 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eb3371d08a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.824 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.824 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.824 [INFO][4934] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eb3371d08a' Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.826 [INFO][4934] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.828 [INFO][4934] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.832 [INFO][4934] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.833 [INFO][4934] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.835 [INFO][4934] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.835 [INFO][4934] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.836 [INFO][4934] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4 Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.838 [INFO][4934] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.841 [INFO][4934] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.193/26] block=192.168.53.192/26 handle="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.841 [INFO][4934] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.193/26] handle="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.841 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:54.848618 containerd[1825]: 2025-01-29 12:23:54.841 [INFO][4934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.193/26] IPv6=[] ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" HandleID="k8s-pod-network.20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.849108 containerd[1825]: 2025-01-29 12:23:54.842 [INFO][4911] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1e7c23d-d84d-4ad1-b510-8659281971e0", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"", Pod:"calico-apiserver-c5d654d57-hvmzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia11af41b6e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:54.849108 containerd[1825]: 2025-01-29 12:23:54.842 [INFO][4911] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.193/32] ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.849108 containerd[1825]: 2025-01-29 12:23:54.842 [INFO][4911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia11af41b6e2 ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.849108 containerd[1825]: 2025-01-29 12:23:54.843 [INFO][4911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" 
WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.849108 containerd[1825]: 2025-01-29 12:23:54.843 [INFO][4911] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1e7c23d-d84d-4ad1-b510-8659281971e0", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4", Pod:"calico-apiserver-c5d654d57-hvmzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia11af41b6e2", MAC:"e2:ce:19:79:05:35", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:54.849108 containerd[1825]: 2025-01-29 12:23:54.847 [INFO][4911] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-hvmzk" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:23:54.857565 containerd[1825]: time="2025-01-29T12:23:54.857515569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:54.857565 containerd[1825]: time="2025-01-29T12:23:54.857550711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:54.857565 containerd[1825]: time="2025-01-29T12:23:54.857558259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:54.857683 containerd[1825]: time="2025-01-29T12:23:54.857600279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:54.882828 systemd[1]: Started cri-containerd-20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4.scope - libcontainer container 20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4. 
Jan 29 12:23:54.913671 containerd[1825]: time="2025-01-29T12:23:54.913644502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-hvmzk,Uid:e1e7c23d-d84d-4ad1-b510-8659281971e0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4\"" Jan 29 12:23:54.914560 containerd[1825]: time="2025-01-29T12:23:54.914534448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:23:55.723067 containerd[1825]: time="2025-01-29T12:23:55.722973511Z" level=info msg="StopPodSandbox for \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\"" Jan 29 12:23:55.723382 containerd[1825]: time="2025-01-29T12:23:55.723023167Z" level=info msg="StopPodSandbox for \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\"" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5037] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" iface="eth0" netns="/var/run/netns/cni-5780976d-673d-2d1b-35d8-b47275b4d27a" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" iface="eth0" netns="/var/run/netns/cni-5780976d-673d-2d1b-35d8-b47275b4d27a" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" iface="eth0" netns="/var/run/netns/cni-5780976d-673d-2d1b-35d8-b47275b4d27a" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5037] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.760 [INFO][5068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.760 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.760 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.763 [WARNING][5068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.763 [INFO][5068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.765 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:55.766421 containerd[1825]: 2025-01-29 12:23:55.765 [INFO][5037] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:23:55.767180 containerd[1825]: time="2025-01-29T12:23:55.766496102Z" level=info msg="TearDown network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\" successfully" Jan 29 12:23:55.767180 containerd[1825]: time="2025-01-29T12:23:55.766512886Z" level=info msg="StopPodSandbox for \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\" returns successfully" Jan 29 12:23:55.767219 containerd[1825]: time="2025-01-29T12:23:55.767183630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhb4,Uid:2246e19b-2c9c-4027-a711-4fb712a9ea9e,Namespace:kube-system,Attempt:1,}" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5038] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" iface="eth0" netns="/var/run/netns/cni-3c1172a3-1a8d-30f8-6022-b2f3db768bc1" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5038] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" iface="eth0" netns="/var/run/netns/cni-3c1172a3-1a8d-30f8-6022-b2f3db768bc1" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5038] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" iface="eth0" netns="/var/run/netns/cni-3c1172a3-1a8d-30f8-6022-b2f3db768bc1" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.749 [INFO][5038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.750 [INFO][5038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.760 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.760 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.765 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.769 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.769 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.770 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:55.771512 containerd[1825]: 2025-01-29 12:23:55.770 [INFO][5038] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:23:55.771811 containerd[1825]: time="2025-01-29T12:23:55.771599380Z" level=info msg="TearDown network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\" successfully" Jan 29 12:23:55.771811 containerd[1825]: time="2025-01-29T12:23:55.771615814Z" level=info msg="StopPodSandbox for \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\" returns successfully" Jan 29 12:23:55.772016 containerd[1825]: time="2025-01-29T12:23:55.771981895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whf72,Uid:262d59b9-71cb-45de-b97d-563bbb9e2a62,Namespace:calico-system,Attempt:1,}" Jan 29 12:23:55.783184 systemd[1]: run-netns-cni\x2d3c1172a3\x2d1a8d\x2d30f8\x2d6022\x2db2f3db768bc1.mount: Deactivated successfully. Jan 29 12:23:55.783272 systemd[1]: run-netns-cni\x2d5780976d\x2d673d\x2d2d1b\x2d35d8\x2db47275b4d27a.mount: Deactivated successfully. 
Jan 29 12:23:55.825760 systemd-networkd[1609]: caliadc0e5b0eae: Link UP Jan 29 12:23:55.825873 systemd-networkd[1609]: caliadc0e5b0eae: Gained carrier Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.790 [INFO][5099] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0 coredns-6f6b679f8f- kube-system 2246e19b-2c9c-4027-a711-4fb712a9ea9e 735 0 2025-01-29 12:23:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-eb3371d08a coredns-6f6b679f8f-hhhb4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliadc0e5b0eae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.790 [INFO][5099] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.806 [INFO][5146] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" HandleID="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.810 [INFO][5146] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" HandleID="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-eb3371d08a", "pod":"coredns-6f6b679f8f-hhhb4", "timestamp":"2025-01-29 12:23:55.806162267 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eb3371d08a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.810 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.810 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.810 [INFO][5146] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eb3371d08a' Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.811 [INFO][5146] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.813 [INFO][5146] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.815 [INFO][5146] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.816 [INFO][5146] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.818 [INFO][5146] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.818 [INFO][5146] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.819 [INFO][5146] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85 Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.821 [INFO][5146] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.824 [INFO][5146] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.194/26] block=192.168.53.192/26 handle="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.824 [INFO][5146] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.194/26] handle="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.824 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:55.830596 containerd[1825]: 2025-01-29 12:23:55.824 [INFO][5146] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.194/26] IPv6=[] ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" HandleID="k8s-pod-network.2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.831015 containerd[1825]: 2025-01-29 12:23:55.824 [INFO][5099] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2246e19b-2c9c-4027-a711-4fb712a9ea9e", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"", Pod:"coredns-6f6b679f8f-hhhb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadc0e5b0eae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:55.831015 containerd[1825]: 2025-01-29 12:23:55.825 [INFO][5099] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.194/32] ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.831015 containerd[1825]: 2025-01-29 12:23:55.825 [INFO][5099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadc0e5b0eae ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.831015 containerd[1825]: 2025-01-29 12:23:55.825 [INFO][5099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.831015 containerd[1825]: 2025-01-29 12:23:55.825 [INFO][5099] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2246e19b-2c9c-4027-a711-4fb712a9ea9e", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85", Pod:"coredns-6f6b679f8f-hhhb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadc0e5b0eae", MAC:"ee:38:ea:34:21:b0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:55.831015 containerd[1825]: 2025-01-29 12:23:55.829 [INFO][5099] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85" Namespace="kube-system" Pod="coredns-6f6b679f8f-hhhb4" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:23:55.839967 containerd[1825]: time="2025-01-29T12:23:55.839733188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:55.839967 containerd[1825]: time="2025-01-29T12:23:55.839955911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:55.839967 containerd[1825]: time="2025-01-29T12:23:55.839963732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:55.840101 containerd[1825]: time="2025-01-29T12:23:55.840003251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:55.866827 systemd[1]: Started cri-containerd-2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85.scope - libcontainer container 2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85. 
Jan 29 12:23:55.888521 containerd[1825]: time="2025-01-29T12:23:55.888498778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hhhb4,Uid:2246e19b-2c9c-4027-a711-4fb712a9ea9e,Namespace:kube-system,Attempt:1,} returns sandbox id \"2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85\"" Jan 29 12:23:55.889635 containerd[1825]: time="2025-01-29T12:23:55.889622401Z" level=info msg="CreateContainer within sandbox \"2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:23:55.893802 containerd[1825]: time="2025-01-29T12:23:55.893758311Z" level=info msg="CreateContainer within sandbox \"2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"297a5b9558ee65557a876aa5f99fdb2f2fc4343a650acad68b67ecafd3ca1b4a\"" Jan 29 12:23:55.894059 containerd[1825]: time="2025-01-29T12:23:55.893999996Z" level=info msg="StartContainer for \"297a5b9558ee65557a876aa5f99fdb2f2fc4343a650acad68b67ecafd3ca1b4a\"" Jan 29 12:23:55.903867 systemd[1]: Started cri-containerd-297a5b9558ee65557a876aa5f99fdb2f2fc4343a650acad68b67ecafd3ca1b4a.scope - libcontainer container 297a5b9558ee65557a876aa5f99fdb2f2fc4343a650acad68b67ecafd3ca1b4a. 
Jan 29 12:23:55.915060 containerd[1825]: time="2025-01-29T12:23:55.915035565Z" level=info msg="StartContainer for \"297a5b9558ee65557a876aa5f99fdb2f2fc4343a650acad68b67ecafd3ca1b4a\" returns successfully" Jan 29 12:23:55.929496 systemd-networkd[1609]: cali612a3150fe1: Link UP Jan 29 12:23:55.929625 systemd-networkd[1609]: cali612a3150fe1: Gained carrier Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.792 [INFO][5109] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0 csi-node-driver- calico-system 262d59b9-71cb-45de-b97d-563bbb9e2a62 736 0 2025-01-29 12:23:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-eb3371d08a csi-node-driver-whf72 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali612a3150fe1 [] []}} ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.792 [INFO][5109] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.807 [INFO][5152] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" 
HandleID="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.811 [INFO][5152] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" HandleID="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-eb3371d08a", "pod":"csi-node-driver-whf72", "timestamp":"2025-01-29 12:23:55.80732741 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eb3371d08a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.811 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.824 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.824 [INFO][5152] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eb3371d08a' Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.912 [INFO][5152] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.915 [INFO][5152] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.918 [INFO][5152] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.919 [INFO][5152] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.920 [INFO][5152] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.921 [INFO][5152] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.921 [INFO][5152] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3 Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.924 [INFO][5152] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.927 [INFO][5152] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.195/26] block=192.168.53.192/26 handle="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.927 [INFO][5152] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.195/26] handle="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.927 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:55.935303 containerd[1825]: 2025-01-29 12:23:55.927 [INFO][5152] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.195/26] IPv6=[] ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" HandleID="k8s-pod-network.8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.935731 containerd[1825]: 2025-01-29 12:23:55.928 [INFO][5109] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262d59b9-71cb-45de-b97d-563bbb9e2a62", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"", Pod:"csi-node-driver-whf72", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali612a3150fe1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:55.935731 containerd[1825]: 2025-01-29 12:23:55.928 [INFO][5109] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.195/32] ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.935731 containerd[1825]: 2025-01-29 12:23:55.928 [INFO][5109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali612a3150fe1 ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.935731 containerd[1825]: 2025-01-29 12:23:55.929 [INFO][5109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.935731 containerd[1825]: 2025-01-29 12:23:55.929 
[INFO][5109] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262d59b9-71cb-45de-b97d-563bbb9e2a62", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3", Pod:"csi-node-driver-whf72", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali612a3150fe1", MAC:"fa:cb:1a:1d:9e:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:55.935731 containerd[1825]: 2025-01-29 12:23:55.934 [INFO][5109] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3" Namespace="calico-system" Pod="csi-node-driver-whf72" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:23:55.944363 containerd[1825]: time="2025-01-29T12:23:55.944275532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:55.944490 containerd[1825]: time="2025-01-29T12:23:55.944474403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:55.944511 containerd[1825]: time="2025-01-29T12:23:55.944486577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:55.944554 containerd[1825]: time="2025-01-29T12:23:55.944530734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:55.963871 systemd[1]: Started cri-containerd-8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3.scope - libcontainer container 8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3. 
Jan 29 12:23:55.975472 containerd[1825]: time="2025-01-29T12:23:55.975400508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whf72,Uid:262d59b9-71cb-45de-b97d-563bbb9e2a62,Namespace:calico-system,Attempt:1,} returns sandbox id \"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3\"" Jan 29 12:23:56.623664 systemd-networkd[1609]: calia11af41b6e2: Gained IPv6LL Jan 29 12:23:56.725415 containerd[1825]: time="2025-01-29T12:23:56.725371961Z" level=info msg="StopPodSandbox for \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\"" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.749 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.749 [INFO][5360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" iface="eth0" netns="/var/run/netns/cni-ba89a976-abe5-c6e4-c8ef-fbd43afe468f" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.749 [INFO][5360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" iface="eth0" netns="/var/run/netns/cni-ba89a976-abe5-c6e4-c8ef-fbd43afe468f" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.749 [INFO][5360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" iface="eth0" netns="/var/run/netns/cni-ba89a976-abe5-c6e4-c8ef-fbd43afe468f" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.749 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.749 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.760 [INFO][5376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.760 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.760 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.763 [WARNING][5376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.764 [INFO][5376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.764 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:56.766138 containerd[1825]: 2025-01-29 12:23:56.765 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:23:56.766439 containerd[1825]: time="2025-01-29T12:23:56.766182998Z" level=info msg="TearDown network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\" successfully" Jan 29 12:23:56.766439 containerd[1825]: time="2025-01-29T12:23:56.766199267Z" level=info msg="StopPodSandbox for \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\" returns successfully" Jan 29 12:23:56.766639 containerd[1825]: time="2025-01-29T12:23:56.766565739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bb5789b9c-zwg2m,Uid:47e07d70-9f81-457c-b809-e3461b33a8b3,Namespace:calico-system,Attempt:1,}" Jan 29 12:23:56.783416 systemd[1]: run-netns-cni\x2dba89a976\x2dabe5\x2dc6e4\x2dc8ef\x2dfbd43afe468f.mount: Deactivated successfully. 
Jan 29 12:23:56.829098 kubelet[3066]: I0129 12:23:56.829046 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hhhb4" podStartSLOduration=27.829030971 podStartE2EDuration="27.829030971s" podCreationTimestamp="2025-01-29 12:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:23:56.82897922 +0000 UTC m=+32.147435643" watchObservedRunningTime="2025-01-29 12:23:56.829030971 +0000 UTC m=+32.147487392" Jan 29 12:23:56.830621 systemd-networkd[1609]: calia3a220ed64f: Link UP Jan 29 12:23:56.830758 systemd-networkd[1609]: calia3a220ed64f: Gained carrier Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.790 [INFO][5392] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0 calico-kube-controllers-bb5789b9c- calico-system 47e07d70-9f81-457c-b809-e3461b33a8b3 749 0 2025-01-29 12:23:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bb5789b9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-eb3371d08a calico-kube-controllers-bb5789b9c-zwg2m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia3a220ed64f [] []}} ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.790 [INFO][5392] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.805 [INFO][5417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" HandleID="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.811 [INFO][5417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" HandleID="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006a7c00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-eb3371d08a", "pod":"calico-kube-controllers-bb5789b9c-zwg2m", "timestamp":"2025-01-29 12:23:56.805349273 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eb3371d08a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.811 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.811 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.811 [INFO][5417] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eb3371d08a' Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.813 [INFO][5417] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.815 [INFO][5417] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.818 [INFO][5417] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.819 [INFO][5417] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.821 [INFO][5417] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.821 [INFO][5417] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.822 [INFO][5417] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0 Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.824 [INFO][5417] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.828 [INFO][5417] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.196/26] block=192.168.53.192/26 handle="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.828 [INFO][5417] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.196/26] handle="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.828 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:56.835810 containerd[1825]: 2025-01-29 12:23:56.828 [INFO][5417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.196/26] IPv6=[] ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" HandleID="k8s-pod-network.f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.836233 containerd[1825]: 2025-01-29 12:23:56.829 [INFO][5392] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0", GenerateName:"calico-kube-controllers-bb5789b9c-", Namespace:"calico-system", SelfLink:"", UID:"47e07d70-9f81-457c-b809-e3461b33a8b3", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bb5789b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"", Pod:"calico-kube-controllers-bb5789b9c-zwg2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3a220ed64f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:56.836233 containerd[1825]: 2025-01-29 12:23:56.829 [INFO][5392] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.196/32] ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.836233 containerd[1825]: 2025-01-29 12:23:56.829 [INFO][5392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3a220ed64f ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.836233 containerd[1825]: 2025-01-29 12:23:56.830 [INFO][5392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" 
Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.836233 containerd[1825]: 2025-01-29 12:23:56.830 [INFO][5392] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0", GenerateName:"calico-kube-controllers-bb5789b9c-", Namespace:"calico-system", SelfLink:"", UID:"47e07d70-9f81-457c-b809-e3461b33a8b3", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bb5789b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0", Pod:"calico-kube-controllers-bb5789b9c-zwg2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3a220ed64f", MAC:"fa:6d:b9:bc:a6:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:56.836233 containerd[1825]: 2025-01-29 12:23:56.834 [INFO][5392] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0" Namespace="calico-system" Pod="calico-kube-controllers-bb5789b9c-zwg2m" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:23:56.949797 containerd[1825]: time="2025-01-29T12:23:56.949695392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:56.949797 containerd[1825]: time="2025-01-29T12:23:56.949720821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:56.949797 containerd[1825]: time="2025-01-29T12:23:56.949727594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:56.949797 containerd[1825]: time="2025-01-29T12:23:56.949779589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:56.973691 systemd[1]: Started cri-containerd-f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0.scope - libcontainer container f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0. 
Jan 29 12:23:56.999614 containerd[1825]: time="2025-01-29T12:23:56.999553410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bb5789b9c-zwg2m,Uid:47e07d70-9f81-457c-b809-e3461b33a8b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0\"" Jan 29 12:23:57.007596 systemd-networkd[1609]: cali612a3150fe1: Gained IPv6LL Jan 29 12:23:57.199651 systemd-networkd[1609]: caliadc0e5b0eae: Gained IPv6LL Jan 29 12:23:57.215250 containerd[1825]: time="2025-01-29T12:23:57.215202645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:57.215449 containerd[1825]: time="2025-01-29T12:23:57.215431154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:23:57.215839 containerd[1825]: time="2025-01-29T12:23:57.215799137Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:57.216774 containerd[1825]: time="2025-01-29T12:23:57.216733857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:57.217527 containerd[1825]: time="2025-01-29T12:23:57.217490741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.302925231s" Jan 29 12:23:57.217527 containerd[1825]: time="2025-01-29T12:23:57.217506099Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:23:57.217956 containerd[1825]: time="2025-01-29T12:23:57.217918021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:23:57.218458 containerd[1825]: time="2025-01-29T12:23:57.218421003Z" level=info msg="CreateContainer within sandbox \"20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:23:57.222573 containerd[1825]: time="2025-01-29T12:23:57.222559024Z" level=info msg="CreateContainer within sandbox \"20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d3acb1b8003b92dc5b86d93a038c06986461006d582b8fbb6c1a81e72112db9\"" Jan 29 12:23:57.222844 containerd[1825]: time="2025-01-29T12:23:57.222803853Z" level=info msg="StartContainer for \"6d3acb1b8003b92dc5b86d93a038c06986461006d582b8fbb6c1a81e72112db9\"" Jan 29 12:23:57.251727 systemd[1]: Started cri-containerd-6d3acb1b8003b92dc5b86d93a038c06986461006d582b8fbb6c1a81e72112db9.scope - libcontainer container 6d3acb1b8003b92dc5b86d93a038c06986461006d582b8fbb6c1a81e72112db9. 
Jan 29 12:23:57.290637 containerd[1825]: time="2025-01-29T12:23:57.290610277Z" level=info msg="StartContainer for \"6d3acb1b8003b92dc5b86d93a038c06986461006d582b8fbb6c1a81e72112db9\" returns successfully" Jan 29 12:23:57.843979 kubelet[3066]: I0129 12:23:57.843946 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5d654d57-hvmzk" podStartSLOduration=21.540456802 podStartE2EDuration="23.84393259s" podCreationTimestamp="2025-01-29 12:23:34 +0000 UTC" firstStartedPulling="2025-01-29 12:23:54.91438228 +0000 UTC m=+30.232838708" lastFinishedPulling="2025-01-29 12:23:57.217858076 +0000 UTC m=+32.536314496" observedRunningTime="2025-01-29 12:23:57.843813444 +0000 UTC m=+33.162269867" watchObservedRunningTime="2025-01-29 12:23:57.84393259 +0000 UTC m=+33.162389009" Jan 29 12:23:58.223862 systemd-networkd[1609]: calia3a220ed64f: Gained IPv6LL Jan 29 12:23:58.722945 containerd[1825]: time="2025-01-29T12:23:58.722828194Z" level=info msg="StopPodSandbox for \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\"" Jan 29 12:23:58.724154 containerd[1825]: time="2025-01-29T12:23:58.722844062Z" level=info msg="StopPodSandbox for \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\"" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5583] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" iface="eth0" netns="/var/run/netns/cni-397b0a50-805c-a5d2-ad05-061ed96bbf4c" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5583] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" iface="eth0" netns="/var/run/netns/cni-397b0a50-805c-a5d2-ad05-061ed96bbf4c" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5583] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" iface="eth0" netns="/var/run/netns/cni-397b0a50-805c-a5d2-ad05-061ed96bbf4c" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.761 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.761 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.761 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.765 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.765 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.766 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:58.767338 containerd[1825]: 2025-01-29 12:23:58.766 [INFO][5583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:23:58.767678 containerd[1825]: time="2025-01-29T12:23:58.767387919Z" level=info msg="TearDown network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\" successfully" Jan 29 12:23:58.767678 containerd[1825]: time="2025-01-29T12:23:58.767414102Z" level=info msg="StopPodSandbox for \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\" returns successfully" Jan 29 12:23:58.767873 containerd[1825]: time="2025-01-29T12:23:58.767829950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lq4x8,Uid:be14fd9c-84ab-4c38-a4a3-407c709a8f57,Namespace:kube-system,Attempt:1,}" Jan 29 12:23:58.769197 systemd[1]: run-netns-cni\x2d397b0a50\x2d805c\x2da5d2\x2dad05\x2d061ed96bbf4c.mount: Deactivated successfully. 
Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5582] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" iface="eth0" netns="/var/run/netns/cni-346a0390-7132-ac57-041a-8a4dcd2db046" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" iface="eth0" netns="/var/run/netns/cni-346a0390-7132-ac57-041a-8a4dcd2db046" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" iface="eth0" netns="/var/run/netns/cni-346a0390-7132-ac57-041a-8a4dcd2db046" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5582] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.749 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.761 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.761 
[INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.766 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.769 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.769 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.770 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:58.771747 containerd[1825]: 2025-01-29 12:23:58.770 [INFO][5582] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:23:58.772097 containerd[1825]: time="2025-01-29T12:23:58.771831896Z" level=info msg="TearDown network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\" successfully" Jan 29 12:23:58.772097 containerd[1825]: time="2025-01-29T12:23:58.771865179Z" level=info msg="StopPodSandbox for \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\" returns successfully" Jan 29 12:23:58.772308 containerd[1825]: time="2025-01-29T12:23:58.772275038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-6z9pz,Uid:82e8b816-5e3d-47c1-8fd2-31578a33896d,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:23:58.775756 systemd[1]: run-netns-cni\x2d346a0390\x2d7132\x2dac57\x2d041a\x2d8a4dcd2db046.mount: Deactivated successfully. Jan 29 12:23:58.826551 systemd-networkd[1609]: cali71b310edae6: Link UP Jan 29 12:23:58.826656 systemd-networkd[1609]: cali71b310edae6: Gained carrier Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.791 [INFO][5648] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0 coredns-6f6b679f8f- kube-system be14fd9c-84ab-4c38-a4a3-407c709a8f57 774 0 2025-01-29 12:23:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-eb3371d08a coredns-6f6b679f8f-lq4x8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71b310edae6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-" Jan 29 12:23:58.832403 
containerd[1825]: 2025-01-29 12:23:58.791 [INFO][5648] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.807 [INFO][5692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" HandleID="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.812 [INFO][5692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" HandleID="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364540), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-eb3371d08a", "pod":"coredns-6f6b679f8f-lq4x8", "timestamp":"2025-01-29 12:23:58.80745367 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eb3371d08a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.812 [INFO][5692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.812 [INFO][5692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.812 [INFO][5692] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eb3371d08a' Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.813 [INFO][5692] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.814 [INFO][5692] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.816 [INFO][5692] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.817 [INFO][5692] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.818 [INFO][5692] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.818 [INFO][5692] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.819 [INFO][5692] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21 Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.821 [INFO][5692] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.824 [INFO][5692] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.197/26] block=192.168.53.192/26 handle="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.824 [INFO][5692] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.197/26] handle="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.824 [INFO][5692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:58.832403 containerd[1825]: 2025-01-29 12:23:58.824 [INFO][5692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.197/26] IPv6=[] ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" HandleID="k8s-pod-network.61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.832845 containerd[1825]: 2025-01-29 12:23:58.825 [INFO][5648] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be14fd9c-84ab-4c38-a4a3-407c709a8f57", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"", Pod:"coredns-6f6b679f8f-lq4x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71b310edae6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:58.832845 containerd[1825]: 2025-01-29 12:23:58.825 [INFO][5648] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.197/32] ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.832845 containerd[1825]: 2025-01-29 12:23:58.825 [INFO][5648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71b310edae6 ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.832845 containerd[1825]: 2025-01-29 12:23:58.826 [INFO][5648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.832845 containerd[1825]: 2025-01-29 12:23:58.826 [INFO][5648] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be14fd9c-84ab-4c38-a4a3-407c709a8f57", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21", Pod:"coredns-6f6b679f8f-lq4x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71b310edae6", MAC:"6e:68:18:26:68:3b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:58.832845 containerd[1825]: 2025-01-29 12:23:58.831 [INFO][5648] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21" Namespace="kube-system" Pod="coredns-6f6b679f8f-lq4x8" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:23:58.838902 kubelet[3066]: I0129 12:23:58.838887 3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:23:58.857273 containerd[1825]: time="2025-01-29T12:23:58.857234834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:58.857273 containerd[1825]: time="2025-01-29T12:23:58.857266316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:58.857372 containerd[1825]: time="2025-01-29T12:23:58.857277886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:58.857372 containerd[1825]: time="2025-01-29T12:23:58.857333640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:58.864972 containerd[1825]: time="2025-01-29T12:23:58.864948167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:58.865199 containerd[1825]: time="2025-01-29T12:23:58.865175422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:23:58.865493 containerd[1825]: time="2025-01-29T12:23:58.865480657Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:58.866572 containerd[1825]: time="2025-01-29T12:23:58.866558167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:23:58.866963 containerd[1825]: time="2025-01-29T12:23:58.866948722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.649015969s" Jan 29 12:23:58.867003 containerd[1825]: time="2025-01-29T12:23:58.866964611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:23:58.867362 containerd[1825]: time="2025-01-29T12:23:58.867351622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:23:58.867881 containerd[1825]: time="2025-01-29T12:23:58.867864656Z" level=info msg="CreateContainer within sandbox 
\"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:23:58.872945 containerd[1825]: time="2025-01-29T12:23:58.872903069Z" level=info msg="CreateContainer within sandbox \"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4c56ce295c1001f22ca5a14c85f57fc7b61bfefba4af9fe7d4940b0ba9e930db\"" Jan 29 12:23:58.873176 containerd[1825]: time="2025-01-29T12:23:58.873140493Z" level=info msg="StartContainer for \"4c56ce295c1001f22ca5a14c85f57fc7b61bfefba4af9fe7d4940b0ba9e930db\"" Jan 29 12:23:58.879769 systemd[1]: Started cri-containerd-61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21.scope - libcontainer container 61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21. Jan 29 12:23:58.885713 systemd[1]: Started cri-containerd-4c56ce295c1001f22ca5a14c85f57fc7b61bfefba4af9fe7d4940b0ba9e930db.scope - libcontainer container 4c56ce295c1001f22ca5a14c85f57fc7b61bfefba4af9fe7d4940b0ba9e930db. 
Jan 29 12:23:58.898775 containerd[1825]: time="2025-01-29T12:23:58.898753776Z" level=info msg="StartContainer for \"4c56ce295c1001f22ca5a14c85f57fc7b61bfefba4af9fe7d4940b0ba9e930db\" returns successfully" Jan 29 12:23:58.903650 containerd[1825]: time="2025-01-29T12:23:58.903628206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lq4x8,Uid:be14fd9c-84ab-4c38-a4a3-407c709a8f57,Namespace:kube-system,Attempt:1,} returns sandbox id \"61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21\"" Jan 29 12:23:58.904597 containerd[1825]: time="2025-01-29T12:23:58.904585817Z" level=info msg="CreateContainer within sandbox \"61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:23:58.908631 containerd[1825]: time="2025-01-29T12:23:58.908619212Z" level=info msg="CreateContainer within sandbox \"61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9cde577407fea358a3bc8d58b2e3502a27fbbbb437798d2e275b8f468e897ad\"" Jan 29 12:23:58.908774 containerd[1825]: time="2025-01-29T12:23:58.908763123Z" level=info msg="StartContainer for \"f9cde577407fea358a3bc8d58b2e3502a27fbbbb437798d2e275b8f468e897ad\"" Jan 29 12:23:58.927136 systemd-networkd[1609]: caliccd8eed4b22: Link UP Jan 29 12:23:58.927254 systemd-networkd[1609]: caliccd8eed4b22: Gained carrier Jan 29 12:23:58.927658 systemd[1]: Started cri-containerd-f9cde577407fea358a3bc8d58b2e3502a27fbbbb437798d2e275b8f468e897ad.scope - libcontainer container f9cde577407fea358a3bc8d58b2e3502a27fbbbb437798d2e275b8f468e897ad. 
Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.794 [INFO][5659] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0 calico-apiserver-c5d654d57- calico-apiserver 82e8b816-5e3d-47c1-8fd2-31578a33896d 773 0 2025-01-29 12:23:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c5d654d57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-eb3371d08a calico-apiserver-c5d654d57-6z9pz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccd8eed4b22 [] []}} ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.794 [INFO][5659] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.809 [INFO][5697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" HandleID="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.813 [INFO][5697] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" HandleID="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bf9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-eb3371d08a", "pod":"calico-apiserver-c5d654d57-6z9pz", "timestamp":"2025-01-29 12:23:58.80946725 +0000 UTC"}, Hostname:"ci-4081.3.0-a-eb3371d08a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.813 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.824 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.824 [INFO][5697] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-eb3371d08a' Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.913 [INFO][5697] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.915 [INFO][5697] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.918 [INFO][5697] ipam/ipam.go 489: Trying affinity for 192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.919 [INFO][5697] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.920 [INFO][5697] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.920 [INFO][5697] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.921 [INFO][5697] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2 Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.923 [INFO][5697] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.925 [INFO][5697] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.53.198/26] block=192.168.53.192/26 handle="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.925 [INFO][5697] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.198/26] handle="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" host="ci-4081.3.0-a-eb3371d08a" Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.925 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:23:58.932517 containerd[1825]: 2025-01-29 12:23:58.925 [INFO][5697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.198/26] IPv6=[] ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" HandleID="k8s-pod-network.57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.932941 containerd[1825]: 2025-01-29 12:23:58.926 [INFO][5659] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"82e8b816-5e3d-47c1-8fd2-31578a33896d", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"", Pod:"calico-apiserver-c5d654d57-6z9pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccd8eed4b22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:58.932941 containerd[1825]: 2025-01-29 12:23:58.926 [INFO][5659] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.198/32] ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.932941 containerd[1825]: 2025-01-29 12:23:58.926 [INFO][5659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccd8eed4b22 ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.932941 containerd[1825]: 2025-01-29 12:23:58.927 [INFO][5659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" 
WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.932941 containerd[1825]: 2025-01-29 12:23:58.927 [INFO][5659] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"82e8b816-5e3d-47c1-8fd2-31578a33896d", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2", Pod:"calico-apiserver-c5d654d57-6z9pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccd8eed4b22", MAC:"b2:df:96:00:dd:46", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:23:58.932941 containerd[1825]: 2025-01-29 12:23:58.931 [INFO][5659] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2" Namespace="calico-apiserver" Pod="calico-apiserver-c5d654d57-6z9pz" WorkloadEndpoint="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:23:58.939534 containerd[1825]: time="2025-01-29T12:23:58.939504303Z" level=info msg="StartContainer for \"f9cde577407fea358a3bc8d58b2e3502a27fbbbb437798d2e275b8f468e897ad\" returns successfully" Jan 29 12:23:58.941976 containerd[1825]: time="2025-01-29T12:23:58.941927516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:23:58.942180 containerd[1825]: time="2025-01-29T12:23:58.942164028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:23:58.942220 containerd[1825]: time="2025-01-29T12:23:58.942180621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:58.942289 containerd[1825]: time="2025-01-29T12:23:58.942274993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:23:58.962966 systemd[1]: Started cri-containerd-57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2.scope - libcontainer container 57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2. 
Jan 29 12:23:58.985125 containerd[1825]: time="2025-01-29T12:23:58.985067086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c5d654d57-6z9pz,Uid:82e8b816-5e3d-47c1-8fd2-31578a33896d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2\"" Jan 29 12:23:58.986271 containerd[1825]: time="2025-01-29T12:23:58.986257540Z" level=info msg="CreateContainer within sandbox \"57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:23:58.990491 containerd[1825]: time="2025-01-29T12:23:58.990476822Z" level=info msg="CreateContainer within sandbox \"57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4b39790210fd498900c9933b43a81609524adde81a3d3f0130165db1a596278e\"" Jan 29 12:23:58.990755 containerd[1825]: time="2025-01-29T12:23:58.990705996Z" level=info msg="StartContainer for \"4b39790210fd498900c9933b43a81609524adde81a3d3f0130165db1a596278e\"" Jan 29 12:23:59.010905 systemd[1]: Started cri-containerd-4b39790210fd498900c9933b43a81609524adde81a3d3f0130165db1a596278e.scope - libcontainer container 4b39790210fd498900c9933b43a81609524adde81a3d3f0130165db1a596278e. 
Jan 29 12:23:59.051187 containerd[1825]: time="2025-01-29T12:23:59.051127435Z" level=info msg="StartContainer for \"4b39790210fd498900c9933b43a81609524adde81a3d3f0130165db1a596278e\" returns successfully" Jan 29 12:23:59.868711 kubelet[3066]: I0129 12:23:59.868569 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c5d654d57-6z9pz" podStartSLOduration=25.868491984 podStartE2EDuration="25.868491984s" podCreationTimestamp="2025-01-29 12:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:23:59.86791689 +0000 UTC m=+35.186373381" watchObservedRunningTime="2025-01-29 12:23:59.868491984 +0000 UTC m=+35.186948467" Jan 29 12:23:59.889468 kubelet[3066]: I0129 12:23:59.889424 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lq4x8" podStartSLOduration=30.889411404 podStartE2EDuration="30.889411404s" podCreationTimestamp="2025-01-29 12:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:23:59.889027248 +0000 UTC m=+35.207483689" watchObservedRunningTime="2025-01-29 12:23:59.889411404 +0000 UTC m=+35.207867827" Jan 29 12:24:00.018281 systemd-networkd[1609]: caliccd8eed4b22: Gained IPv6LL Jan 29 12:24:00.451653 containerd[1825]: time="2025-01-29T12:24:00.451626905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:24:00.451915 containerd[1825]: time="2025-01-29T12:24:00.451819296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:24:00.452177 containerd[1825]: time="2025-01-29T12:24:00.452164439Z" level=info msg="ImageCreate event 
name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:24:00.453099 containerd[1825]: time="2025-01-29T12:24:00.453085864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:24:00.453520 containerd[1825]: time="2025-01-29T12:24:00.453506552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.586141116s" Jan 29 12:24:00.453568 containerd[1825]: time="2025-01-29T12:24:00.453524641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:24:00.454011 containerd[1825]: time="2025-01-29T12:24:00.454000991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:24:00.456914 containerd[1825]: time="2025-01-29T12:24:00.456898553Z" level=info msg="CreateContainer within sandbox \"f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:24:00.460960 containerd[1825]: time="2025-01-29T12:24:00.460942118Z" level=info msg="CreateContainer within sandbox \"f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"55191e98de07752bc14ca6f4b09923c7d415b28857df4589d477624eefdac7c3\"" Jan 29 12:24:00.461141 containerd[1825]: 
time="2025-01-29T12:24:00.461126149Z" level=info msg="StartContainer for \"55191e98de07752bc14ca6f4b09923c7d415b28857df4589d477624eefdac7c3\"" Jan 29 12:24:00.490802 systemd[1]: Started cri-containerd-55191e98de07752bc14ca6f4b09923c7d415b28857df4589d477624eefdac7c3.scope - libcontainer container 55191e98de07752bc14ca6f4b09923c7d415b28857df4589d477624eefdac7c3. Jan 29 12:24:00.516353 containerd[1825]: time="2025-01-29T12:24:00.516328122Z" level=info msg="StartContainer for \"55191e98de07752bc14ca6f4b09923c7d415b28857df4589d477624eefdac7c3\" returns successfully" Jan 29 12:24:00.655871 systemd-networkd[1609]: cali71b310edae6: Gained IPv6LL Jan 29 12:24:00.855853 kubelet[3066]: I0129 12:24:00.855836 3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:24:00.864575 kubelet[3066]: I0129 12:24:00.864518 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bb5789b9c-zwg2m" podStartSLOduration=23.410657529 podStartE2EDuration="26.86450266s" podCreationTimestamp="2025-01-29 12:23:34 +0000 UTC" firstStartedPulling="2025-01-29 12:23:57.000103277 +0000 UTC m=+32.318559709" lastFinishedPulling="2025-01-29 12:24:00.453948414 +0000 UTC m=+35.772404840" observedRunningTime="2025-01-29 12:24:00.86384953 +0000 UTC m=+36.182305957" watchObservedRunningTime="2025-01-29 12:24:00.86450266 +0000 UTC m=+36.182959085" Jan 29 12:24:02.009924 containerd[1825]: time="2025-01-29T12:24:02.009867026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:24:02.010139 containerd[1825]: time="2025-01-29T12:24:02.010060527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:24:02.010450 containerd[1825]: time="2025-01-29T12:24:02.010439142Z" level=info msg="ImageCreate event 
name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:24:02.011448 containerd[1825]: time="2025-01-29T12:24:02.011407438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:24:02.011837 containerd[1825]: time="2025-01-29T12:24:02.011795831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.557780939s" Jan 29 12:24:02.011837 containerd[1825]: time="2025-01-29T12:24:02.011811738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:24:02.012785 containerd[1825]: time="2025-01-29T12:24:02.012769394Z" level=info msg="CreateContainer within sandbox \"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:24:02.018784 containerd[1825]: time="2025-01-29T12:24:02.018737009Z" level=info msg="CreateContainer within sandbox \"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4a1acab2df5d2773669c3cf032903f0389027f6d379ef993b6d588e6be3ca35a\"" Jan 29 12:24:02.019027 containerd[1825]: time="2025-01-29T12:24:02.019010064Z" level=info msg="StartContainer for 
\"4a1acab2df5d2773669c3cf032903f0389027f6d379ef993b6d588e6be3ca35a\"" Jan 29 12:24:02.048929 systemd[1]: Started cri-containerd-4a1acab2df5d2773669c3cf032903f0389027f6d379ef993b6d588e6be3ca35a.scope - libcontainer container 4a1acab2df5d2773669c3cf032903f0389027f6d379ef993b6d588e6be3ca35a. Jan 29 12:24:02.070072 containerd[1825]: time="2025-01-29T12:24:02.070039284Z" level=info msg="StartContainer for \"4a1acab2df5d2773669c3cf032903f0389027f6d379ef993b6d588e6be3ca35a\" returns successfully" Jan 29 12:24:02.758664 kubelet[3066]: I0129 12:24:02.758601 3066 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:24:02.758664 kubelet[3066]: I0129 12:24:02.758673 3066 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:24:02.895031 kubelet[3066]: I0129 12:24:02.894899 3066 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-whf72" podStartSLOduration=22.858730453 podStartE2EDuration="28.894848727s" podCreationTimestamp="2025-01-29 12:23:34 +0000 UTC" firstStartedPulling="2025-01-29 12:23:55.976089301 +0000 UTC m=+31.294545731" lastFinishedPulling="2025-01-29 12:24:02.012207583 +0000 UTC m=+37.330664005" observedRunningTime="2025-01-29 12:24:02.893856031 +0000 UTC m=+38.212312522" watchObservedRunningTime="2025-01-29 12:24:02.894848727 +0000 UTC m=+38.213305211" Jan 29 12:24:04.054492 kubelet[3066]: I0129 12:24:04.054363 3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:24:23.712875 kubelet[3066]: I0129 12:24:23.712679 3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:24:24.720931 containerd[1825]: time="2025-01-29T12:24:24.720910647Z" level=info msg="StopPodSandbox for 
\"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\"" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.740 [WARNING][6185] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"82e8b816-5e3d-47c1-8fd2-31578a33896d", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2", Pod:"calico-apiserver-c5d654d57-6z9pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccd8eed4b22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.740 [INFO][6185] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.740 [INFO][6185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" iface="eth0" netns="" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.740 [INFO][6185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.740 [INFO][6185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.751 [INFO][6197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.751 [INFO][6197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.751 [INFO][6197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.755 [WARNING][6197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.755 [INFO][6197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.757 [INFO][6197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:24.758347 containerd[1825]: 2025-01-29 12:24:24.757 [INFO][6185] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.758679 containerd[1825]: time="2025-01-29T12:24:24.758375477Z" level=info msg="TearDown network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\" successfully" Jan 29 12:24:24.758679 containerd[1825]: time="2025-01-29T12:24:24.758398477Z" level=info msg="StopPodSandbox for \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\" returns successfully" Jan 29 12:24:24.758800 containerd[1825]: time="2025-01-29T12:24:24.758754425Z" level=info msg="RemovePodSandbox for \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\"" Jan 29 12:24:24.758800 containerd[1825]: time="2025-01-29T12:24:24.758773265Z" level=info msg="Forcibly stopping sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\"" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.778 [WARNING][6226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"82e8b816-5e3d-47c1-8fd2-31578a33896d", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"57c08d8f71bd503ee29711d3d359f6bbd707cae1978527ff65405aedd6a5cfe2", Pod:"calico-apiserver-c5d654d57-6z9pz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccd8eed4b22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.778 [INFO][6226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.778 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" iface="eth0" netns="" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.778 [INFO][6226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.778 [INFO][6226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.790 [INFO][6242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.790 [INFO][6242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.790 [INFO][6242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.794 [WARNING][6242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.794 [INFO][6242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" HandleID="k8s-pod-network.bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--6z9pz-eth0" Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.796 [INFO][6242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:24.797494 containerd[1825]: 2025-01-29 12:24:24.796 [INFO][6226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8" Jan 29 12:24:24.797859 containerd[1825]: time="2025-01-29T12:24:24.797492488Z" level=info msg="TearDown network for sandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\" successfully" Jan 29 12:24:24.799076 containerd[1825]: time="2025-01-29T12:24:24.799017834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:24:24.799076 containerd[1825]: time="2025-01-29T12:24:24.799070510Z" level=info msg="RemovePodSandbox \"bda912a49c7b1366dae97180baa291d93935c1d162aa4bfd88a267a1030717d8\" returns successfully" Jan 29 12:24:24.799476 containerd[1825]: time="2025-01-29T12:24:24.799463999Z" level=info msg="StopPodSandbox for \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\"" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.818 [WARNING][6267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2246e19b-2c9c-4027-a711-4fb712a9ea9e", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85", Pod:"coredns-6f6b679f8f-hhhb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadc0e5b0eae", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.819 [INFO][6267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.819 [INFO][6267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" iface="eth0" netns="" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.819 [INFO][6267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.819 [INFO][6267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.829 [INFO][6279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.829 [INFO][6279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.829 [INFO][6279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.833 [WARNING][6279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.833 [INFO][6279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.834 [INFO][6279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:24.835490 containerd[1825]: 2025-01-29 12:24:24.834 [INFO][6267] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.835490 containerd[1825]: time="2025-01-29T12:24:24.835484737Z" level=info msg="TearDown network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\" successfully" Jan 29 12:24:24.835846 containerd[1825]: time="2025-01-29T12:24:24.835500907Z" level=info msg="StopPodSandbox for \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\" returns successfully" Jan 29 12:24:24.835846 containerd[1825]: time="2025-01-29T12:24:24.835798936Z" level=info msg="RemovePodSandbox for \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\"" Jan 29 12:24:24.835846 containerd[1825]: time="2025-01-29T12:24:24.835817527Z" level=info msg="Forcibly stopping sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\"" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.855 [WARNING][6305] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2246e19b-2c9c-4027-a711-4fb712a9ea9e", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"2e98c3b0d323c91b6ac7c06908f604a89847858a9eace74ad80686ca8ac66a85", Pod:"coredns-6f6b679f8f-hhhb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliadc0e5b0eae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.855 [INFO][6305] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.855 [INFO][6305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" iface="eth0" netns="" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.855 [INFO][6305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.855 [INFO][6305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.865 [INFO][6320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.865 [INFO][6320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.865 [INFO][6320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.869 [WARNING][6320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.869 [INFO][6320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" HandleID="k8s-pod-network.56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--hhhb4-eth0" Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.870 [INFO][6320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:24.871938 containerd[1825]: 2025-01-29 12:24:24.871 [INFO][6305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba" Jan 29 12:24:24.872452 containerd[1825]: time="2025-01-29T12:24:24.871966522Z" level=info msg="TearDown network for sandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\" successfully" Jan 29 12:24:24.873593 containerd[1825]: time="2025-01-29T12:24:24.873578602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:24:24.873641 containerd[1825]: time="2025-01-29T12:24:24.873610074Z" level=info msg="RemovePodSandbox \"56b8d1b39a7e57bfdaf480eac018985b10ad1207bcbd4637f77525f9ace3ecba\" returns successfully" Jan 29 12:24:24.873923 containerd[1825]: time="2025-01-29T12:24:24.873910665Z" level=info msg="StopPodSandbox for \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\"" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.891 [WARNING][6351] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be14fd9c-84ab-4c38-a4a3-407c709a8f57", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21", Pod:"coredns-6f6b679f8f-lq4x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71b310edae6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.892 [INFO][6351] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.892 [INFO][6351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" iface="eth0" netns="" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.892 [INFO][6351] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.892 [INFO][6351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.902 [INFO][6364] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.902 [INFO][6364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.902 [INFO][6364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.905 [WARNING][6364] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.905 [INFO][6364] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.906 [INFO][6364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:24.908054 containerd[1825]: 2025-01-29 12:24:24.907 [INFO][6351] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.908400 containerd[1825]: time="2025-01-29T12:24:24.908077207Z" level=info msg="TearDown network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\" successfully" Jan 29 12:24:24.908400 containerd[1825]: time="2025-01-29T12:24:24.908096390Z" level=info msg="StopPodSandbox for \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\" returns successfully" Jan 29 12:24:24.908400 containerd[1825]: time="2025-01-29T12:24:24.908368078Z" level=info msg="RemovePodSandbox for \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\"" Jan 29 12:24:24.908400 containerd[1825]: time="2025-01-29T12:24:24.908386386Z" level=info msg="Forcibly stopping sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\"" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.927 [WARNING][6393] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be14fd9c-84ab-4c38-a4a3-407c709a8f57", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"61d6e64f08562faa2a290eda31cf4008fd9014ae8933edef1a3d0963598d2b21", Pod:"coredns-6f6b679f8f-lq4x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71b310edae6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.927 [INFO][6393] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.927 [INFO][6393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" iface="eth0" netns="" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.927 [INFO][6393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.927 [INFO][6393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.937 [INFO][6407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.937 [INFO][6407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.937 [INFO][6407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.941 [WARNING][6407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.941 [INFO][6407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" HandleID="k8s-pod-network.18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Workload="ci--4081.3.0--a--eb3371d08a-k8s-coredns--6f6b679f8f--lq4x8-eth0" Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.942 [INFO][6407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:24.943716 containerd[1825]: 2025-01-29 12:24:24.943 [INFO][6393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843" Jan 29 12:24:24.944092 containerd[1825]: time="2025-01-29T12:24:24.943726730Z" level=info msg="TearDown network for sandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\" successfully" Jan 29 12:24:24.945083 containerd[1825]: time="2025-01-29T12:24:24.945020125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:24:24.945083 containerd[1825]: time="2025-01-29T12:24:24.945041765Z" level=info msg="RemovePodSandbox \"18a43f1317dad72a182f572d97fedbb92679651aee5dbdab4acbdbd41b82b843\" returns successfully" Jan 29 12:24:24.945266 containerd[1825]: time="2025-01-29T12:24:24.945231480Z" level=info msg="StopPodSandbox for \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\"" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.964 [WARNING][6437] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1e7c23d-d84d-4ad1-b510-8659281971e0", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4", Pod:"calico-apiserver-c5d654d57-hvmzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia11af41b6e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.964 [INFO][6437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.964 [INFO][6437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" iface="eth0" netns="" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.964 [INFO][6437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.964 [INFO][6437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.976 [INFO][6450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.976 [INFO][6450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.976 [INFO][6450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.982 [WARNING][6450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.982 [INFO][6450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.986 [INFO][6450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:24.991738 containerd[1825]: 2025-01-29 12:24:24.988 [INFO][6437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:24.991738 containerd[1825]: time="2025-01-29T12:24:24.991684379Z" level=info msg="TearDown network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\" successfully" Jan 29 12:24:24.991738 containerd[1825]: time="2025-01-29T12:24:24.991740471Z" level=info msg="StopPodSandbox for \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\" returns successfully" Jan 29 12:24:24.993966 containerd[1825]: time="2025-01-29T12:24:24.992675318Z" level=info msg="RemovePodSandbox for \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\"" Jan 29 12:24:24.993966 containerd[1825]: time="2025-01-29T12:24:24.992744174Z" level=info msg="Forcibly stopping sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\"" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.057 [WARNING][6477] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0", GenerateName:"calico-apiserver-c5d654d57-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1e7c23d-d84d-4ad1-b510-8659281971e0", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c5d654d57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"20a84e8f579d8cf8631f5e7dc28b38c1accbaeb131bee177bcbd540cf4e53ea4", Pod:"calico-apiserver-c5d654d57-hvmzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia11af41b6e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.058 [INFO][6477] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.058 [INFO][6477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" iface="eth0" netns="" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.058 [INFO][6477] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.058 [INFO][6477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.076 [INFO][6493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.077 [INFO][6493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.077 [INFO][6493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.083 [WARNING][6493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.083 [INFO][6493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" HandleID="k8s-pod-network.a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--apiserver--c5d654d57--hvmzk-eth0" Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.084 [INFO][6493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:25.086818 containerd[1825]: 2025-01-29 12:24:25.085 [INFO][6477] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb" Jan 29 12:24:25.087329 containerd[1825]: time="2025-01-29T12:24:25.086850644Z" level=info msg="TearDown network for sandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\" successfully" Jan 29 12:24:25.088711 containerd[1825]: time="2025-01-29T12:24:25.088670869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:24:25.088711 containerd[1825]: time="2025-01-29T12:24:25.088695923Z" level=info msg="RemovePodSandbox \"a09d2c0c032f3d4214e2aa3e3cd81e1aebe2288b90c51c18f1b44e3e747a84bb\" returns successfully" Jan 29 12:24:25.088974 containerd[1825]: time="2025-01-29T12:24:25.088963517Z" level=info msg="StopPodSandbox for \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\"" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.107 [WARNING][6523] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262d59b9-71cb-45de-b97d-563bbb9e2a62", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3", Pod:"csi-node-driver-whf72", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali612a3150fe1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.107 [INFO][6523] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.107 [INFO][6523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" iface="eth0" netns="" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.107 [INFO][6523] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.107 [INFO][6523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.117 [INFO][6539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.117 [INFO][6539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.117 [INFO][6539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.120 [WARNING][6539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.120 [INFO][6539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.122 [INFO][6539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:25.123201 containerd[1825]: 2025-01-29 12:24:25.122 [INFO][6523] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.123504 containerd[1825]: time="2025-01-29T12:24:25.123232558Z" level=info msg="TearDown network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\" successfully" Jan 29 12:24:25.123504 containerd[1825]: time="2025-01-29T12:24:25.123247682Z" level=info msg="StopPodSandbox for \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\" returns successfully" Jan 29 12:24:25.123504 containerd[1825]: time="2025-01-29T12:24:25.123493594Z" level=info msg="RemovePodSandbox for \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\"" Jan 29 12:24:25.123566 containerd[1825]: time="2025-01-29T12:24:25.123508956Z" level=info msg="Forcibly stopping sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\"" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.141 [WARNING][6566] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"262d59b9-71cb-45de-b97d-563bbb9e2a62", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"8afd3d0ab31fc1d47b8c66fabea13503ce974e272045ab9687e96fef721370a3", Pod:"csi-node-driver-whf72", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali612a3150fe1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.141 [INFO][6566] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.141 [INFO][6566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" iface="eth0" netns="" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.141 [INFO][6566] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.141 [INFO][6566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.152 [INFO][6579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.152 [INFO][6579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.152 [INFO][6579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.156 [WARNING][6579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.156 [INFO][6579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" HandleID="k8s-pod-network.b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Workload="ci--4081.3.0--a--eb3371d08a-k8s-csi--node--driver--whf72-eth0" Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.157 [INFO][6579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:25.158948 containerd[1825]: 2025-01-29 12:24:25.158 [INFO][6566] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc" Jan 29 12:24:25.158948 containerd[1825]: time="2025-01-29T12:24:25.158942301Z" level=info msg="TearDown network for sandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\" successfully" Jan 29 12:24:25.160447 containerd[1825]: time="2025-01-29T12:24:25.160406697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:24:25.160447 containerd[1825]: time="2025-01-29T12:24:25.160431112Z" level=info msg="RemovePodSandbox \"b1c78bc33a759ce373f6746dd073326a8df70b45122cc0de8bf0b1fbda848ffc\" returns successfully" Jan 29 12:24:25.160714 containerd[1825]: time="2025-01-29T12:24:25.160671732Z" level=info msg="StopPodSandbox for \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\"" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.179 [WARNING][6609] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0", GenerateName:"calico-kube-controllers-bb5789b9c-", Namespace:"calico-system", SelfLink:"", UID:"47e07d70-9f81-457c-b809-e3461b33a8b3", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bb5789b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0", Pod:"calico-kube-controllers-bb5789b9c-zwg2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3a220ed64f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.179 [INFO][6609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.179 [INFO][6609] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" iface="eth0" netns="" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.179 [INFO][6609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.179 [INFO][6609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.191 [INFO][6625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.191 [INFO][6625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.192 [INFO][6625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.196 [WARNING][6625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.196 [INFO][6625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.197 [INFO][6625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:25.198859 containerd[1825]: 2025-01-29 12:24:25.198 [INFO][6609] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.199202 containerd[1825]: time="2025-01-29T12:24:25.198848200Z" level=info msg="TearDown network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\" successfully" Jan 29 12:24:25.199202 containerd[1825]: time="2025-01-29T12:24:25.198875180Z" level=info msg="StopPodSandbox for \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\" returns successfully" Jan 29 12:24:25.199202 containerd[1825]: time="2025-01-29T12:24:25.199181157Z" level=info msg="RemovePodSandbox for \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\"" Jan 29 12:24:25.199261 containerd[1825]: time="2025-01-29T12:24:25.199202825Z" level=info msg="Forcibly stopping sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\"" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.221 [WARNING][6655] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0", GenerateName:"calico-kube-controllers-bb5789b9c-", Namespace:"calico-system", SelfLink:"", UID:"47e07d70-9f81-457c-b809-e3461b33a8b3", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 23, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bb5789b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-eb3371d08a", ContainerID:"f512b077d76f01a088753bf832285dbc559c56fa16bf509f94ec17b418056ef0", Pod:"calico-kube-controllers-bb5789b9c-zwg2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3a220ed64f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.221 [INFO][6655] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.221 [INFO][6655] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" iface="eth0" netns="" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.221 [INFO][6655] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.221 [INFO][6655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.237 [INFO][6668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.237 [INFO][6668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.237 [INFO][6668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.242 [WARNING][6668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.242 [INFO][6668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" HandleID="k8s-pod-network.652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Workload="ci--4081.3.0--a--eb3371d08a-k8s-calico--kube--controllers--bb5789b9c--zwg2m-eth0" Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.243 [INFO][6668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:24:25.245470 containerd[1825]: 2025-01-29 12:24:25.244 [INFO][6655] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e" Jan 29 12:24:25.245470 containerd[1825]: time="2025-01-29T12:24:25.245454377Z" level=info msg="TearDown network for sandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\" successfully" Jan 29 12:24:25.247057 containerd[1825]: time="2025-01-29T12:24:25.247043642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 12:24:25.247107 containerd[1825]: time="2025-01-29T12:24:25.247068541Z" level=info msg="RemovePodSandbox \"652b82772f84a7e77aff7b06b4cf0844b51c2377efa9852ed9d25cac0864fd7e\" returns successfully" Jan 29 12:26:33.231727 update_engine[1810]: I20250129 12:26:33.231669 1810 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 12:26:33.231727 update_engine[1810]: I20250129 12:26:33.231695 1810 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 12:26:33.232066 update_engine[1810]: I20250129 12:26:33.231786 1810 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 12:26:33.232108 update_engine[1810]: I20250129 12:26:33.232065 1810 omaha_request_params.cc:62] Current group set to lts Jan 29 12:26:33.232199 update_engine[1810]: I20250129 12:26:33.232143 1810 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 12:26:33.232199 update_engine[1810]: I20250129 12:26:33.232148 1810 update_attempter.cc:643] Scheduling an action processor start. 
Jan 29 12:26:33.232199 update_engine[1810]: I20250129 12:26:33.232156 1810 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:26:33.232199 update_engine[1810]: I20250129 12:26:33.232193 1810 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 12:26:33.232284 update_engine[1810]: I20250129 12:26:33.232234 1810 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:26:33.232284 update_engine[1810]: I20250129 12:26:33.232239 1810 omaha_request_action.cc:272] Request: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: Jan 29 12:26:33.232284 update_engine[1810]: I20250129 12:26:33.232241 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:26:33.232473 locksmithd[1860]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 12:26:33.233038 update_engine[1810]: I20250129 12:26:33.233000 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:26:33.233201 update_engine[1810]: I20250129 12:26:33.233161 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 12:26:33.234100 update_engine[1810]: E20250129 12:26:33.234071 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:26:33.234169 update_engine[1810]: I20250129 12:26:33.234103 1810 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 12:26:43.189982 update_engine[1810]: I20250129 12:26:43.189806 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:26:43.190995 update_engine[1810]: I20250129 12:26:43.190400 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:26:43.191117 update_engine[1810]: I20250129 12:26:43.190968 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:26:43.191750 update_engine[1810]: E20250129 12:26:43.191642 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:26:43.191962 update_engine[1810]: I20250129 12:26:43.191773 1810 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 12:26:53.190026 update_engine[1810]: I20250129 12:26:53.189903 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:26:53.191023 update_engine[1810]: I20250129 12:26:53.190459 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:26:53.191145 update_engine[1810]: I20250129 12:26:53.191011 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 12:26:53.191818 update_engine[1810]: E20250129 12:26:53.191697 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:26:53.192019 update_engine[1810]: I20250129 12:26:53.191845 1810 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 12:27:03.190256 update_engine[1810]: I20250129 12:27:03.190083 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:27:03.191289 update_engine[1810]: I20250129 12:27:03.190717 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:27:03.191289 update_engine[1810]: I20250129 12:27:03.191229 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:27:03.192163 update_engine[1810]: E20250129 12:27:03.192046 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:27:03.192476 update_engine[1810]: I20250129 12:27:03.192188 1810 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:27:03.192476 update_engine[1810]: I20250129 12:27:03.192217 1810 omaha_request_action.cc:617] Omaha request response: Jan 29 12:27:03.192476 update_engine[1810]: E20250129 12:27:03.192379 1810 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 12:27:03.192476 update_engine[1810]: I20250129 12:27:03.192428 1810 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 12:27:03.192476 update_engine[1810]: I20250129 12:27:03.192447 1810 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:27:03.192476 update_engine[1810]: I20250129 12:27:03.192463 1810 update_attempter.cc:306] Processing Done. Jan 29 12:27:03.193070 update_engine[1810]: E20250129 12:27:03.192495 1810 update_attempter.cc:619] Update failed. 
Jan 29 12:27:03.193070 update_engine[1810]: I20250129 12:27:03.192512 1810 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 12:27:03.193070 update_engine[1810]: I20250129 12:27:03.192528 1810 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 12:27:03.193070 update_engine[1810]: I20250129 12:27:03.192584 1810 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 29 12:27:03.193070 update_engine[1810]: I20250129 12:27:03.192739 1810 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:27:03.193070 update_engine[1810]: I20250129 12:27:03.192800 1810 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:27:03.193070 update_engine[1810]: I20250129 12:27:03.192819 1810 omaha_request_action.cc:272] Request: Jan 29 12:27:03.193070 update_engine[1810]: Jan 29 12:27:03.193070 update_engine[1810]: Jan 29 12:27:03.193070 update_engine[1810]: Jan 29 12:27:03.193070 update_engine[1810]: Jan 29 12:27:03.193070 update_engine[1810]: Jan 29 12:27:03.193070 update_engine[1810]: Jan 29 12:27:03.193070 update_engine[1810]: I20250129 12:27:03.192835 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:27:03.194273 update_engine[1810]: I20250129 12:27:03.193234 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:27:03.194273 update_engine[1810]: I20250129 12:27:03.193692 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 12:27:03.194460 locksmithd[1860]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 12:27:03.195123 update_engine[1810]: E20250129 12:27:03.194597 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:27:03.195123 update_engine[1810]: I20250129 12:27:03.194729 1810 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:27:03.195123 update_engine[1810]: I20250129 12:27:03.194758 1810 omaha_request_action.cc:617] Omaha request response: Jan 29 12:27:03.195123 update_engine[1810]: I20250129 12:27:03.194777 1810 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:27:03.195123 update_engine[1810]: I20250129 12:27:03.194793 1810 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:27:03.195123 update_engine[1810]: I20250129 12:27:03.194808 1810 update_attempter.cc:306] Processing Done. Jan 29 12:27:03.195123 update_engine[1810]: I20250129 12:27:03.194825 1810 update_attempter.cc:310] Error event sent. Jan 29 12:27:03.195123 update_engine[1810]: I20250129 12:27:03.194849 1810 update_check_scheduler.cc:74] Next update check in 45m8s Jan 29 12:27:03.195877 locksmithd[1860]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 12:29:23.932351 systemd[1]: Started sshd@9-139.178.70.85:22-139.178.89.65:33914.service - OpenSSH per-connection server daemon (139.178.89.65:33914). Jan 29 12:29:23.971033 sshd[7374]: Accepted publickey for core from 139.178.89.65 port 33914 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:23.971766 sshd[7374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:23.974367 systemd-logind[1805]: New session 12 of user core. 
Jan 29 12:29:23.987860 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:29:24.076054 sshd[7374]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:24.077760 systemd[1]: sshd@9-139.178.70.85:22-139.178.89.65:33914.service: Deactivated successfully. Jan 29 12:29:24.078658 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:29:24.079113 systemd-logind[1805]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:29:24.079829 systemd-logind[1805]: Removed session 12. Jan 29 12:29:29.117966 systemd[1]: Started sshd@10-139.178.70.85:22-139.178.89.65:33922.service - OpenSSH per-connection server daemon (139.178.89.65:33922). Jan 29 12:29:29.174955 sshd[7404]: Accepted publickey for core from 139.178.89.65 port 33922 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:29.176108 sshd[7404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:29.180004 systemd-logind[1805]: New session 13 of user core. Jan 29 12:29:29.192765 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:29:29.333526 sshd[7404]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:29.335695 systemd[1]: sshd@10-139.178.70.85:22-139.178.89.65:33922.service: Deactivated successfully. Jan 29 12:29:29.336671 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:29:29.337131 systemd-logind[1805]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:29:29.337680 systemd-logind[1805]: Removed session 13. Jan 29 12:29:34.370063 systemd[1]: Started sshd@11-139.178.70.85:22-139.178.89.65:56526.service - OpenSSH per-connection server daemon (139.178.89.65:56526). 
Jan 29 12:29:34.399786 sshd[7434]: Accepted publickey for core from 139.178.89.65 port 56526 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:34.400481 sshd[7434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:34.402906 systemd-logind[1805]: New session 14 of user core. Jan 29 12:29:34.418821 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:29:34.503993 sshd[7434]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:34.521570 systemd[1]: sshd@11-139.178.70.85:22-139.178.89.65:56526.service: Deactivated successfully. Jan 29 12:29:34.522505 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:29:34.523352 systemd-logind[1805]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:29:34.524194 systemd[1]: Started sshd@12-139.178.70.85:22-139.178.89.65:56534.service - OpenSSH per-connection server daemon (139.178.89.65:56534). Jan 29 12:29:34.524867 systemd-logind[1805]: Removed session 14. Jan 29 12:29:34.559908 sshd[7460]: Accepted publickey for core from 139.178.89.65 port 56534 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:34.560819 sshd[7460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:34.564121 systemd-logind[1805]: New session 15 of user core. Jan 29 12:29:34.580924 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:29:34.714397 sshd[7460]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:34.724363 systemd[1]: sshd@12-139.178.70.85:22-139.178.89.65:56534.service: Deactivated successfully. Jan 29 12:29:34.725210 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:29:34.725948 systemd-logind[1805]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:29:34.726499 systemd[1]: Started sshd@13-139.178.70.85:22-139.178.89.65:56536.service - OpenSSH per-connection server daemon (139.178.89.65:56536). 
Jan 29 12:29:34.727100 systemd-logind[1805]: Removed session 15. Jan 29 12:29:34.757563 sshd[7485]: Accepted publickey for core from 139.178.89.65 port 56536 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:34.758284 sshd[7485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:34.760927 systemd-logind[1805]: New session 16 of user core. Jan 29 12:29:34.779833 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:29:34.925657 sshd[7485]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:34.927690 systemd[1]: sshd@13-139.178.70.85:22-139.178.89.65:56536.service: Deactivated successfully. Jan 29 12:29:34.928568 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:29:34.928967 systemd-logind[1805]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:29:34.929471 systemd-logind[1805]: Removed session 16. Jan 29 12:29:39.943065 systemd[1]: Started sshd@14-139.178.70.85:22-139.178.89.65:56544.service - OpenSSH per-connection server daemon (139.178.89.65:56544). Jan 29 12:29:39.975165 sshd[7517]: Accepted publickey for core from 139.178.89.65 port 56544 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:39.978619 sshd[7517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:39.989902 systemd-logind[1805]: New session 17 of user core. Jan 29 12:29:40.006011 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:29:40.094565 sshd[7517]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:40.096675 systemd[1]: sshd@14-139.178.70.85:22-139.178.89.65:56544.service: Deactivated successfully. Jan 29 12:29:40.097606 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:29:40.098042 systemd-logind[1805]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:29:40.098510 systemd-logind[1805]: Removed session 17. 
Jan 29 12:29:45.118029 systemd[1]: Started sshd@15-139.178.70.85:22-139.178.89.65:41394.service - OpenSSH per-connection server daemon (139.178.89.65:41394). Jan 29 12:29:45.181763 sshd[7593]: Accepted publickey for core from 139.178.89.65 port 41394 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:45.182408 sshd[7593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:45.185127 systemd-logind[1805]: New session 18 of user core. Jan 29 12:29:45.205042 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:29:45.300492 sshd[7593]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:45.302401 systemd[1]: sshd@15-139.178.70.85:22-139.178.89.65:41394.service: Deactivated successfully. Jan 29 12:29:45.303457 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:29:45.304299 systemd-logind[1805]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:29:45.305092 systemd-logind[1805]: Removed session 18. Jan 29 12:29:50.328994 systemd[1]: Started sshd@16-139.178.70.85:22-139.178.89.65:41404.service - OpenSSH per-connection server daemon (139.178.89.65:41404). Jan 29 12:29:50.363793 sshd[7619]: Accepted publickey for core from 139.178.89.65 port 41404 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:50.364676 sshd[7619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:50.367330 systemd-logind[1805]: New session 19 of user core. Jan 29 12:29:50.384880 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:29:50.507242 sshd[7619]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:50.508739 systemd[1]: sshd@16-139.178.70.85:22-139.178.89.65:41404.service: Deactivated successfully. Jan 29 12:29:50.509642 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:29:50.510369 systemd-logind[1805]: Session 19 logged out. Waiting for processes to exit. 
Jan 29 12:29:50.511114 systemd-logind[1805]: Removed session 19. Jan 29 12:29:55.527412 systemd[1]: Started sshd@17-139.178.70.85:22-139.178.89.65:41000.service - OpenSSH per-connection server daemon (139.178.89.65:41000). Jan 29 12:29:55.562263 sshd[7646]: Accepted publickey for core from 139.178.89.65 port 41000 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:55.562940 sshd[7646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:55.565516 systemd-logind[1805]: New session 20 of user core. Jan 29 12:29:55.574876 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:29:55.659098 sshd[7646]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:55.671248 systemd[1]: sshd@17-139.178.70.85:22-139.178.89.65:41000.service: Deactivated successfully. Jan 29 12:29:55.672055 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:29:55.672809 systemd-logind[1805]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:29:55.673488 systemd[1]: Started sshd@18-139.178.70.85:22-139.178.89.65:41006.service - OpenSSH per-connection server daemon (139.178.89.65:41006). Jan 29 12:29:55.674035 systemd-logind[1805]: Removed session 20. Jan 29 12:29:55.705320 sshd[7672]: Accepted publickey for core from 139.178.89.65 port 41006 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:55.706006 sshd[7672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:55.708789 systemd-logind[1805]: New session 21 of user core. Jan 29 12:29:55.718824 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:29:55.852355 sshd[7672]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:55.882666 systemd[1]: sshd@18-139.178.70.85:22-139.178.89.65:41006.service: Deactivated successfully. Jan 29 12:29:55.883743 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 29 12:29:55.884784 systemd-logind[1805]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:29:55.885705 systemd[1]: Started sshd@19-139.178.70.85:22-139.178.89.65:41016.service - OpenSSH per-connection server daemon (139.178.89.65:41016). Jan 29 12:29:55.886482 systemd-logind[1805]: Removed session 21. Jan 29 12:29:55.933825 sshd[7698]: Accepted publickey for core from 139.178.89.65 port 41016 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:55.934997 sshd[7698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:55.938833 systemd-logind[1805]: New session 22 of user core. Jan 29 12:29:55.960755 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:29:57.156042 sshd[7698]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:57.168472 systemd[1]: sshd@19-139.178.70.85:22-139.178.89.65:41016.service: Deactivated successfully. Jan 29 12:29:57.169359 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:29:57.170053 systemd-logind[1805]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:29:57.170650 systemd[1]: Started sshd@20-139.178.70.85:22-139.178.89.65:41024.service - OpenSSH per-connection server daemon (139.178.89.65:41024). Jan 29 12:29:57.171188 systemd-logind[1805]: Removed session 22. Jan 29 12:29:57.202446 sshd[7728]: Accepted publickey for core from 139.178.89.65 port 41024 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:57.205726 sshd[7728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:57.216319 systemd-logind[1805]: New session 23 of user core. Jan 29 12:29:57.238863 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:29:57.420749 sshd[7728]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:57.449632 systemd[1]: sshd@20-139.178.70.85:22-139.178.89.65:41024.service: Deactivated successfully. 
Jan 29 12:29:57.451248 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:29:57.452638 systemd-logind[1805]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:29:57.454189 systemd[1]: Started sshd@21-139.178.70.85:22-139.178.89.65:41026.service - OpenSSH per-connection server daemon (139.178.89.65:41026). Jan 29 12:29:57.455447 systemd-logind[1805]: Removed session 23. Jan 29 12:29:57.543348 sshd[7754]: Accepted publickey for core from 139.178.89.65 port 41026 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:29:57.545067 sshd[7754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:29:57.550389 systemd-logind[1805]: New session 24 of user core. Jan 29 12:29:57.567882 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:29:57.712687 sshd[7754]: pam_unix(sshd:session): session closed for user core Jan 29 12:29:57.714391 systemd[1]: sshd@21-139.178.70.85:22-139.178.89.65:41026.service: Deactivated successfully. Jan 29 12:29:57.715373 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:29:57.716134 systemd-logind[1805]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:29:57.716746 systemd-logind[1805]: Removed session 24. Jan 29 12:30:02.742923 systemd[1]: Started sshd@22-139.178.70.85:22-139.178.89.65:51850.service - OpenSSH per-connection server daemon (139.178.89.65:51850). Jan 29 12:30:02.777930 sshd[7785]: Accepted publickey for core from 139.178.89.65 port 51850 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:30:02.778558 sshd[7785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:30:02.781192 systemd-logind[1805]: New session 25 of user core. Jan 29 12:30:02.798715 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 29 12:30:02.883627 sshd[7785]: pam_unix(sshd:session): session closed for user core Jan 29 12:30:02.885302 systemd[1]: sshd@22-139.178.70.85:22-139.178.89.65:51850.service: Deactivated successfully. Jan 29 12:30:02.886369 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:30:02.887183 systemd-logind[1805]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:30:02.888001 systemd-logind[1805]: Removed session 25. Jan 29 12:30:07.908707 systemd[1]: Started sshd@23-139.178.70.85:22-139.178.89.65:51864.service - OpenSSH per-connection server daemon (139.178.89.65:51864). Jan 29 12:30:07.938385 sshd[7850]: Accepted publickey for core from 139.178.89.65 port 51864 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:30:07.939081 sshd[7850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:30:07.941844 systemd-logind[1805]: New session 26 of user core. Jan 29 12:30:07.961733 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 12:30:08.050610 sshd[7850]: pam_unix(sshd:session): session closed for user core Jan 29 12:30:08.052256 systemd[1]: sshd@23-139.178.70.85:22-139.178.89.65:51864.service: Deactivated successfully. Jan 29 12:30:08.053224 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 12:30:08.053966 systemd-logind[1805]: Session 26 logged out. Waiting for processes to exit. Jan 29 12:30:08.054476 systemd-logind[1805]: Removed session 26. Jan 29 12:30:13.082888 systemd[1]: Started sshd@24-139.178.70.85:22-139.178.89.65:33870.service - OpenSSH per-connection server daemon (139.178.89.65:33870). Jan 29 12:30:13.112604 sshd[7903]: Accepted publickey for core from 139.178.89.65 port 33870 ssh2: RSA SHA256:fgDbj4bpKQS6wm5InjL5kQGT1M6pbtsOX1hR7ztAC4o Jan 29 12:30:13.113308 sshd[7903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:30:13.116037 systemd-logind[1805]: New session 27 of user core. 
Jan 29 12:30:13.131047 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 12:30:13.222589 sshd[7903]: pam_unix(sshd:session): session closed for user core Jan 29 12:30:13.224150 systemd[1]: sshd@24-139.178.70.85:22-139.178.89.65:33870.service: Deactivated successfully. Jan 29 12:30:13.225137 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 12:30:13.225873 systemd-logind[1805]: Session 27 logged out. Waiting for processes to exit. Jan 29 12:30:13.226453 systemd-logind[1805]: Removed session 27.