Jan 13 22:50:41.989760 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Jan 13 22:50:41.989774 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 22:50:41.989780 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 22:50:41.989786 kernel: BIOS-provided physical RAM map:
Jan 13 22:50:41.989790 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 13 22:50:41.989793 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 13 22:50:41.989798 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 13 22:50:41.989802 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 13 22:50:41.989806 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 13 22:50:41.989810 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2afff] usable
Jan 13 22:50:41.989814 kernel: BIOS-e820: [mem 0x0000000081b2b000-0x0000000081b2bfff] ACPI NVS
Jan 13 22:50:41.989819 kernel: BIOS-e820: [mem 0x0000000081b2c000-0x0000000081b2cfff] reserved
Jan 13 22:50:41.989823 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x000000008afccfff] usable
Jan 13 22:50:41.989827 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 13 22:50:41.989832 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 13 22:50:41.989837 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 13 22:50:41.989842 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 13 22:50:41.989847 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 13 22:50:41.989851 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 13 22:50:41.989856 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 22:50:41.989860 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 13 22:50:41.989865 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 13 22:50:41.989869 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 13 22:50:41.989874 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 13 22:50:41.989878 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 13 22:50:41.989883 kernel: NX (Execute Disable) protection: active
Jan 13 22:50:41.989887 kernel: APIC: Static calls initialized
Jan 13 22:50:41.989892 kernel: SMBIOS 3.2.1 present.
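The e820 map above is the firmware's view of physical memory; summing its "usable" ranges gives the RAM the kernel can manage. A minimal sketch (hypothetical helper, not part of the boot) that does the arithmetic from a dmesg capture:

    # Sum the "usable" BIOS-e820 ranges from a dmesg capture.
    import re

    E820_USABLE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable")

    def usable_bytes(log_text: str) -> int:
        total = 0
        for m in E820_USABLE.finditer(log_text):
            start, end = (int(x, 16) for x in m.groups())
            total += end - start + 1  # e820 ranges are inclusive
        return total

    # Fed the map above, this reports roughly 32 GiB, consistent with the
    # later "Memory: 32720308K/33452980K available" line.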
Jan 13 22:50:41.989897 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Jan 13 22:50:41.989902 kernel: tsc: Detected 3400.000 MHz processor
Jan 13 22:50:41.989906 kernel: tsc: Detected 3399.906 MHz TSC
Jan 13 22:50:41.989911 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 22:50:41.989916 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 22:50:41.989921 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 13 22:50:41.989925 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 13 22:50:41.989930 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 22:50:41.989935 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 13 22:50:41.989940 kernel: Using GB pages for direct mapping
Jan 13 22:50:41.989945 kernel: ACPI: Early table checksum verification disabled
Jan 13 22:50:41.989950 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 13 22:50:41.989957 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 13 22:50:41.989962 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 13 22:50:41.989967 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 13 22:50:41.989972 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 13 22:50:41.989978 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 13 22:50:41.989983 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 13 22:50:41.989988 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 13 22:50:41.989993 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 13 22:50:41.989998 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 13 22:50:41.990003 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 13 22:50:41.990008 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 13 22:50:41.990014 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 13 22:50:41.990019 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:50:41.990024 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 13 22:50:41.990029 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 13 22:50:41.990033 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:50:41.990038 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:50:41.990043 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 13 22:50:41.990048 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 13 22:50:41.990053 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:50:41.990059 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 13 22:50:41.990064 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 13 22:50:41.990069 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 13 22:50:41.990074 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 13 22:50:41.990079 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 13 22:50:41.990084 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 13 22:50:41.990089 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 13 22:50:41.990094 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 13 22:50:41.990099 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 13 22:50:41.990104 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 13 22:50:41.990109 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 13 22:50:41.990114 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 13 22:50:41.990119 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 13 22:50:41.990124 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 13 22:50:41.990129 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 13 22:50:41.990134 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 13 22:50:41.990139 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 13 22:50:41.990145 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 13 22:50:41.990150 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 13 22:50:41.990155 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 13 22:50:41.990160 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 13 22:50:41.990165 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 13 22:50:41.990176 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 13 22:50:41.990182 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 13 22:50:41.990205 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 13 22:50:41.990210 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 13 22:50:41.990216 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 13 22:50:41.990234 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 13 22:50:41.990239 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 13 22:50:41.990244 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 13 22:50:41.990249 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 13 22:50:41.990254 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 13 22:50:41.990259 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 13 22:50:41.990264 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 13 22:50:41.990269 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 13 22:50:41.990274 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 13 22:50:41.990279 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 13 22:50:41.990284 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 13 22:50:41.990289 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 13 22:50:41.990294 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 13 22:50:41.990299 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 13 22:50:41.990304 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 13 22:50:41.990309 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 13 22:50:41.990314 kernel: No NUMA configuration found
Jan 13 22:50:41.990319 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 13 22:50:41.990325 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 13 22:50:41.990330 kernel: Zone ranges:
Jan 13 22:50:41.990335 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 22:50:41.990340 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 22:50:41.990345 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 13 22:50:41.990350 kernel: Movable zone start for each node
Jan 13 22:50:41.990355 kernel: Early memory node ranges
Jan 13 22:50:41.990359 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 13 22:50:41.990364 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 13 22:50:41.990370 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2afff]
Jan 13 22:50:41.990375 kernel: node 0: [mem 0x0000000081b2d000-0x000000008afccfff]
Jan 13 22:50:41.990380 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 13 22:50:41.990385 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 13 22:50:41.990394 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 13 22:50:41.990399 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 13 22:50:41.990405 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 22:50:41.990410 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 13 22:50:41.990416 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 13 22:50:41.990422 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 13 22:50:41.990427 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 13 22:50:41.990432 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 13 22:50:41.990437 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 13 22:50:41.990443 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 13 22:50:41.990448 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 13 22:50:41.990453 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 13 22:50:41.990459 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 13 22:50:41.990465 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 13 22:50:41.990470 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 13 22:50:41.990475 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 13 22:50:41.990481 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 13 22:50:41.990486 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 13 22:50:41.990491 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 13 22:50:41.990496 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 13 22:50:41.990502 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 13 22:50:41.990507 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 13 22:50:41.990513 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 13 22:50:41.990518 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 13 22:50:41.990524 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 13 22:50:41.990529 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 13 22:50:41.990534 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 13 22:50:41.990539 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 13 22:50:41.990545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 22:50:41.990550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 22:50:41.990555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 22:50:41.990561 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 22:50:41.990567 kernel: TSC deadline timer available
Jan 13 22:50:41.990572 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 13 22:50:41.990577 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 13 22:50:41.990583 kernel: Booting paravirtualized kernel on bare hardware
Jan 13 22:50:41.990588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 22:50:41.990594 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 22:50:41.990599 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 22:50:41.990604 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 22:50:41.990609 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 22:50:41.990616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 22:50:41.990622 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 22:50:41.990627 kernel: random: crng init done
Jan 13 22:50:41.990632 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 13 22:50:41.990638 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 13 22:50:41.990643 kernel: Fallback order for Node 0: 0
Jan 13 22:50:41.990648 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 13 22:50:41.990654 kernel: Policy zone: Normal
Jan 13 22:50:41.990660 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 22:50:41.990665 kernel: software IO TLB: area num 16.
Jan 13 22:50:41.990671 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 732412K reserved, 0K cma-reserved)
Jan 13 22:50:41.990676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 22:50:41.990681 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 22:50:41.990687 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 22:50:41.990692 kernel: Dynamic Preempt: voluntary
Jan 13 22:50:41.990698 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 22:50:41.990704 kernel: rcu: RCU event tracing is enabled.
Jan 13 22:50:41.990710 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 22:50:41.990715 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 22:50:41.990720 kernel: Rude variant of Tasks RCU enabled.
Jan 13 22:50:41.990726 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 22:50:41.990731 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 22:50:41.990736 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 22:50:41.990742 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 13 22:50:41.990747 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 22:50:41.990752 kernel: Console: colour dummy device 80x25
Jan 13 22:50:41.990758 kernel: printk: console [tty0] enabled
Jan 13 22:50:41.990764 kernel: printk: console [ttyS1] enabled
Jan 13 22:50:41.990769 kernel: ACPI: Core revision 20230628
Jan 13 22:50:41.990774 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
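The static ACPI tables enumerated above (RSDP, XSDT, DSDT and the various SSDTs) remain visible at runtime under /sys/firmware/acpi/tables. A small sketch, assuming root on a Linux host:

    # List the ACPI tables the kernel exposes, mirroring the
    # RSDP/XSDT/DSDT/SSDT inventory logged above.
    from pathlib import Path

    for entry in sorted(Path("/sys/firmware/acpi/tables").iterdir()):
        if entry.is_file():
            sig = entry.read_bytes()[:4].decode("ascii", "replace")  # table signature
            print(f"{entry.name}: signature={sig} size={entry.stat().st_size}")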
Jan 13 22:50:41.990780 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 22:50:41.990785 kernel: DMAR: Host address width 39
Jan 13 22:50:41.990790 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 13 22:50:41.990796 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 13 22:50:41.990801 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 13 22:50:41.990807 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 13 22:50:41.990813 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 13 22:50:41.990818 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 13 22:50:41.990823 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 13 22:50:41.990829 kernel: x2apic enabled
Jan 13 22:50:41.990834 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 13 22:50:41.990839 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 13 22:50:41.990845 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 13 22:50:41.990850 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 13 22:50:41.990856 kernel: process: using mwait in idle threads
Jan 13 22:50:41.990862 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 22:50:41.990867 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 22:50:41.990872 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 22:50:41.990877 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 22:50:41.990882 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 22:50:41.990888 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 13 22:50:41.990893 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 22:50:41.990898 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 13 22:50:41.990903 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 13 22:50:41.990908 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 22:50:41.990915 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 22:50:41.990920 kernel: TAA: Mitigation: TSX disabled
Jan 13 22:50:41.990925 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 13 22:50:41.990931 kernel: SRBDS: Mitigation: Microcode
Jan 13 22:50:41.990936 kernel: GDS: Mitigation: Microcode
Jan 13 22:50:41.990941 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 22:50:41.990946 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 22:50:41.990952 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 22:50:41.990957 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 22:50:41.990962 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 22:50:41.990967 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 22:50:41.990973 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 22:50:41.990979 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 22:50:41.990984 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
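The calibration line above derives BogoMIPS directly from loops-per-jiffy; assuming CONFIG_HZ=1000 on this kernel, the printed 6799.81 is just lpj * HZ / 500000. A worked check:

    # Worked check (assumption: CONFIG_HZ=1000 on this Flatcar kernel):
    # BogoMIPS is loops-per-jiffy scaled by HZ / 500000.
    HZ = 1000
    lpj = 3399906                       # from the calibration line above
    print(f"{lpj * HZ / 500_000:.2f}")  # -> 6799.81, matching the log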
Jan 13 22:50:41.990989 kernel: Freeing SMP alternatives memory: 32K
Jan 13 22:50:41.990994 kernel: pid_max: default: 32768 minimum: 301
Jan 13 22:50:41.991000 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 22:50:41.991005 kernel: landlock: Up and running.
Jan 13 22:50:41.991010 kernel: SELinux: Initializing.
Jan 13 22:50:41.991016 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 22:50:41.991021 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 22:50:41.991026 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 13 22:50:41.991032 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:50:41.991038 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:50:41.991043 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:50:41.991049 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 13 22:50:41.991054 kernel: ... version: 4
Jan 13 22:50:41.991059 kernel: ... bit width: 48
Jan 13 22:50:41.991064 kernel: ... generic registers: 4
Jan 13 22:50:41.991070 kernel: ... value mask: 0000ffffffffffff
Jan 13 22:50:41.991075 kernel: ... max period: 00007fffffffffff
Jan 13 22:50:41.991081 kernel: ... fixed-purpose events: 3
Jan 13 22:50:41.991087 kernel: ... event mask: 000000070000000f
Jan 13 22:50:41.991092 kernel: signal: max sigframe size: 2032
Jan 13 22:50:41.991097 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 13 22:50:41.991103 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 22:50:41.991108 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 22:50:41.991113 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 13 22:50:41.991119 kernel: smp: Bringing up secondary CPUs ...
Jan 13 22:50:41.991124 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 22:50:41.991130 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 13 22:50:41.991136 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
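The mitigation lines earlier (Spectre V1/V2, TAA, MMIO Stale Data, SRBDS, GDS) and the SMT warning just above can be re-read on the live system from sysfs. A minimal sketch:

    # Read the kernel's live mitigation summary for each known CPU
    # vulnerability; these are the same states the boot lines report.
    from pathlib import Path

    for vuln in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{vuln.name}: {vuln.read_text().strip()}")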
Jan 13 22:50:41.991141 kernel: smp: Brought up 1 node, 16 CPUs
Jan 13 22:50:41.991147 kernel: smpboot: Max logical packages: 1
Jan 13 22:50:41.991152 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 13 22:50:41.991157 kernel: devtmpfs: initialized
Jan 13 22:50:41.991163 kernel: x86/mm: Memory block size: 128MB
Jan 13 22:50:41.991170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2b000-0x81b2bfff] (4096 bytes)
Jan 13 22:50:41.991175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 13 22:50:41.991182 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 22:50:41.991207 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 22:50:41.991212 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 22:50:41.991218 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 22:50:41.991237 kernel: audit: initializing netlink subsys (disabled)
Jan 13 22:50:41.991242 kernel: audit: type=2000 audit(1736808636.039:1): state=initialized audit_enabled=0 res=1
Jan 13 22:50:41.991247 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 22:50:41.991253 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 22:50:41.991258 kernel: cpuidle: using governor menu
Jan 13 22:50:41.991264 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 22:50:41.991269 kernel: dca service started, version 1.12.1
Jan 13 22:50:41.991275 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 13 22:50:41.991280 kernel: PCI: Using configuration type 1 for base access
Jan 13 22:50:41.991285 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 13 22:50:41.991290 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 22:50:41.991296 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 22:50:41.991301 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 22:50:41.991307 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 22:50:41.991313 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 22:50:41.991318 kernel: ACPI: Added _OSI(Module Device)
Jan 13 22:50:41.991323 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 22:50:41.991329 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 22:50:41.991334 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 22:50:41.991339 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 13 22:50:41.991345 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:50:41.991350 kernel: ACPI: SSDT 0xFFFF965BC1EC6800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 13 22:50:41.991355 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:50:41.991362 kernel: ACPI: SSDT 0xFFFF965BC1EBF000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 13 22:50:41.991367 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:50:41.991372 kernel: ACPI: SSDT 0xFFFF965BC1568700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 13 22:50:41.991377 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:50:41.991382 kernel: ACPI: SSDT 0xFFFF965BC1EBC800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 13 22:50:41.991388 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:50:41.991393 kernel: ACPI: SSDT 0xFFFF965BC1ECA000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 13 22:50:41.991398 kernel: ACPI: Dynamic OEM Table Load:
Jan 13 22:50:41.991404 kernel: ACPI: SSDT 0xFFFF965BC0E3B000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 13 22:50:41.991410 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 13 22:50:41.991415 kernel: ACPI: Interpreter enabled
Jan 13 22:50:41.991420 kernel: ACPI: PM: (supports S0 S5)
Jan 13 22:50:41.991426 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 22:50:41.991431 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 13 22:50:41.991436 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 13 22:50:41.991441 kernel: HEST: Table parsing has been initialized.
Jan 13 22:50:41.991447 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
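The two HugeTLB vmemmap figures above follow from 64-byte struct page metadata per 4 KiB base page, with one page of metadata kept per huge page. Worked arithmetic reproducing 16380 KiB and 28 KiB:

    # Worked arithmetic behind the HugeTLB vmemmap lines above: each 4 KiB
    # base page carries a 64-byte struct page, and the optimization keeps
    # one 4 KiB metadata page per huge page, freeing the rest.
    STRUCT_PAGE, BASE_PAGE = 64, 4096

    for size, name in ((1 << 30, "1.00 GiB"), (2 << 20, "2.00 MiB")):
        vmemmap = (size // BASE_PAGE) * STRUCT_PAGE
        print(f"{name}: {(vmemmap - BASE_PAGE) // 1024} KiB freeable")  # 16380 / 28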
Jan 13 22:50:41.991452 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 22:50:41.991458 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 22:50:41.991464 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 13 22:50:41.991469 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 13 22:50:41.991474 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 13 22:50:41.991480 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 13 22:50:41.991485 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 13 22:50:41.991490 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 13 22:50:41.991496 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 13 22:50:41.991501 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 13 22:50:41.991508 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 13 22:50:41.991513 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 13 22:50:41.991518 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 13 22:50:41.991523 kernel: ACPI: \PIN_: New power resource
Jan 13 22:50:41.991529 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 13 22:50:41.991598 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 22:50:41.991650 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 13 22:50:41.991696 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 13 22:50:41.991705 kernel: PCI host bridge to bus 0000:00
Jan 13 22:50:41.991754 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 22:50:41.991796 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 22:50:41.991836 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 22:50:41.991877 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 13 22:50:41.991917 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 13 22:50:41.991957 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 13 22:50:41.992014 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 13 22:50:41.992069 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 13 22:50:41.992118 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.992170 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 13 22:50:41.992250 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 13 22:50:41.992300 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 13 22:50:41.992350 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 13 22:50:41.992400 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 13 22:50:41.992447 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 13 22:50:41.992494 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 13 22:50:41.992544 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 13 22:50:41.992590 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 13 22:50:41.992637 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 13 22:50:41.992688 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 13 22:50:41.992734 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 22:50:41.992786 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 13 22:50:41.992831 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 22:50:41.992881 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 13 22:50:41.992928 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 13 22:50:41.992977 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 13 22:50:41.993032 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 13 22:50:41.993080 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 13 22:50:41.993125 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 13 22:50:41.993193 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 13 22:50:41.993256 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 13 22:50:41.993304 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 13 22:50:41.993356 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 13 22:50:41.993403 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 13 22:50:41.993449 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 13 22:50:41.993494 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jan 13 22:50:41.993540 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jan 13 22:50:41.993585 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jan 13 22:50:41.993634 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 13 22:50:41.993679 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 13 22:50:41.993730 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 13 22:50:41.993777 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.993832 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 13 22:50:41.993878 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.993929 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 13 22:50:41.993975 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.994027 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 13 22:50:41.994076 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.994126 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 13 22:50:41.994195 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.994267 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 13 22:50:41.994314 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 13 22:50:41.994363 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 13 22:50:41.994413 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 13 22:50:41.994461 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 13 22:50:41.994509 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 13 22:50:41.994561 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 13 22:50:41.994609 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 13 22:50:41.994661 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 13 22:50:41.994710 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 13 22:50:41.994760 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 13 22:50:41.994807 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 13 22:50:41.994855 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 22:50:41.994902 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 22:50:41.994956 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 13 22:50:41.995003 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 13 22:50:41.995051 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 13 22:50:41.995100 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 13 22:50:41.995148 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 13 22:50:41.995222 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 13 22:50:41.995285 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 22:50:41.995332 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 13 22:50:41.995378 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 13 22:50:41.995426 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jan 13 22:50:41.995477 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Jan 13 22:50:41.995528 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Jan 13 22:50:41.995575 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Jan 13 22:50:41.995623 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Jan 13 22:50:41.995671 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Jan 13 22:50:41.995719 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.995766 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jan 13 22:50:41.995813 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 13 22:50:41.995861 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jan 13 22:50:41.995912 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Jan 13 22:50:41.995961 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Jan 13 22:50:41.996008 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Jan 13 22:50:41.996057 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Jan 13 22:50:41.996103 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Jan 13 22:50:41.996151 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Jan 13 22:50:41.996223 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jan 13 22:50:41.996283 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 13 22:50:41.996330 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jan 13 22:50:41.996376 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jan 13 22:50:41.996428 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Jan 13 22:50:41.996476 kernel: pci 0000:06:00.0: enabling Extended Tags
Jan 13 22:50:41.996525 kernel: pci 0000:06:00.0: supports D1 D2
Jan 13 22:50:41.996571 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 22:50:41.996623 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jan 13 22:50:41.996670 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jan 13 22:50:41.996719 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jan 13 22:50:41.996772 kernel: pci_bus 0000:07: extended config space not accessible
Jan 13 22:50:41.996826 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Jan 13 22:50:41.996876 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Jan 13 22:50:41.996926 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Jan 13 22:50:41.996978 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Jan 13 22:50:41.997028 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 22:50:41.997077 kernel: pci 0000:07:00.0: supports D1 D2
Jan 13 22:50:41.997126 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 22:50:41.997176 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jan 13 22:50:41.997270 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jan 13 22:50:41.997317 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jan 13 22:50:41.997327 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Jan 13 22:50:41.997333 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Jan 13 22:50:41.997339 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Jan 13 22:50:41.997344 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Jan 13 22:50:41.997350 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Jan 13 22:50:41.997356 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Jan 13 22:50:41.997361 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Jan 13 22:50:41.997367 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Jan 13 22:50:41.997373 kernel: iommu: Default domain type: Translated
Jan 13 22:50:41.997379 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 22:50:41.997385 kernel: PCI: Using ACPI for IRQ routing
Jan 13 22:50:41.997391 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 22:50:41.997396 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Jan 13 22:50:41.997402 kernel: e820: reserve RAM buffer [mem 0x81b2b000-0x83ffffff]
Jan 13 22:50:41.997407 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Jan 13 22:50:41.997413 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Jan 13 22:50:41.997418 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Jan 13 22:50:41.997424 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Jan 13 22:50:41.997475 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Jan 13 22:50:41.997524 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Jan 13 22:50:41.997573 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 22:50:41.997581 kernel: vgaarb: loaded
Jan 13 22:50:41.997587 kernel: clocksource: Switched to clocksource tsc-early
Jan 13 22:50:41.997593 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 22:50:41.997599 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 22:50:41.997604 kernel: pnp: PnP ACPI init
Jan 13 22:50:41.997652 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Jan 13 22:50:41.997701 kernel: pnp 00:02: [dma 0 disabled]
Jan 13 22:50:41.997747 kernel: pnp 00:03: [dma 0 disabled]
Jan 13 22:50:41.997795 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Jan 13 22:50:41.997839 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Jan 13 22:50:41.997884 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Jan 13 22:50:41.997930 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Jan 13 22:50:41.997974 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Jan 13 22:50:41.998017 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Jan 13 22:50:41.998059 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Jan 13 22:50:41.998103 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Jan 13 22:50:41.998146 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Jan 13 22:50:41.998215 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Jan 13 22:50:41.998278 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Jan 13 22:50:41.998325 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Jan 13 22:50:41.998368 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Jan 13 22:50:41.998410 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Jan 13 22:50:41.998451 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Jan 13 22:50:41.998492 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Jan 13 22:50:41.998534 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Jan 13 22:50:41.998578 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Jan 13 22:50:41.998623 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Jan 13 22:50:41.998631 kernel: pnp: PnP ACPI: found 10 devices
Jan 13 22:50:41.998637 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 22:50:41.998643 kernel: NET: Registered PF_INET protocol family
Jan 13 22:50:41.998649 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 22:50:41.998655 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jan 13 22:50:41.998662 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 22:50:41.998668 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 22:50:41.998675 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 13 22:50:41.998680 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Jan 13 22:50:41.998686 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 22:50:41.998692 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 22:50:41.998697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 22:50:41.998703 kernel: NET: Registered PF_XDP protocol family
Jan 13 22:50:41.998749 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Jan 13 22:50:41.998796 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Jan 13 22:50:41.998844 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Jan 13 22:50:41.998894 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 13 22:50:41.998944 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 13 22:50:41.998991 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Jan 13 22:50:41.999038 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Jan 13 22:50:41.999085 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 13 22:50:41.999131 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 13 22:50:41.999202 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 13 22:50:41.999272 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Jan 13 22:50:41.999317 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Jan 13 22:50:41.999364 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Jan 13 22:50:41.999409 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Jan 13 22:50:41.999455 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Jan 13 22:50:41.999504 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Jan 13 22:50:41.999550 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Jan 13 22:50:41.999597 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Jan 13 22:50:41.999643 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Jan 13 22:50:41.999691 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Jan 13 22:50:41.999737 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Jan 13 22:50:41.999784 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Jan 13 22:50:41.999830 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Jan 13 22:50:41.999877 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Jan 13 22:50:41.999921 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Jan 13 22:50:41.999963 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 22:50:42.000004 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 22:50:42.000045 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 22:50:42.000085 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Jan 13 22:50:42.000125 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Jan 13 22:50:42.000174 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Jan 13 22:50:42.000262 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Jan 13 22:50:42.000312 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Jan 13 22:50:42.000356 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Jan 13 22:50:42.000404 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 13 22:50:42.000446 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Jan 13 22:50:42.000493 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Jan 13 22:50:42.000537 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Jan 13 22:50:42.000582 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Jan 13 22:50:42.000626 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Jan 13 22:50:42.000634 kernel: PCI: CLS 64 bytes, default 64
Jan 13 22:50:42.000639 kernel: DMAR: No ATSR found
Jan 13 22:50:42.000645 kernel: DMAR: No SATC found
Jan 13 22:50:42.000651 kernel: DMAR: dmar0: Using Queued invalidation
Jan 13 22:50:42.000697 kernel: pci 0000:00:00.0: Adding to iommu group 0
Jan 13 22:50:42.000746 kernel: pci 0000:00:01.0: Adding to iommu group 1
Jan 13 22:50:42.000793 kernel: pci 0000:00:08.0: Adding to iommu group 2
Jan 13 22:50:42.000839 kernel: pci 0000:00:12.0: Adding to iommu group 3
Jan 13 22:50:42.000886 kernel: pci 0000:00:14.0: Adding to iommu group 4
Jan 13 22:50:42.000931 kernel: pci 0000:00:14.2: Adding to iommu group 4
Jan 13 22:50:42.000977 kernel: pci 0000:00:15.0: Adding to iommu group 5
Jan 13 22:50:42.001022 kernel: pci 0000:00:15.1: Adding to iommu group 5
Jan 13 22:50:42.001068 kernel: pci 0000:00:16.0: Adding to iommu group 6
Jan 13 22:50:42.001114 kernel: pci 0000:00:16.1: Adding to iommu group 6
Jan 13 22:50:42.001161 kernel: pci 0000:00:16.4: Adding to iommu group 6
Jan 13 22:50:42.001253 kernel: pci 0000:00:17.0: Adding to iommu group 7
Jan 13 22:50:42.001298 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Jan 13 22:50:42.001345 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Jan 13 22:50:42.001390 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Jan 13 22:50:42.001437 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Jan 13 22:50:42.001482 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Jan 13 22:50:42.001529 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Jan 13 22:50:42.001576 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Jan 13 22:50:42.001623 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Jan 13 22:50:42.001668 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Jan 13 22:50:42.001715 kernel: pci 0000:01:00.0: Adding to iommu group 1
Jan 13 22:50:42.001762 kernel: pci 0000:01:00.1: Adding to iommu group 1
Jan 13 22:50:42.001809 kernel: pci 0000:03:00.0: Adding to iommu group 15
Jan 13 22:50:42.001857 kernel: pci 0000:04:00.0: Adding to iommu group 16
Jan 13 22:50:42.001904 kernel: pci 0000:06:00.0: Adding to iommu group 17
Jan 13 22:50:42.001955 kernel: pci 0000:07:00.0: Adding to iommu group 17
Jan 13 22:50:42.001964 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Jan 13 22:50:42.001969 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 22:50:42.001975 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Jan 13 22:50:42.001981 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Jan 13 22:50:42.001987 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Jan 13 22:50:42.001992 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Jan 13 22:50:42.001998 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Jan 13 22:50:42.002047 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Jan 13 22:50:42.002057 kernel: Initialise system trusted keyrings
Jan 13 22:50:42.002063 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Jan 13 22:50:42.002069 kernel: Key type asymmetric registered
Jan 13 22:50:42.002074 kernel: Asymmetric key parser 'x509' registered
Jan 13 22:50:42.002080 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 22:50:42.002085 kernel: io scheduler mq-deadline registered
Jan 13 22:50:42.002091 kernel: io scheduler kyber registered
Jan 13 22:50:42.002097 kernel: io scheduler bfq registered
Jan 13 22:50:42.002144 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Jan 13 22:50:42.002236 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Jan 13 22:50:42.002284 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Jan 13 22:50:42.002329 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Jan 13 22:50:42.002375 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Jan 13 22:50:42.002422 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Jan 13 22:50:42.002471 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Jan 13 22:50:42.002481 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Jan 13 22:50:42.002487 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Jan 13 22:50:42.002493 kernel: pstore: Using crash dump compression: deflate
Jan 13 22:50:42.002499 kernel: pstore: Registered erst as persistent store backend
Jan 13 22:50:42.002505 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 22:50:42.002510 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 22:50:42.002516 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 22:50:42.002522 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 22:50:42.002527 kernel: hpet_acpi_add: no address or irqs in _CRS
Jan 13 22:50:42.002577 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Jan 13 22:50:42.002586 kernel: i8042: PNP: No PS/2 controller found.
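The "Adding to iommu group N" lines above can be reconstructed from /sys/kernel/iommu_groups, which matters for VFIO passthrough because devices sharing a group (here, both 15b3:1015 NIC functions landed in group 1) can only be assigned together. A minimal sketch:

    # Rebuild the device-to-IOMMU-group map from sysfs.
    from pathlib import Path

    for group in sorted(Path("/sys/kernel/iommu_groups").iterdir(),
                        key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {' '.join(devices)}")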
Jan 13 22:50:42.002628 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Jan 13 22:50:42.002671 kernel: rtc_cmos rtc_cmos: registered as rtc0
Jan 13 22:50:42.002713 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T22:50:40 UTC (1736808640)
Jan 13 22:50:42.002755 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Jan 13 22:50:42.002763 kernel: intel_pstate: Intel P-state driver initializing
Jan 13 22:50:42.002771 kernel: intel_pstate: Disabling energy efficiency optimization
Jan 13 22:50:42.002776 kernel: intel_pstate: HWP enabled
Jan 13 22:50:42.002782 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Jan 13 22:50:42.002788 kernel: vesafb: scrolling: redraw
Jan 13 22:50:42.002793 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Jan 13 22:50:42.002799 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000089463221, using 768k, total 768k
Jan 13 22:50:42.002805 kernel: Console: switching to colour frame buffer device 128x48
Jan 13 22:50:42.002810 kernel: fb0: VESA VGA frame buffer device
Jan 13 22:50:42.002816 kernel: NET: Registered PF_INET6 protocol family
Jan 13 22:50:42.002822 kernel: Segment Routing with IPv6
Jan 13 22:50:42.002829 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 22:50:42.002834 kernel: NET: Registered PF_PACKET protocol family
Jan 13 22:50:42.002840 kernel: Key type dns_resolver registered
Jan 13 22:50:42.002845 kernel: microcode: Microcode Update Driver: v2.2.
Jan 13 22:50:42.002851 kernel: IPI shorthand broadcast: enabled
Jan 13 22:50:42.002857 kernel: sched_clock: Marking stable (2476000680, 1385589175)->(4405521419, -543931564)
Jan 13 22:50:42.002863 kernel: registered taskstats version 1
Jan 13 22:50:42.002868 kernel: Loading compiled-in X.509 certificates
Jan 13 22:50:42.002874 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 22:50:42.002880 kernel: Key type .fscrypt registered
Jan 13 22:50:42.002886 kernel: Key type fscrypt-provisioning registered
Jan 13 22:50:42.002892 kernel: ima: Allocated hash algorithm: sha1
Jan 13 22:50:42.002897 kernel: ima: No architecture policies found
Jan 13 22:50:42.002903 kernel: clk: Disabling unused clocks
Jan 13 22:50:42.002909 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 22:50:42.002914 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 22:50:42.002920 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 22:50:42.002927 kernel: Run /init as init process
Jan 13 22:50:42.002932 kernel: with arguments:
Jan 13 22:50:42.002938 kernel: /init
Jan 13 22:50:42.002943 kernel: with environment:
Jan 13 22:50:42.002949 kernel: HOME=/
Jan 13 22:50:42.002954 kernel: TERM=linux
Jan 13 22:50:42.002960 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 22:50:42.002967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 22:50:42.002975 systemd[1]: Detected architecture x86-64.
Jan 13 22:50:42.002981 systemd[1]: Running in initrd.
Jan 13 22:50:42.002987 systemd[1]: No hostname configured, using default hostname.
Jan 13 22:50:42.002993 systemd[1]: Hostname set to .
Jan 13 22:50:42.002998 systemd[1]: Initializing machine ID from random generator.
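A quick consistency check of the rtc_cmos line above: the bracketed value 1736808640 is the Unix epoch second for the ISO timestamp being set:

    # 1736808640 is the epoch second for the logged UTC time.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1736808640, tz=timezone.utc).isoformat())
    # -> 2025-01-13T22:50:40+00:00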
Jan 13 22:50:42.003004 systemd[1]: Queued start job for default target initrd.target.
Jan 13 22:50:42.003010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:50:42.003016 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:50:42.003023 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 22:50:42.003029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 22:50:42.003035 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 22:50:42.003041 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 22:50:42.003048 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 22:50:42.003054 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 22:50:42.003060 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Jan 13 22:50:42.003067 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Jan 13 22:50:42.003072 kernel: clocksource: Switched to clocksource tsc
Jan 13 22:50:42.003078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:50:42.003084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:50:42.003090 systemd[1]: Reached target paths.target - Path Units.
Jan 13 22:50:42.003096 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 22:50:42.003102 systemd[1]: Reached target swap.target - Swaps.
Jan 13 22:50:42.003108 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 22:50:42.003115 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 22:50:42.003121 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 22:50:42.003127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 22:50:42.003133 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 22:50:42.003139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:50:42.003144 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:50:42.003150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:50:42.003156 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 22:50:42.003162 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 22:50:42.003171 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 22:50:42.003177 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 22:50:42.003183 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 22:50:42.003215 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 22:50:42.003251 systemd-journald[267]: Collecting audit messages is disabled.
Jan 13 22:50:42.003266 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 22:50:42.003272 systemd-journald[267]: Journal started
Jan 13 22:50:42.003285 systemd-journald[267]: Runtime Journal (/run/log/journal/bec9d4e0f2194ff59fcade315701280d) is 8.0M, max 639.9M, 631.9M free.
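The \x2d sequences in the device unit names above come from systemd's unit-name escaping for paths. A rough Python approximation of systemd-escape --path (the real implementation also covers corner cases this sketch ignores):

    # Map "/" to "-" and hex-escape other unsafe characters, which is
    # where the \x2d sequences above come from.
    def escape_path(path: str) -> str:
        out = []
        for i, ch in enumerate(path.strip("/")):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append(f"\\x{ord(ch):02x}")
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as queued above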
Jan 13 22:50:42.017282 systemd-modules-load[268]: Inserted module 'overlay' Jan 13 22:50:42.046172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:50:42.089232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 22:50:42.089265 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 22:50:42.108682 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 22:50:42.126105 kernel: Bridge firewalling registered Jan 13 22:50:42.126087 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 13 22:50:42.126196 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:50:42.176591 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 22:50:42.185469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 22:50:42.202512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:50:42.239492 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:50:42.250785 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 22:50:42.251162 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 22:50:42.251580 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 22:50:42.256230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:50:42.256967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 22:50:42.257079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:50:42.257997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:50:42.259067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 22:50:42.263009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:50:42.265514 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:50:42.283148 systemd-resolved[299]: Positive Trust Anchors: Jan 13 22:50:42.283156 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 22:50:42.283206 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 22:50:42.285490 systemd-resolved[299]: Defaulting to hostname 'linux'. Jan 13 22:50:42.286439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 22:50:42.318780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:50:42.361457 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 13 22:50:42.453968 dracut-cmdline[309]: dracut-dracut-053 Jan 13 22:50:42.461377 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:50:42.665196 kernel: SCSI subsystem initialized Jan 13 22:50:42.688201 kernel: Loading iSCSI transport class v2.0-870. Jan 13 22:50:42.712226 kernel: iscsi: registered transport (tcp) Jan 13 22:50:42.743163 kernel: iscsi: registered transport (qla4xxx) Jan 13 22:50:42.743184 kernel: QLogic iSCSI HBA Driver Jan 13 22:50:42.776506 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 22:50:42.802420 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 22:50:42.860147 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 22:50:42.860166 kernel: device-mapper: uevent: version 1.0.3 Jan 13 22:50:42.879995 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 22:50:42.937245 kernel: raid6: avx2x4 gen() 53085 MB/s Jan 13 22:50:42.969244 kernel: raid6: avx2x2 gen() 53707 MB/s Jan 13 22:50:43.005673 kernel: raid6: avx2x1 gen() 45083 MB/s Jan 13 22:50:43.005692 kernel: raid6: using algorithm avx2x2 gen() 53707 MB/s Jan 13 22:50:43.053730 kernel: raid6: .... xor() 31132 MB/s, rmw enabled Jan 13 22:50:43.053747 kernel: raid6: using avx2x2 recovery algorithm Jan 13 22:50:43.095203 kernel: xor: automatically using best checksumming function avx Jan 13 22:50:43.207181 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 22:50:43.213209 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 22:50:43.241489 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 22:50:43.248208 systemd-udevd[494]: Using default interface naming scheme 'v255'. Jan 13 22:50:43.252250 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 22:50:43.275004 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 22:50:43.331978 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jan 13 22:50:43.347916 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 22:50:43.383547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 22:50:43.440928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 22:50:43.476510 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 13 22:50:43.476543 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 22:50:43.486823 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 22:50:43.507236 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 22:50:43.489283 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 22:50:43.489380 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:50:43.508171 kernel: libata version 3.00 loaded. 
Jan 13 22:50:43.532021 kernel: ACPI: bus type USB registered Jan 13 22:50:43.532046 kernel: usbcore: registered new interface driver usbfs Jan 13 22:50:43.532059 kernel: PTP clock support registered Jan 13 22:50:43.532069 kernel: usbcore: registered new interface driver hub Jan 13 22:50:43.556558 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:50:43.618236 kernel: usbcore: registered new device driver usb Jan 13 22:50:43.618251 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 22:50:43.618259 kernel: AES CTR mode by8 optimization enabled Jan 13 22:50:43.618266 kernel: ahci 0000:00:17.0: version 3.0 Jan 13 22:50:43.999081 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 22:50:43.999174 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 13 22:50:43.999242 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 13 22:50:43.999305 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 13 22:50:43.999365 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 13 22:50:43.999425 kernel: scsi host0: ahci Jan 13 22:50:43.999488 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 22:50:43.999550 kernel: scsi host1: ahci Jan 13 22:50:43.999614 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 13 22:50:43.999675 kernel: scsi host2: ahci Jan 13 22:50:43.999734 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 13 22:50:43.999793 kernel: scsi host3: ahci Jan 13 22:50:43.999851 kernel: hub 1-0:1.0: USB hub found Jan 13 22:50:43.999914 kernel: scsi host4: ahci Jan 13 22:50:43.999972 kernel: hub 1-0:1.0: 16 ports detected Jan 13 22:50:44.000031 kernel: scsi host5: ahci Jan 13 22:50:44.000090 kernel: hub 2-0:1.0: USB hub found Jan 13 22:50:44.000158 kernel: scsi host6: ahci Jan 13 22:50:44.000233 kernel: hub 2-0:1.0: 10 ports detected Jan 13 22:50:44.000300 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 13 22:50:44.000311 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 13 22:50:44.000318 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 13 22:50:44.000325 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 13 22:50:44.000332 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 13 22:50:44.000339 kernel: pps pps0: new PPS source ptp0 Jan 13 22:50:44.000402 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 13 22:50:44.000410 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 13 22:50:44.100293 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 13 22:50:44.100306 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 13 22:50:44.150266 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 22:50:44.150343 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 13 22:50:44.150353 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c2 Jan 13 22:50:44.150418 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 13 22:50:44.150426 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 13 22:50:44.150489 kernel: hub 1-14:1.0: USB hub found Jan 13 22:50:44.150560 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 13 22:50:44.150625 kernel: pps pps1: new PPS source ptp1 Jan 13 22:50:44.150687 kernel: hub 1-14:1.0: 4 ports detected Jan 13 22:50:44.150749 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 13 22:50:44.286544 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 22:50:44.286618 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c3 Jan 13 22:50:44.286684 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 13 22:50:44.286750 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 22:50:43.590580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 22:50:44.498264 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498353 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 22:50:44.498361 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498368 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498378 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498385 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 22:50:44.498392 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 22:50:44.498400 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498409 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 22:50:44.498416 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 13 22:50:44.498435 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 22:50:44.498443 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 22:50:43.590725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 22:50:44.563696 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Jan 13 22:50:44.983319 kernel: ata1.00: Features: NCQ-prio Jan 13 22:50:44.983330 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 22:50:44.983406 kernel: ata2.00: Features: NCQ-prio Jan 13 22:50:44.983415 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 22:50:44.983423 kernel: ata1.00: configured for UDMA/133 Jan 13 22:50:44.983430 kernel: ata2.00: configured for UDMA/133 Jan 13 22:50:44.983437 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 22:50:45.297665 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 22:50:45.297745 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 13 22:50:45.297816 kernel: usbcore: registered new interface driver usbhid Jan 13 22:50:45.297825 kernel: usbhid: USB HID core driver Jan 13 22:50:45.297832 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 13 22:50:45.297840 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 13 22:50:45.297903 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.297913 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 13 22:50:45.297985 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:50:45.297993 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 22:50:45.298052 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 22:50:45.298110 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 13 22:50:45.298165 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 13 22:50:45.298259 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 13 22:50:45.298317 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 22:50:45.298374 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 13 22:50:45.298430 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:50:45.298438 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 13 22:50:45.298493 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 22:50:45.298555 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 13 22:50:45.298615 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 13 22:50:45.298624 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 13 22:50:45.298682 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 13 22:50:45.298750 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 13 22:50:45.298807 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 22:50:45.298867 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 13 22:50:45.298924 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 22:50:45.298980 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Jan 13 22:50:45.583091 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 13 22:50:45.583194 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 22:50:45.583281 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.583290 kernel: 
GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 22:50:45.583298 kernel: GPT:9289727 != 937703087 Jan 13 22:50:45.583305 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 22:50:45.583312 kernel: GPT:9289727 != 937703087 Jan 13 22:50:45.583318 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 22:50:45.583325 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:45.583334 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 13 22:50:45.583396 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 22:50:45.583459 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 13 22:50:45.583519 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (567) Jan 13 22:50:45.583528 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (574) Jan 13 22:50:45.583535 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.583542 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:45.583549 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 22:50:45.583611 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.583618 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:43.668267 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:50:45.615294 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:44.451356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:50:45.636245 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:44.531631 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 22:50:45.657277 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 13 22:50:44.565709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 22:50:44.607765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 22:50:45.702280 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 13 22:50:45.702404 disk-uuid[712]: Primary Header is updated. Jan 13 22:50:45.702404 disk-uuid[712]: Secondary Entries is updated. Jan 13 22:50:45.702404 disk-uuid[712]: Secondary Header is updated. Jan 13 22:50:44.627446 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 22:50:44.668334 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 22:50:44.721623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:50:44.761356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 22:50:45.332320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:50:45.351057 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 13 22:50:45.399419 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:50:45.433000 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 13 22:50:45.448291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 13 22:50:45.464581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. 
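The GPT warnings above are the expected signature of a small disk image written to a larger disk: the backup GPT header still sits at the sector where the image ended (9289727) rather than at the last sector of the 480 GB device (937703087). Flatcar normally repairs this itself when it grows the ROOT partition on first boot; a manual fix, sketched here with sgdisk (parted works too, as the kernel message suggests), would be:

    # move the backup GPT data structures to the true end of the disk
    sgdisk --move-second-header /dev/sdb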
Jan 13 22:50:45.476294 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 22:50:45.497492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 22:50:46.620624 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:46.640058 disk-uuid[713]: The operation has completed successfully. Jan 13 22:50:46.649416 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:46.675866 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 22:50:46.675913 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 22:50:46.716595 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 22:50:46.754283 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 22:50:46.754298 sh[735]: Success Jan 13 22:50:46.789951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 22:50:46.810039 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 22:50:46.819583 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 22:50:46.872139 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 22:50:46.872194 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:46.893251 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 22:50:46.912139 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 22:50:46.929939 kernel: BTRFS info (device dm-0): using free space tree Jan 13 22:50:46.967175 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 22:50:46.970474 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 22:50:46.979701 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 22:50:46.989282 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 22:50:47.120526 kernel: BTRFS info (device sdb6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:47.120540 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:47.120547 kernel: BTRFS info (device sdb6): using free space tree Jan 13 22:50:47.120554 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 22:50:47.120562 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 22:50:47.120571 kernel: BTRFS info (device sdb6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:47.132497 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 22:50:47.132929 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 22:50:47.170368 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 22:50:47.185318 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 22:50:47.190363 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
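The verity-setup step maps /dev/mapper/usr with dm-verity, checking every read of the /usr partition against the hash tree whose root hash was passed as verity.usrhash= on the kernel command line. Once the system is up, the mapping can be inspected, assuming the veritysetup tool from cryptsetup is available:

    # show the dm-verity parameters and status of the usr mapping
    veritysetup status usr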
Jan 13 22:50:47.233959 ignition[877]: Ignition 2.19.0 Jan 13 22:50:47.233964 ignition[877]: Stage: fetch-offline Jan 13 22:50:47.236182 unknown[877]: fetched base config from "system" Jan 13 22:50:47.233988 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:47.236186 unknown[877]: fetched user config from "system" Jan 13 22:50:47.233993 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:47.237061 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 22:50:47.234052 ignition[877]: parsed url from cmdline: "" Jan 13 22:50:47.240158 systemd-networkd[918]: lo: Link UP Jan 13 22:50:47.234053 ignition[877]: no config URL provided Jan 13 22:50:47.240160 systemd-networkd[918]: lo: Gained carrier Jan 13 22:50:47.234056 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 22:50:47.242452 systemd-networkd[918]: Enumeration completed Jan 13 22:50:47.234079 ignition[877]: parsing config with SHA512: e1e4e4faaadf6e31531ab2a79780ff859a84f4f3d183a50e6f5da4051c7fa709de68b2a371ce7285312a9fba18e016bb27a68170aec49d49884c91338bbf886d Jan 13 22:50:47.242525 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 22:50:47.236398 ignition[877]: fetch-offline: fetch-offline passed Jan 13 22:50:47.243207 systemd-networkd[918]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 22:50:47.236401 ignition[877]: POST message to Packet Timeline Jan 13 22:50:47.254500 systemd[1]: Reached target network.target - Network. Jan 13 22:50:47.236403 ignition[877]: POST Status error: resource requires networking Jan 13 22:50:47.271192 systemd-networkd[918]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 22:50:47.236439 ignition[877]: Ignition finished successfully Jan 13 22:50:47.285513 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 22:50:47.303240 ignition[932]: Ignition 2.19.0 Jan 13 22:50:47.291417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 22:50:47.303246 ignition[932]: Stage: kargs Jan 13 22:50:47.299357 systemd-networkd[918]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 22:50:47.303413 ignition[932]: no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:47.303423 ignition[932]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:47.513331 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 13 22:50:47.504436 systemd-networkd[918]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
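In the fetch-offline stage above, Ignition finds no config URL on the command line, falls back to the baked-in system config at /usr/lib/ignition/user.ign, and logs the SHA512 of the bytes it parsed. As an illustrative cross-check, assuming the file is accessible, the logged digest should match the file on disk:

    # compare against the SHA512 Ignition printed while parsing
    sha512sum /usr/lib/ignition/user.ign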
Jan 13 22:50:47.304228 ignition[932]: kargs: kargs passed Jan 13 22:50:47.304232 ignition[932]: POST message to Packet Timeline Jan 13 22:50:47.304244 ignition[932]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:47.304920 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51641->[::1]:53: read: connection refused Jan 13 22:50:47.505513 ignition[932]: GET https://metadata.packet.net/metadata: attempt #2 Jan 13 22:50:47.505981 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49991->[::1]:53: read: connection refused Jan 13 22:50:47.744288 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 13 22:50:47.745281 systemd-networkd[918]: eno1: Link UP Jan 13 22:50:47.745461 systemd-networkd[918]: eno2: Link UP Jan 13 22:50:47.745590 systemd-networkd[918]: enp1s0f0np0: Link UP Jan 13 22:50:47.745740 systemd-networkd[918]: enp1s0f0np0: Gained carrier Jan 13 22:50:47.756440 systemd-networkd[918]: enp1s0f1np1: Link UP Jan 13 22:50:47.795443 systemd-networkd[918]: enp1s0f0np0: DHCPv4 address 147.28.180.253/31, gateway 147.28.180.252 acquired from 145.40.83.140 Jan 13 22:50:47.906307 ignition[932]: GET https://metadata.packet.net/metadata: attempt #3 Jan 13 22:50:47.907357 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56441->[::1]:53: read: connection refused Jan 13 22:50:48.561948 systemd-networkd[918]: enp1s0f1np1: Gained carrier Jan 13 22:50:48.707805 ignition[932]: GET https://metadata.packet.net/metadata: attempt #4 Jan 13 22:50:48.708843 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34355->[::1]:53: read: connection refused Jan 13 22:50:48.753592 systemd-networkd[918]: enp1s0f0np0: Gained IPv6LL Jan 13 22:50:49.649770 systemd-networkd[918]: enp1s0f1np1: Gained IPv6LL Jan 13 22:50:50.309319 ignition[932]: GET https://metadata.packet.net/metadata: attempt #5 Jan 13 22:50:50.310418 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54516->[::1]:53: read: connection refused Jan 13 22:50:53.513160 ignition[932]: GET https://metadata.packet.net/metadata: attempt #6 Jan 13 22:50:54.157848 ignition[932]: GET result: OK Jan 13 22:50:54.515960 ignition[932]: Ignition finished successfully Jan 13 22:50:54.519099 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 22:50:54.548469 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 22:50:54.556001 ignition[950]: Ignition 2.19.0 Jan 13 22:50:54.556005 ignition[950]: Stage: disks Jan 13 22:50:54.556105 ignition[950]: no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:54.556110 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:54.556660 ignition[950]: disks: disks passed Jan 13 22:50:54.556662 ignition[950]: POST message to Packet Timeline Jan 13 22:50:54.556670 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:55.339604 ignition[950]: GET result: OK Jan 13 22:50:55.686574 ignition[950]: Ignition finished successfully Jan 13 22:50:55.688449 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 22:50:55.705407 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
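The repeated GET errors in the kargs stage are name-resolution failures rather than metadata-service outages: until systemd-networkd brings a link up and DHCP assigns an address (147.28.180.253 above), the lookup of metadata.packet.net falls through to the default resolver at [::1]:53, where nothing is listening yet, so Ignition retries with backoff until it logs "GET result: OK". From a booted host the same endpoint can be queried directly (illustrative):

    # fetch the Equinix Metal (Packet) instance metadata
    curl -sSf https://metadata.packet.net/metadata | head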
Jan 13 22:50:55.724592 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 22:50:55.734750 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 22:50:55.756736 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 22:50:55.784473 systemd[1]: Reached target basic.target - Basic System. Jan 13 22:50:55.806468 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 22:50:55.845536 systemd-fsck[970]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 22:50:55.856612 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 22:50:55.875242 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 22:50:55.975173 kernel: EXT4-fs (sdb9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 22:50:55.975137 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 22:50:55.984667 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 22:50:56.019350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 22:50:56.028271 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 22:50:56.151253 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (980) Jan 13 22:50:56.151339 kernel: BTRFS info (device sdb6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:56.151347 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:56.151355 kernel: BTRFS info (device sdb6): using free space tree Jan 13 22:50:56.151362 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 22:50:56.151369 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 22:50:56.049119 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 22:50:56.162556 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 13 22:50:56.185278 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 22:50:56.231429 coreos-metadata[982]: Jan 13 22:50:56.207 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:50:56.185298 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 22:50:56.263331 coreos-metadata[998]: Jan 13 22:50:56.207 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:50:56.186308 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 22:50:56.212479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 22:50:56.301311 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 22:50:56.262494 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 22:50:56.321297 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory Jan 13 22:50:56.331284 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 22:50:56.341270 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 22:50:56.348717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 22:50:56.385675 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 22:50:56.396419 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 13 22:50:56.448220 kernel: BTRFS info (device sdb6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:56.430661 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 22:50:56.456307 ignition[1100]: INFO : Ignition 2.19.0 Jan 13 22:50:56.456307 ignition[1100]: INFO : Stage: mount Jan 13 22:50:56.456307 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:56.456307 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:56.456307 ignition[1100]: INFO : mount: mount passed Jan 13 22:50:56.456307 ignition[1100]: INFO : POST message to Packet Timeline Jan 13 22:50:56.456307 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:56.457209 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 22:50:56.836785 coreos-metadata[982]: Jan 13 22:50:56.836 INFO Fetch successful Jan 13 22:50:56.907423 coreos-metadata[998]: Jan 13 22:50:56.907 INFO Fetch successful Jan 13 22:50:56.915274 coreos-metadata[982]: Jan 13 22:50:56.914 INFO wrote hostname ci-4081.3.0-a-66cd838664 to /sysroot/etc/hostname Jan 13 22:50:56.915756 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 22:50:56.938578 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 13 22:50:56.938621 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 13 22:50:57.143557 ignition[1100]: INFO : GET result: OK Jan 13 22:50:57.484066 ignition[1100]: INFO : Ignition finished successfully Jan 13 22:50:57.485419 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 22:50:57.517394 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 22:50:57.528317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 22:50:57.590798 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1126) Jan 13 22:50:57.590817 kernel: BTRFS info (device sdb6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:57.609820 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:57.626723 kernel: BTRFS info (device sdb6): using free space tree Jan 13 22:50:57.663331 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 22:50:57.663353 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 22:50:57.676055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 22:50:57.707797 ignition[1143]: INFO : Ignition 2.19.0 Jan 13 22:50:57.707797 ignition[1143]: INFO : Stage: files Jan 13 22:50:57.722408 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:57.722408 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:57.722408 ignition[1143]: DEBUG : files: compiled without relabeling support, skipping Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 22:50:57.722408 ignition[1143]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 22:50:57.711954 unknown[1143]: wrote ssh authorized keys file for user: core Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:58.103515 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 22:50:58.406737 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 22:50:59.239713 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:59.239713 ignition[1143]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: files passed Jan 13 22:50:59.270484 ignition[1143]: INFO : POST message to Packet Timeline Jan 13 22:50:59.270484 ignition[1143]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:59.872205 ignition[1143]: INFO : GET result: OK Jan 13 22:51:00.254615 ignition[1143]: INFO : Ignition finished successfully Jan 13 22:51:00.257699 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 22:51:00.288454 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 22:51:00.288856 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 22:51:00.306701 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 22:51:00.306767 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 22:51:00.339809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 22:51:00.359549 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 22:51:00.401440 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 22:51:00.401440 initrd-setup-root-after-ignition[1180]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 22:51:00.396620 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
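Among the files written above is the symlink /etc/extensions/kubernetes.raw pointing at the downloaded sysext image; this is the systemd-sysext mechanism Flatcar uses to overlay the bundled Kubernetes build onto /usr. After switch-root, the merged extensions can be listed (illustrative):

    # show which system extension images are currently merged
    systemd-sysext status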
Jan 13 22:51:00.452442 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 22:51:00.469268 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 22:51:00.469315 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 22:51:00.487560 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 22:51:00.508381 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 22:51:00.528487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 22:51:00.546605 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 22:51:00.614753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 22:51:00.649828 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 22:51:00.668839 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:51:00.672449 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 22:51:00.704576 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 22:51:00.723586 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 22:51:00.723744 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 22:51:00.751126 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 22:51:00.772894 systemd[1]: Stopped target basic.target - Basic System. Jan 13 22:51:00.791900 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 22:51:00.809877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 22:51:00.820055 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 22:51:00.851878 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 22:51:00.871885 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 22:51:00.893909 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 22:51:00.915910 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 22:51:00.926057 systemd[1]: Stopped target swap.target - Swaps. Jan 13 22:51:00.950750 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 22:51:00.951158 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 22:51:00.975983 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 22:51:00.996909 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 22:51:01.017761 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 22:51:01.018218 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:51:01.039787 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 22:51:01.040213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 22:51:01.070837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 22:51:01.071306 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 22:51:01.091089 systemd[1]: Stopped target paths.target - Path Units. Jan 13 22:51:01.109733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 13 22:51:01.110230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 22:51:01.120161 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 22:51:01.140053 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 22:51:01.164867 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 22:51:01.165191 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 22:51:01.185917 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 22:51:01.186249 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 22:51:01.208945 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 22:51:01.322328 ignition[1205]: INFO : Ignition 2.19.0 Jan 13 22:51:01.322328 ignition[1205]: INFO : Stage: umount Jan 13 22:51:01.322328 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 22:51:01.322328 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:51:01.322328 ignition[1205]: INFO : umount: umount passed Jan 13 22:51:01.322328 ignition[1205]: INFO : POST message to Packet Timeline Jan 13 22:51:01.322328 ignition[1205]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:51:01.209374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 22:51:01.227965 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 22:51:01.228383 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 22:51:01.245962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 22:51:01.246380 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 22:51:01.279468 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 22:51:01.294952 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 22:51:01.304644 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 22:51:01.305054 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 22:51:01.333461 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 22:51:01.333604 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 22:51:01.369636 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 22:51:01.369738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 22:51:01.384662 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 22:51:01.448245 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 22:51:01.448315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 22:51:01.936925 ignition[1205]: INFO : GET result: OK Jan 13 22:51:02.711842 ignition[1205]: INFO : Ignition finished successfully Jan 13 22:51:02.715062 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 22:51:02.715372 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 22:51:02.733427 systemd[1]: Stopped target network.target - Network. Jan 13 22:51:02.749447 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 22:51:02.749642 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 22:51:02.767521 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 22:51:02.767655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 13 22:51:02.785590 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 22:51:02.785748 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 22:51:02.804568 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 22:51:02.804734 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 22:51:02.823571 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 22:51:02.823738 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 22:51:02.842994 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 22:51:02.858330 systemd-networkd[918]: enp1s0f1np1: DHCPv6 lease lost Jan 13 22:51:02.860642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 22:51:02.872405 systemd-networkd[918]: enp1s0f0np0: DHCPv6 lease lost Jan 13 22:51:02.879561 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 22:51:02.879968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 22:51:02.898861 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 22:51:02.899320 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 22:51:02.919032 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 22:51:02.919278 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 22:51:02.954421 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 22:51:02.977339 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 22:51:02.977402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 22:51:02.996467 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 22:51:02.996557 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:51:03.016559 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 22:51:03.016718 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 22:51:03.034558 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 22:51:03.034726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:51:03.054792 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 22:51:03.077461 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 22:51:03.077833 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 22:51:03.110720 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 22:51:03.110763 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 22:51:03.134431 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 22:51:03.134474 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 22:51:03.154357 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 22:51:03.154423 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 22:51:03.195397 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 22:51:03.195571 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 22:51:03.226579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 22:51:03.226726 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 22:51:03.280468 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 22:51:03.281422 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 22:51:03.281448 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:51:03.326326 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 22:51:03.326375 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:51:03.347369 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 22:51:03.567349 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Jan 13 22:51:03.347460 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:51:03.368593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 22:51:03.368734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:51:03.390780 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 22:51:03.391042 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 22:51:03.408474 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 22:51:03.408812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 22:51:03.430578 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 22:51:03.459733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 22:51:03.500102 systemd[1]: Switching root. Jan 13 22:51:03.661374 systemd-journald[267]: Journal stopped Jan 13 22:50:41.989760 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27 Jan 13 22:50:41.989774 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 22:50:41.989780 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:50:41.989786 kernel: BIOS-provided physical RAM map: Jan 13 22:50:41.989790 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jan 13 22:50:41.989793 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jan 13 22:50:41.989798 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jan 13 22:50:41.989802 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jan 13 22:50:41.989806 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jan 13 22:50:41.989810 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b2afff] usable Jan 13 22:50:41.989814 kernel: BIOS-e820: [mem 0x0000000081b2b000-0x0000000081b2bfff] ACPI NVS Jan 13 22:50:41.989819 kernel: BIOS-e820: [mem 0x0000000081b2c000-0x0000000081b2cfff] reserved Jan 13 22:50:41.989823 kernel: BIOS-e820: [mem 0x0000000081b2d000-0x000000008afccfff] usable Jan 13 22:50:41.989827 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jan 13 22:50:41.989832 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jan 13 
22:50:41.989837 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jan 13 22:50:41.989842 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jan 13 22:50:41.989847 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jan 13 22:50:41.989851 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jan 13 22:50:41.989856 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 13 22:50:41.989860 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jan 13 22:50:41.989865 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jan 13 22:50:41.989869 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 13 22:50:41.989874 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jan 13 22:50:41.989878 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jan 13 22:50:41.989883 kernel: NX (Execute Disable) protection: active Jan 13 22:50:41.989887 kernel: APIC: Static calls initialized Jan 13 22:50:41.989892 kernel: SMBIOS 3.2.1 present. Jan 13 22:50:41.989897 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Jan 13 22:50:41.989902 kernel: tsc: Detected 3400.000 MHz processor Jan 13 22:50:41.989906 kernel: tsc: Detected 3399.906 MHz TSC Jan 13 22:50:41.989911 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 22:50:41.989916 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 22:50:41.989921 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jan 13 22:50:41.989925 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jan 13 22:50:41.989930 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 22:50:41.989935 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jan 13 22:50:41.989940 kernel: Using GB pages for direct mapping Jan 13 22:50:41.989945 kernel: ACPI: Early table checksum verification disabled Jan 13 22:50:41.989950 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jan 13 22:50:41.989957 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jan 13 22:50:41.989962 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jan 13 22:50:41.989967 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jan 13 22:50:41.989972 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jan 13 22:50:41.989978 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jan 13 22:50:41.989983 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jan 13 22:50:41.989988 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jan 13 22:50:41.989993 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jan 13 22:50:41.989998 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
Jan 13 22:50:41.990003 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 13 22:50:41.990008 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 13 22:50:41.990014 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 13 22:50:41.990019 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 22:50:41.990024 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 13 22:50:41.990029 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 13 22:50:41.990033 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 22:50:41.990038 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 22:50:41.990043 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 13 22:50:41.990048 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 13 22:50:41.990053 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 13 22:50:41.990059 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 13 22:50:41.990064 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 13 22:50:41.990069 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jan 13 22:50:41.990074 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 13 22:50:41.990079 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 13 22:50:41.990084 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 13 22:50:41.990089 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jan 13 22:50:41.990094 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 13 22:50:41.990099 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 13 22:50:41.990104 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 13 22:50:41.990109 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI.
00000000) Jan 13 22:50:41.990114 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 13 22:50:41.990119 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jan 13 22:50:41.990124 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jan 13 22:50:41.990129 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jan 13 22:50:41.990134 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jan 13 22:50:41.990139 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jan 13 22:50:41.990145 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jan 13 22:50:41.990150 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jan 13 22:50:41.990155 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jan 13 22:50:41.990160 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jan 13 22:50:41.990165 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jan 13 22:50:41.990176 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jan 13 22:50:41.990182 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jan 13 22:50:41.990205 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jan 13 22:50:41.990210 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jan 13 22:50:41.990216 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jan 13 22:50:41.990234 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jan 13 22:50:41.990239 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jan 13 22:50:41.990244 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jan 13 22:50:41.990249 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jan 13 22:50:41.990254 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jan 13 22:50:41.990259 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jan 13 22:50:41.990264 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jan 13 22:50:41.990269 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jan 13 22:50:41.990274 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jan 13 22:50:41.990279 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jan 13 22:50:41.990284 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jan 13 22:50:41.990289 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jan 13 22:50:41.990294 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jan 13 22:50:41.990299 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jan 13 22:50:41.990304 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jan 13 22:50:41.990309 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jan 13 22:50:41.990314 kernel: No NUMA configuration found Jan 13 22:50:41.990319 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jan 13 22:50:41.990325 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jan 13 22:50:41.990330 kernel: Zone ranges: Jan 13 22:50:41.990335 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 22:50:41.990340 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 
22:50:41.990345 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jan 13 22:50:41.990350 kernel: Movable zone start for each node Jan 13 22:50:41.990355 kernel: Early memory node ranges Jan 13 22:50:41.990359 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 13 22:50:41.990364 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 13 22:50:41.990370 kernel: node 0: [mem 0x0000000040400000-0x0000000081b2afff] Jan 13 22:50:41.990375 kernel: node 0: [mem 0x0000000081b2d000-0x000000008afccfff] Jan 13 22:50:41.990380 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jan 13 22:50:41.990385 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jan 13 22:50:41.990394 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jan 13 22:50:41.990399 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jan 13 22:50:41.990405 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 22:50:41.990410 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 13 22:50:41.990416 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 13 22:50:41.990422 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 13 22:50:41.990427 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jan 13 22:50:41.990432 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jan 13 22:50:41.990437 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jan 13 22:50:41.990443 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jan 13 22:50:41.990448 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 13 22:50:41.990453 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 13 22:50:41.990459 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 13 22:50:41.990465 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 13 22:50:41.990470 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 13 22:50:41.990475 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 13 22:50:41.990481 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 13 22:50:41.990486 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 13 22:50:41.990491 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 13 22:50:41.990496 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 13 22:50:41.990502 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 13 22:50:41.990507 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 13 22:50:41.990513 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 13 22:50:41.990518 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 13 22:50:41.990524 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 13 22:50:41.990529 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 13 22:50:41.990534 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 13 22:50:41.990539 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 13 22:50:41.990545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 22:50:41.990550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 22:50:41.990555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 22:50:41.990561 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 22:50:41.990567 kernel: TSC deadline timer available Jan 13 22:50:41.990572 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 13 22:50:41.990577 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Jan 13 22:50:41.990583 kernel: Booting paravirtualized kernel on bare hardware Jan 13 22:50:41.990588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 22:50:41.990594 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 13 22:50:41.990599 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 13 22:50:41.990604 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 13 22:50:41.990609 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 13 22:50:41.990616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:50:41.990622 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 22:50:41.990627 kernel: random: crng init done Jan 13 22:50:41.990632 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 13 22:50:41.990638 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 13 22:50:41.990643 kernel: Fallback order for Node 0: 0 Jan 13 22:50:41.990648 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jan 13 22:50:41.990654 kernel: Policy zone: Normal Jan 13 22:50:41.990660 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 22:50:41.990665 kernel: software IO TLB: area num 16. Jan 13 22:50:41.990671 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 732412K reserved, 0K cma-reserved) Jan 13 22:50:41.990676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 13 22:50:41.990681 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 22:50:41.990687 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 22:50:41.990692 kernel: Dynamic Preempt: voluntary Jan 13 22:50:41.990698 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 22:50:41.990704 kernel: rcu: RCU event tracing is enabled. Jan 13 22:50:41.990710 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 13 22:50:41.990715 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 22:50:41.990720 kernel: Rude variant of Tasks RCU enabled. Jan 13 22:50:41.990726 kernel: Tracing variant of Tasks RCU enabled. Jan 13 22:50:41.990731 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 22:50:41.990736 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 13 22:50:41.990742 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 13 22:50:41.990747 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 22:50:41.990752 kernel: Console: colour dummy device 80x25 Jan 13 22:50:41.990758 kernel: printk: console [tty0] enabled Jan 13 22:50:41.990764 kernel: printk: console [ttyS1] enabled Jan 13 22:50:41.990769 kernel: ACPI: Core revision 20230628 Jan 13 22:50:41.990774 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
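
The "Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)" line above encodes straightforward sizing math: one 8-byte bucket per entry, rounded to a power-of-two number of 4 KiB pages. A minimal sketch (Python; the page and bucket sizes are x86_64 assumptions, not read from the log) reproducing it:

    # Reproduce the hash-table sizing from the boot line above:
    #   4194304 entries -> 33554432 bytes -> 8192 pages -> order 13.
    PAGE_SIZE = 4096      # assumed x86_64 page size
    BUCKET_SIZE = 8       # assumed pointer-sized hash bucket
    entries = 4194304
    table_bytes = entries * BUCKET_SIZE        # 33554432 bytes
    pages = table_bytes // PAGE_SIZE           # 8192 pages
    order = pages.bit_length() - 1             # 2**13 pages -> "order: 13"
    print(table_bytes, pages, order)           # 33554432 8192 13
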
Jan 13 22:50:41.990780 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 22:50:41.990785 kernel: DMAR: Host address width 39 Jan 13 22:50:41.990790 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 13 22:50:41.990796 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 13 22:50:41.990801 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 13 22:50:41.990807 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 13 22:50:41.990813 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 13 22:50:41.990818 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 13 22:50:41.990823 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 13 22:50:41.990829 kernel: x2apic enabled Jan 13 22:50:41.990834 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 13 22:50:41.990839 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 13 22:50:41.990845 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 13 22:50:41.990850 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 13 22:50:41.990856 kernel: process: using mwait in idle threads Jan 13 22:50:41.990862 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 13 22:50:41.990867 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 13 22:50:41.990872 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 22:50:41.990877 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 22:50:41.990882 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 22:50:41.990888 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 13 22:50:41.990893 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 22:50:41.990898 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 13 22:50:41.990903 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 13 22:50:41.990908 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 22:50:41.990915 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 22:50:41.990920 kernel: TAA: Mitigation: TSX disabled Jan 13 22:50:41.990925 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 13 22:50:41.990931 kernel: SRBDS: Mitigation: Microcode Jan 13 22:50:41.990936 kernel: GDS: Mitigation: Microcode Jan 13 22:50:41.990941 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 22:50:41.990946 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 22:50:41.990952 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 22:50:41.990957 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 13 22:50:41.990962 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 13 22:50:41.990967 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 22:50:41.990973 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 13 22:50:41.990979 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 13 22:50:41.990984 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
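
The mitigation lines above (Spectre V1/V2, TAA, MMIO Stale Data, SRBDS, GDS, ...) remain queryable after boot through the standard sysfs directory /sys/devices/system/cpu/vulnerabilities/. A minimal sketch that dumps the same status on a running machine:

    #!/usr/bin/env python3
    # Print the kernel's per-vulnerability mitigation status; each file
    # holds one line such as "Mitigation: Enhanced / Automatic IBRS".
    from pathlib import Path

    VULN = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(VULN.iterdir()):
        print(f"{entry.name:28s} {entry.read_text().strip()}")
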
Jan 13 22:50:41.990989 kernel: Freeing SMP alternatives memory: 32K Jan 13 22:50:41.990994 kernel: pid_max: default: 32768 minimum: 301 Jan 13 22:50:41.991000 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 22:50:41.991005 kernel: landlock: Up and running. Jan 13 22:50:41.991010 kernel: SELinux: Initializing. Jan 13 22:50:41.991016 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 22:50:41.991021 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 22:50:41.991026 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 13 22:50:41.991032 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 22:50:41.991038 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 22:50:41.991043 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 13 22:50:41.991049 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 13 22:50:41.991054 kernel: ... version: 4 Jan 13 22:50:41.991059 kernel: ... bit width: 48 Jan 13 22:50:41.991064 kernel: ... generic registers: 4 Jan 13 22:50:41.991070 kernel: ... value mask: 0000ffffffffffff Jan 13 22:50:41.991075 kernel: ... max period: 00007fffffffffff Jan 13 22:50:41.991081 kernel: ... fixed-purpose events: 3 Jan 13 22:50:41.991087 kernel: ... event mask: 000000070000000f Jan 13 22:50:41.991092 kernel: signal: max sigframe size: 2032 Jan 13 22:50:41.991097 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 13 22:50:41.991103 kernel: rcu: Hierarchical SRCU implementation. Jan 13 22:50:41.991108 kernel: rcu: Max phase no-delay instances is 400. Jan 13 22:50:41.991113 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 13 22:50:41.991119 kernel: smp: Bringing up secondary CPUs ... Jan 13 22:50:41.991124 kernel: smpboot: x86: Booting SMP configuration: Jan 13 22:50:41.991130 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 13 22:50:41.991136 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
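
The SMP bring-up above ends with 16 logical CPUs online; the same information is exported as a range list (for example "0-15") in /sys/devices/system/cpu/online. A short sketch, where parse_cpu_list is an illustrative helper for the kernel's range-list syntax, not a kernel API:

    # Expand a kernel CPU range list such as "0-15" or "0-3,8-11".
    from pathlib import Path

    def parse_cpu_list(text: str) -> list[int]:
        # Illustrative helper: split on commas, expand "lo-hi" ranges.
        cpus: list[int] = []
        for part in text.strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.extend(range(int(lo), int(hi or lo) + 1))
        return cpus

    online = parse_cpu_list(Path("/sys/devices/system/cpu/online").read_text())
    print(f"{len(online)} CPUs online:", online)   # e.g. 16 CPUs online: [0, ..., 15]
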
Jan 13 22:50:41.991141 kernel: smp: Brought up 1 node, 16 CPUs Jan 13 22:50:41.991147 kernel: smpboot: Max logical packages: 1 Jan 13 22:50:41.991152 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 13 22:50:41.991157 kernel: devtmpfs: initialized Jan 13 22:50:41.991163 kernel: x86/mm: Memory block size: 128MB Jan 13 22:50:41.991170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b2b000-0x81b2bfff] (4096 bytes) Jan 13 22:50:41.991175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 13 22:50:41.991182 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 22:50:41.991207 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 13 22:50:41.991212 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 22:50:41.991218 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 22:50:41.991237 kernel: audit: initializing netlink subsys (disabled) Jan 13 22:50:41.991242 kernel: audit: type=2000 audit(1736808636.039:1): state=initialized audit_enabled=0 res=1 Jan 13 22:50:41.991247 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 22:50:41.991253 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 22:50:41.991258 kernel: cpuidle: using governor menu Jan 13 22:50:41.991264 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 22:50:41.991269 kernel: dca service started, version 1.12.1 Jan 13 22:50:41.991275 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 13 22:50:41.991280 kernel: PCI: Using configuration type 1 for base access Jan 13 22:50:41.991285 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 13 22:50:41.991290 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
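
The audit record above carries its own timestamp in the form audit(epoch-seconds.millis:serial); decoding 1736808636.039 places it a few seconds before the RTC line later in this log (2025-01-13T22:50:40 UTC). A quick sketch:

    # Decode "audit(1736808636.039:1)": epoch seconds.millis, then a serial.
    from datetime import datetime, timezone

    stamp = "1736808636.039:1"
    epoch, serial = stamp.split(":")
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(when.isoformat(), "serial", serial)
    # -> 2025-01-13T22:50:36.039000+00:00 serial 1
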
Jan 13 22:50:41.991296 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 22:50:41.991301 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 22:50:41.991307 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 22:50:41.991313 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 22:50:41.991318 kernel: ACPI: Added _OSI(Module Device) Jan 13 22:50:41.991323 kernel: ACPI: Added _OSI(Processor Device) Jan 13 22:50:41.991329 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 22:50:41.991334 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 22:50:41.991339 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 13 22:50:41.991345 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:50:41.991350 kernel: ACPI: SSDT 0xFFFF965BC1EC6800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 13 22:50:41.991355 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:50:41.991362 kernel: ACPI: SSDT 0xFFFF965BC1EBF000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 13 22:50:41.991367 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:50:41.991372 kernel: ACPI: SSDT 0xFFFF965BC1568700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 13 22:50:41.991377 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:50:41.991382 kernel: ACPI: SSDT 0xFFFF965BC1EBC800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 13 22:50:41.991388 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:50:41.991393 kernel: ACPI: SSDT 0xFFFF965BC1ECA000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 13 22:50:41.991398 kernel: ACPI: Dynamic OEM Table Load: Jan 13 22:50:41.991404 kernel: ACPI: SSDT 0xFFFF965BC0E3B000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 13 22:50:41.991410 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 13 22:50:41.991415 kernel: ACPI: Interpreter enabled Jan 13 22:50:41.991420 kernel: ACPI: PM: (supports S0 S5) Jan 13 22:50:41.991426 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 22:50:41.991431 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 13 22:50:41.991436 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 13 22:50:41.991441 kernel: HEST: Table parsing has been initialized. Jan 13 22:50:41.991447 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
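
Both the firmware tables reserved earlier and the six "Dynamic OEM Table Load" SSDTs above are exported under /sys/firmware/acpi/tables, with dynamically loaded ones in its dynamic/ subdirectory (reading them usually requires root). A minimal listing sketch:

    # List ACPI tables the kernel exposes; sizes match the lengths logged
    # at boot (e.g. the Cpu0Ist SSDT above is 0x683 = 1667 bytes).
    from pathlib import Path

    ACPI = Path("/sys/firmware/acpi/tables")
    for table in sorted(ACPI.glob("**/*")):
        if table.is_file():
            print(f"{str(table.relative_to(ACPI)):24s} {table.stat().st_size:6d} bytes")
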
Jan 13 22:50:41.991452 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 22:50:41.991458 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 22:50:41.991464 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 13 22:50:41.991469 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 13 22:50:41.991474 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 13 22:50:41.991480 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 13 22:50:41.991485 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 13 22:50:41.991490 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 13 22:50:41.991496 kernel: ACPI: \_TZ_.FN00: New power resource Jan 13 22:50:41.991501 kernel: ACPI: \_TZ_.FN01: New power resource Jan 13 22:50:41.991508 kernel: ACPI: \_TZ_.FN02: New power resource Jan 13 22:50:41.991513 kernel: ACPI: \_TZ_.FN03: New power resource Jan 13 22:50:41.991518 kernel: ACPI: \_TZ_.FN04: New power resource Jan 13 22:50:41.991523 kernel: ACPI: \PIN_: New power resource Jan 13 22:50:41.991529 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 13 22:50:41.991598 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 22:50:41.991650 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 13 22:50:41.991696 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 13 22:50:41.991705 kernel: PCI host bridge to bus 0000:00 Jan 13 22:50:41.991754 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 22:50:41.991796 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 22:50:41.991836 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 22:50:41.991877 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 13 22:50:41.991917 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 13 22:50:41.991957 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 13 22:50:41.992014 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 13 22:50:41.992069 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 13 22:50:41.992118 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.992170 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 13 22:50:41.992250 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 13 22:50:41.992300 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 13 22:50:41.992350 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 13 22:50:41.992400 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 13 22:50:41.992447 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 13 22:50:41.992494 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 13 22:50:41.992544 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 13 22:50:41.992590 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 13 22:50:41.992637 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 13 22:50:41.992688 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 13 22:50:41.992734 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 22:50:41.992786 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 13 22:50:41.992831 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 22:50:41.992881 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 13 22:50:41.992928 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 13 22:50:41.992977 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 13 22:50:41.993032 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 13 22:50:41.993080 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 13 22:50:41.993125 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 13 22:50:41.993193 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 13 22:50:41.993256 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 13 22:50:41.993304 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 13 22:50:41.993356 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 13 22:50:41.993403 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 13 22:50:41.993449 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 13 22:50:41.993494 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 13 22:50:41.993540 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 13 22:50:41.993585 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 13 22:50:41.993634 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 13 22:50:41.993679 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 13 22:50:41.993730 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 13 22:50:41.993777 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.993832 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 13 22:50:41.993878 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.993929 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 13 22:50:41.993975 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.994027 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 13 22:50:41.994076 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.994126 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 13 22:50:41.994195 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.994267 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 13 22:50:41.994314 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 13 22:50:41.994363 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 13 22:50:41.994413 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 13 22:50:41.994461 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 13 22:50:41.994509 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 13 22:50:41.994561 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 13 22:50:41.994609 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 13 22:50:41.994661 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 13 22:50:41.994710 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 13 22:50:41.994760 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 13 22:50:41.994807 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 13 22:50:41.994855 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 22:50:41.994902 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Jan 13 22:50:41.994956 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 13 22:50:41.995003 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 13 22:50:41.995051 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 13 22:50:41.995100 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 13 22:50:41.995148 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 13 22:50:41.995222 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 13 22:50:41.995285 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 22:50:41.995332 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 22:50:41.995378 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:50:41.995426 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 22:50:41.995477 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 13 22:50:41.995528 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 13 22:50:41.995575 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 13 22:50:41.995623 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 13 22:50:41.995671 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 13 22:50:41.995719 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.995766 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 22:50:41.995813 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 22:50:41.995861 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 22:50:41.995912 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 13 22:50:41.995961 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 13 22:50:41.996008 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 13 22:50:41.996057 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 13 22:50:41.996103 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 13 22:50:41.996151 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 13 22:50:41.996223 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 22:50:41.996283 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 22:50:41.996330 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 22:50:41.996376 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 22:50:41.996428 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 13 22:50:41.996476 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 13 22:50:41.996525 kernel: pci 0000:06:00.0: supports D1 D2 Jan 13 22:50:41.996571 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 22:50:41.996623 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 22:50:41.996670 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 22:50:41.996719 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 22:50:41.996772 kernel: pci_bus 0000:07: extended config space not accessible Jan 13 22:50:41.996826 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 13 22:50:41.996876 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 13 22:50:41.996926 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 13 22:50:41.996978 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 13 22:50:41.997028 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 22:50:41.997077 kernel: pci 0000:07:00.0: supports D1 D2 Jan 13 22:50:41.997126 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 22:50:41.997176 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 22:50:41.997270 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 22:50:41.997317 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 22:50:41.997327 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 13 22:50:41.997333 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 13 22:50:41.997339 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 13 22:50:41.997344 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 13 22:50:41.997350 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 13 22:50:41.997356 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 13 22:50:41.997361 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 13 22:50:41.997367 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 13 22:50:41.997373 kernel: iommu: Default domain type: Translated Jan 13 22:50:41.997379 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 22:50:41.997385 kernel: PCI: Using ACPI for IRQ routing Jan 13 22:50:41.997391 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 22:50:41.997396 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 13 22:50:41.997402 kernel: e820: reserve RAM buffer [mem 0x81b2b000-0x83ffffff] Jan 13 22:50:41.997407 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 13 22:50:41.997413 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 13 22:50:41.997418 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 13 22:50:41.997424 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 13 22:50:41.997475 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 13 22:50:41.997524 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 13 22:50:41.997573 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 22:50:41.997581 kernel: vgaarb: loaded Jan 13 22:50:41.997587 kernel: clocksource: Switched to clocksource tsc-early Jan 13 22:50:41.997593 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 22:50:41.997599 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 22:50:41.997604 kernel: pnp: PnP ACPI init Jan 13 22:50:41.997652 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 13 22:50:41.997701 kernel: pnp 00:02: [dma 0 disabled] Jan 13 22:50:41.997747 kernel: pnp 00:03: [dma 0 disabled] Jan 13 22:50:41.997795 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 13 22:50:41.997839 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 13 22:50:41.997884 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 13 22:50:41.997930 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 13 22:50:41.997974 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 13 22:50:41.998017 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 13 22:50:41.998059 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 13 22:50:41.998103 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 13 22:50:41.998146 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 13 22:50:41.998215 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 13 22:50:41.998278 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 13 22:50:41.998325 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 13 22:50:41.998368 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 13 22:50:41.998410 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 13 22:50:41.998451 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 13 22:50:41.998492 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 13 22:50:41.998534 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 13 22:50:41.998578 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 13 22:50:41.998623 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 13 22:50:41.998631 kernel: pnp: PnP ACPI: found 10 devices Jan 13 22:50:41.998637 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 22:50:41.998643 kernel: NET: Registered PF_INET protocol family Jan 13 22:50:41.998649 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 22:50:41.998655 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 13 22:50:41.998662 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 22:50:41.998668 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 22:50:41.998675 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 22:50:41.998680 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 13 22:50:41.998686 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 22:50:41.998692 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 22:50:41.998697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 22:50:41.998703 kernel: NET: Registered PF_XDP protocol family Jan 13 22:50:41.998749 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 13 22:50:41.998796 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 13 22:50:41.998844 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 13 22:50:41.998894 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 22:50:41.998944 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 22:50:41.998991 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 13 22:50:41.999038 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 13 22:50:41.999085 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 22:50:41.999131 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 13 22:50:41.999202 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:50:41.999272 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 13 22:50:41.999317 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 13 22:50:41.999364 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 13 22:50:41.999409 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 13 22:50:41.999455 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 13 22:50:41.999504 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 13 
22:50:41.999550 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 13 22:50:41.999597 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 13 22:50:41.999643 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 13 22:50:41.999691 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 13 22:50:41.999737 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 13 22:50:41.999784 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 13 22:50:41.999830 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 13 22:50:41.999877 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 13 22:50:41.999921 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 13 22:50:41.999963 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 22:50:42.000004 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 22:50:42.000045 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 22:50:42.000085 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 13 22:50:42.000125 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 13 22:50:42.000174 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 13 22:50:42.000262 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 13 22:50:42.000312 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 13 22:50:42.000356 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 13 22:50:42.000404 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 22:50:42.000446 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 13 22:50:42.000493 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 13 22:50:42.000537 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 13 22:50:42.000582 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 13 22:50:42.000626 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 13 22:50:42.000634 kernel: PCI: CLS 64 bytes, default 64 Jan 13 22:50:42.000639 kernel: DMAR: No ATSR found Jan 13 22:50:42.000645 kernel: DMAR: No SATC found Jan 13 22:50:42.000651 kernel: DMAR: dmar0: Using Queued invalidation Jan 13 22:50:42.000697 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 13 22:50:42.000746 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 13 22:50:42.000793 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 13 22:50:42.000839 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 13 22:50:42.000886 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 13 22:50:42.000931 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 13 22:50:42.000977 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 13 22:50:42.001022 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 13 22:50:42.001068 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 13 22:50:42.001114 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 13 22:50:42.001161 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 13 22:50:42.001253 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 13 22:50:42.001298 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 13 22:50:42.001345 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 13 22:50:42.001390 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 13 22:50:42.001437 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 13 22:50:42.001482 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 13 
22:50:42.001529 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 13 22:50:42.001576 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 13 22:50:42.001623 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 13 22:50:42.001668 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 13 22:50:42.001715 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 13 22:50:42.001762 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 13 22:50:42.001809 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 13 22:50:42.001857 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 13 22:50:42.001904 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 13 22:50:42.001955 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 13 22:50:42.001964 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 13 22:50:42.001969 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 22:50:42.001975 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 13 22:50:42.001981 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 13 22:50:42.001987 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 13 22:50:42.001992 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 13 22:50:42.001998 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 13 22:50:42.002047 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 13 22:50:42.002057 kernel: Initialise system trusted keyrings Jan 13 22:50:42.002063 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 13 22:50:42.002069 kernel: Key type asymmetric registered Jan 13 22:50:42.002074 kernel: Asymmetric key parser 'x509' registered Jan 13 22:50:42.002080 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 22:50:42.002085 kernel: io scheduler mq-deadline registered Jan 13 22:50:42.002091 kernel: io scheduler kyber registered Jan 13 22:50:42.002097 kernel: io scheduler bfq registered Jan 13 22:50:42.002144 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 13 22:50:42.002236 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 13 22:50:42.002284 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 13 22:50:42.002329 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 13 22:50:42.002375 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 13 22:50:42.002422 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 13 22:50:42.002471 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 13 22:50:42.002481 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 13 22:50:42.002487 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 13 22:50:42.002493 kernel: pstore: Using crash dump compression: deflate Jan 13 22:50:42.002499 kernel: pstore: Registered erst as persistent store backend Jan 13 22:50:42.002505 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 22:50:42.002510 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 22:50:42.002516 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 22:50:42.002522 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 22:50:42.002527 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 13 22:50:42.002577 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 13 22:50:42.002586 kernel: i8042: PNP: No PS/2 controller found. 
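
The "Adding to iommu group N" lines above define the isolation units that matter for device passthrough; note how 0000:00:1f.0, 00:1f.4 and 00:1f.5 all land in group 14. The same mapping can be rebuilt from sysfs after boot; a minimal sketch:

    # Rebuild the boot-time "pci ...: Adding to iommu group N" mapping.
    # Devices sharing a group cannot be isolated from one another (VFIO).
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name:>3}: {' '.join(devices)}")
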
Jan 13 22:50:42.002628 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 13 22:50:42.002671 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 13 22:50:42.002713 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-13T22:50:40 UTC (1736808640) Jan 13 22:50:42.002755 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 13 22:50:42.002763 kernel: intel_pstate: Intel P-state driver initializing Jan 13 22:50:42.002771 kernel: intel_pstate: Disabling energy efficiency optimization Jan 13 22:50:42.002776 kernel: intel_pstate: HWP enabled Jan 13 22:50:42.002782 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 13 22:50:42.002788 kernel: vesafb: scrolling: redraw Jan 13 22:50:42.002793 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 13 22:50:42.002799 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000089463221, using 768k, total 768k Jan 13 22:50:42.002805 kernel: Console: switching to colour frame buffer device 128x48 Jan 13 22:50:42.002810 kernel: fb0: VESA VGA frame buffer device Jan 13 22:50:42.002816 kernel: NET: Registered PF_INET6 protocol family Jan 13 22:50:42.002822 kernel: Segment Routing with IPv6 Jan 13 22:50:42.002829 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 22:50:42.002834 kernel: NET: Registered PF_PACKET protocol family Jan 13 22:50:42.002840 kernel: Key type dns_resolver registered Jan 13 22:50:42.002845 kernel: microcode: Microcode Update Driver: v2.2. Jan 13 22:50:42.002851 kernel: IPI shorthand broadcast: enabled Jan 13 22:50:42.002857 kernel: sched_clock: Marking stable (2476000680, 1385589175)->(4405521419, -543931564) Jan 13 22:50:42.002863 kernel: registered taskstats version 1 Jan 13 22:50:42.002868 kernel: Loading compiled-in X.509 certificates Jan 13 22:50:42.002874 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 22:50:42.002880 kernel: Key type .fscrypt registered Jan 13 22:50:42.002886 kernel: Key type fscrypt-provisioning registered Jan 13 22:50:42.002892 kernel: ima: Allocated hash algorithm: sha1 Jan 13 22:50:42.002897 kernel: ima: No architecture policies found Jan 13 22:50:42.002903 kernel: clk: Disabling unused clocks Jan 13 22:50:42.002909 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 22:50:42.002914 kernel: Write protecting the kernel read-only data: 36864k Jan 13 22:50:42.002920 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 22:50:42.002927 kernel: Run /init as init process Jan 13 22:50:42.002932 kernel: with arguments: Jan 13 22:50:42.002938 kernel: /init Jan 13 22:50:42.002943 kernel: with environment: Jan 13 22:50:42.002949 kernel: HOME=/ Jan 13 22:50:42.002954 kernel: TERM=linux Jan 13 22:50:42.002960 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 22:50:42.002967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 22:50:42.002975 systemd[1]: Detected architecture x86-64. Jan 13 22:50:42.002981 systemd[1]: Running in initrd. Jan 13 22:50:42.002987 systemd[1]: No hostname configured, using default hostname. Jan 13 22:50:42.002993 systemd[1]: Hostname set to <localhost>. Jan 13 22:50:42.002998 systemd[1]: Initializing machine ID from random generator.
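
The vesafb figures above are self-consistent: a 1024x768 mode at 8 bits per pixel with linelength=1024 needs exactly the reported 768k, sitting in the ASPEED VGA BAR at 0x94000000 enumerated earlier. The arithmetic:

    # Check "vesafb: mode is 1024x768x8 ... using 768k, total 768k".
    width, height, bpp = 1024, 768, 8
    linelength = width * (bpp // 8)          # 1024 bytes per scanline
    fb_bytes = linelength * height           # 786432 bytes
    print(linelength, fb_bytes, fb_bytes // 1024, "KiB")   # 1024 786432 768 KiB
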
Jan 13 22:50:42.003004 systemd[1]: Queued start job for default target initrd.target. Jan 13 22:50:42.003010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:50:42.003016 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 22:50:42.003023 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 22:50:42.003029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 22:50:42.003035 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 22:50:42.003041 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 22:50:42.003048 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 22:50:42.003054 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 22:50:42.003060 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 13 22:50:42.003067 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 13 22:50:42.003072 kernel: clocksource: Switched to clocksource tsc Jan 13 22:50:42.003078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 22:50:42.003084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 22:50:42.003090 systemd[1]: Reached target paths.target - Path Units. Jan 13 22:50:42.003096 systemd[1]: Reached target slices.target - Slice Units. Jan 13 22:50:42.003102 systemd[1]: Reached target swap.target - Swaps. Jan 13 22:50:42.003108 systemd[1]: Reached target timers.target - Timer Units. Jan 13 22:50:42.003115 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 22:50:42.003121 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 22:50:42.003127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 22:50:42.003133 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 22:50:42.003139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 22:50:42.003144 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 22:50:42.003150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 22:50:42.003156 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 22:50:42.003162 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 22:50:42.003171 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 22:50:42.003177 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 22:50:42.003183 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 22:50:42.003215 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 22:50:42.003251 systemd-journald[267]: Collecting audit messages is disabled. Jan 13 22:50:42.003266 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 22:50:42.003272 systemd-journald[267]: Journal started Jan 13 22:50:42.003285 systemd-journald[267]: Runtime Journal (/run/log/journal/bec9d4e0f2194ff59fcade315701280d) is 8.0M, max 639.9M, 631.9M free. 
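
The device units above use systemd's unit-name escaping, where "/" becomes "-" and unsafe bytes become \xNN, so /dev/disk/by-label/EFI-SYSTEM turns into dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A simplified sketch of that encoding (escape_path is an illustrative stand-in for systemd-escape --path; the real rules also special-case a leading dot):

    # Simplified systemd path escaping: "/" -> "-", other bytes outside
    # [A-Za-z0-9:_.] -> \xNN.  Illustrative only; see systemd.unit(5).
    def escape_path(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append(f"\\x{ord(ch):02x}")
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
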
Jan 13 22:50:42.017282 systemd-modules-load[268]: Inserted module 'overlay' Jan 13 22:50:42.046172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:50:42.089232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 22:50:42.089265 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 22:50:42.108682 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 22:50:42.126105 kernel: Bridge firewalling registered Jan 13 22:50:42.126087 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 13 22:50:42.126196 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:50:42.176591 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 22:50:42.185469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 22:50:42.202512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:50:42.239492 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:50:42.250785 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 22:50:42.251162 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 22:50:42.251580 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 22:50:42.256230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:50:42.256967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 22:50:42.257079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:50:42.257997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:50:42.259067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 22:50:42.263009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:50:42.265514 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:50:42.283148 systemd-resolved[299]: Positive Trust Anchors: Jan 13 22:50:42.283156 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 22:50:42.283206 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 22:50:42.285490 systemd-resolved[299]: Defaulting to hostname 'linux'. Jan 13 22:50:42.286439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 22:50:42.318780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:50:42.361457 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
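
The positive trust anchor systemd-resolved logs above is the root zone's DNSSEC DS record (RFC 4034): key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256), followed by the SHA-256 digest of the root key-signing key. A small sketch splitting it into those fields:

    # Split the logged root trust anchor into its DS-record fields.
    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, klass, rtype, key_tag, alg, digest_type, digest = anchor.split()
    print(f"key tag {key_tag}, algorithm {alg} (RSASHA256), "
          f"digest type {digest_type} (SHA-256), {len(digest) // 2}-byte digest")
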
Jan 13 22:50:42.453968 dracut-cmdline[309]: dracut-dracut-053 Jan 13 22:50:42.461377 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 22:50:42.665196 kernel: SCSI subsystem initialized Jan 13 22:50:42.688201 kernel: Loading iSCSI transport class v2.0-870. Jan 13 22:50:42.712226 kernel: iscsi: registered transport (tcp) Jan 13 22:50:42.743163 kernel: iscsi: registered transport (qla4xxx) Jan 13 22:50:42.743184 kernel: QLogic iSCSI HBA Driver Jan 13 22:50:42.776506 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 22:50:42.802420 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 22:50:42.860147 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 22:50:42.860166 kernel: device-mapper: uevent: version 1.0.3 Jan 13 22:50:42.879995 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 22:50:42.937245 kernel: raid6: avx2x4 gen() 53085 MB/s Jan 13 22:50:42.969244 kernel: raid6: avx2x2 gen() 53707 MB/s Jan 13 22:50:43.005673 kernel: raid6: avx2x1 gen() 45083 MB/s Jan 13 22:50:43.005692 kernel: raid6: using algorithm avx2x2 gen() 53707 MB/s Jan 13 22:50:43.053730 kernel: raid6: .... xor() 31132 MB/s, rmw enabled Jan 13 22:50:43.053747 kernel: raid6: using avx2x2 recovery algorithm Jan 13 22:50:43.095203 kernel: xor: automatically using best checksumming function avx Jan 13 22:50:43.207181 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 22:50:43.213209 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 22:50:43.241489 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 22:50:43.248208 systemd-udevd[494]: Using default interface naming scheme 'v255'. Jan 13 22:50:43.252250 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 22:50:43.275004 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 22:50:43.331978 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jan 13 22:50:43.347916 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 22:50:43.383547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 22:50:43.440928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 22:50:43.476510 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 13 22:50:43.476543 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 22:50:43.486823 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 22:50:43.507236 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 22:50:43.489283 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 22:50:43.489380 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:50:43.508171 kernel: libata version 3.00 loaded. 
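
dracut echoes back the kernel command line it will act on; flags such as verity.usrhash and mount.usr are plain key=value tokens. A small sketch of how such a line can be split into a lookup table (illustration only, not dracut's actual shell parser, and it ignores quoting):

    # Parse a kernel command line (e.g. the contents of /proc/cmdline)
    # into a dict; bare flags like flatcar.autologin map to "".
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 "
        "root=LABEL=ROOT flatcar.first_boot=detected flatcar.autologin"
    )
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")   # split at the first '=' only
        params[key] = value
    print(params["root"], params["verity.usrhash"][:16])
    # LABEL=ROOT 8945029ddd0f3864
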
Jan 13 22:50:43.532021 kernel: ACPI: bus type USB registered Jan 13 22:50:43.532046 kernel: usbcore: registered new interface driver usbfs Jan 13 22:50:43.532059 kernel: PTP clock support registered Jan 13 22:50:43.532069 kernel: usbcore: registered new interface driver hub Jan 13 22:50:43.556558 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:50:43.618236 kernel: usbcore: registered new device driver usb Jan 13 22:50:43.618251 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 22:50:43.618259 kernel: AES CTR mode by8 optimization enabled Jan 13 22:50:43.618266 kernel: ahci 0000:00:17.0: version 3.0 Jan 13 22:50:43.999081 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 22:50:43.999174 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 13 22:50:43.999242 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 13 22:50:43.999305 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 13 22:50:43.999365 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 13 22:50:43.999425 kernel: scsi host0: ahci Jan 13 22:50:43.999488 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 13 22:50:43.999550 kernel: scsi host1: ahci Jan 13 22:50:43.999614 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 13 22:50:43.999675 kernel: scsi host2: ahci Jan 13 22:50:43.999734 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 13 22:50:43.999793 kernel: scsi host3: ahci Jan 13 22:50:43.999851 kernel: hub 1-0:1.0: USB hub found Jan 13 22:50:43.999914 kernel: scsi host4: ahci Jan 13 22:50:43.999972 kernel: hub 1-0:1.0: 16 ports detected Jan 13 22:50:44.000031 kernel: scsi host5: ahci Jan 13 22:50:44.000090 kernel: hub 2-0:1.0: USB hub found Jan 13 22:50:44.000158 kernel: scsi host6: ahci Jan 13 22:50:44.000233 kernel: hub 2-0:1.0: 10 ports detected Jan 13 22:50:44.000300 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 13 22:50:44.000311 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 13 22:50:44.000318 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 13 22:50:44.000325 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 13 22:50:44.000332 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 13 22:50:44.000339 kernel: pps pps0: new PPS source ptp0 Jan 13 22:50:44.000402 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 13 22:50:44.000410 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 13 22:50:44.100293 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 13 22:50:44.100306 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 13 22:50:44.150266 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 22:50:44.150343 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 13 22:50:44.150353 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c2 Jan 13 22:50:44.150418 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 13 22:50:44.150426 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 13 22:50:44.150489 kernel: hub 1-14:1.0: USB hub found Jan 13 22:50:44.150560 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 13 22:50:44.150625 kernel: pps pps1: new PPS source ptp1 Jan 13 22:50:44.150687 kernel: hub 1-14:1.0: 4 ports detected Jan 13 22:50:44.150749 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 13 22:50:44.286544 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 13 22:50:44.286618 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:c3 Jan 13 22:50:44.286684 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 13 22:50:44.286750 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 13 22:50:43.590580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 22:50:44.498264 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498353 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 22:50:44.498361 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498368 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498378 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498385 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 13 22:50:44.498392 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 22:50:44.498400 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 22:50:44.498409 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 13 22:50:44.498416 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 13 22:50:44.498435 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 22:50:44.498443 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 13 22:50:43.590725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
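
Both Micron SSDs report 937703088 512-byte sectors, which is exactly the "480 GB/447 GiB" the sd driver prints further down: the same byte count expressed in decimal gigabytes and binary gibibytes.

    sectors = 937_703_088
    size = sectors * 512              # 512-byte logical blocks
    print(size)                       # 480103981056 bytes
    print(round(size / 10**9))        # 480 (GB, decimal)
    print(round(size / 2**30))        # 447 (GiB, binary)
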
Jan 13 22:50:44.563696 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Jan 13 22:50:44.983319 kernel: ata1.00: Features: NCQ-prio Jan 13 22:50:44.983330 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 22:50:44.983406 kernel: ata2.00: Features: NCQ-prio Jan 13 22:50:44.983415 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 22:50:44.983423 kernel: ata1.00: configured for UDMA/133 Jan 13 22:50:44.983430 kernel: ata2.00: configured for UDMA/133 Jan 13 22:50:44.983437 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 22:50:45.297665 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 13 22:50:45.297745 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 13 22:50:45.297816 kernel: usbcore: registered new interface driver usbhid Jan 13 22:50:45.297825 kernel: usbhid: USB HID core driver Jan 13 22:50:45.297832 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 13 22:50:45.297840 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 13 22:50:45.297903 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.297913 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 13 22:50:45.297985 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:50:45.297993 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 22:50:45.298052 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 13 22:50:45.298110 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 13 22:50:45.298165 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 13 22:50:45.298259 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 13 22:50:45.298317 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 22:50:45.298374 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 13 22:50:45.298430 kernel: ata2.00: Enabling discard_zeroes_data Jan 13 22:50:45.298438 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 13 22:50:45.298493 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 22:50:45.298555 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 13 22:50:45.298615 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 13 22:50:45.298624 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 13 22:50:45.298682 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 13 22:50:45.298750 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 13 22:50:45.298807 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 22:50:45.298867 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 13 22:50:45.298924 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 22:50:45.298980 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Jan 13 22:50:45.583091 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 13 22:50:45.583194 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 13 22:50:45.583281 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.583290 kernel: 
GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 22:50:45.583298 kernel: GPT:9289727 != 937703087 Jan 13 22:50:45.583305 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 22:50:45.583312 kernel: GPT:9289727 != 937703087 Jan 13 22:50:45.583318 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 22:50:45.583325 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:45.583334 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 13 22:50:45.583396 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 13 22:50:45.583459 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 13 22:50:45.583519 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (567) Jan 13 22:50:45.583528 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (574) Jan 13 22:50:45.583535 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.583542 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:45.583549 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 13 22:50:45.583611 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:45.583618 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:43.668267 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:50:45.615294 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:44.451356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:50:45.636245 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:44.531631 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 22:50:45.657277 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 13 22:50:44.565709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 22:50:44.607765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 22:50:45.702280 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 13 22:50:45.702404 disk-uuid[712]: Primary Header is updated. Jan 13 22:50:45.702404 disk-uuid[712]: Secondary Entries is updated. Jan 13 22:50:45.702404 disk-uuid[712]: Secondary Header is updated. Jan 13 22:50:44.627446 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 22:50:44.668334 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 22:50:44.721623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:50:44.761356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 22:50:45.332320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:50:45.351057 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 13 22:50:45.399419 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:50:45.433000 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 13 22:50:45.448291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 13 22:50:45.464581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. 
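
The GPT complaints above ("GPT:9289727 != 937703087") are the expected first-boot signature of a small disk image written to a much larger disk: the backup GPT header still sits where the ~4.4 GiB image ended rather than at the end of the 447 GiB SSD, and the disk-uuid step whose "Primary Header is updated / Secondary Header is updated" output appears nearby is what rewrites both headers. The arithmetic:

    image_last_lba, disk_last_lba = 9_289_727, 937_703_087
    print((image_last_lba + 1) * 512 / 2**30)   # ~4.43 GiB: the written image
    print((disk_last_lba + 1) * 512 / 2**30)    # ~447 GiB: the whole SSD
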
Jan 13 22:50:45.476294 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 13 22:50:45.497492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 22:50:46.620624 kernel: ata1.00: Enabling discard_zeroes_data Jan 13 22:50:46.640058 disk-uuid[713]: The operation has completed successfully. Jan 13 22:50:46.649416 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jan 13 22:50:46.675866 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 22:50:46.675913 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 22:50:46.716595 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 22:50:46.754283 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 22:50:46.754298 sh[735]: Success Jan 13 22:50:46.789951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 22:50:46.810039 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 22:50:46.819583 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 22:50:46.872139 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 22:50:46.872194 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:46.893251 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 22:50:46.912139 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 22:50:46.929939 kernel: BTRFS info (device dm-0): using free space tree Jan 13 22:50:46.967175 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 22:50:46.970474 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 22:50:46.979701 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 22:50:46.989282 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 22:50:47.120526 kernel: BTRFS info (device sdb6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:47.120540 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:47.120547 kernel: BTRFS info (device sdb6): using free space tree Jan 13 22:50:47.120554 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 22:50:47.120562 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 22:50:47.120571 kernel: BTRFS info (device sdb6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:47.132497 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 22:50:47.132929 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 22:50:47.170368 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 22:50:47.185318 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 22:50:47.190363 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
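
verity-setup.service is what makes /dev/mapper/usr appear: dm-verity checks every block read from the /usr partition against a hash tree whose root must equal the verity.usrhash= value on the kernel command line. A toy sketch of that shape only; real dm-verity adds a salt, a superblock, and a specific on-disk hash layout, so this is not the kernel's algorithm:

    import hashlib

    def verity_root(blocks: list[bytes]) -> str:
        # Hash each data block, then hash pairs of hashes upward
        # until a single root digest remains.
        level = [hashlib.sha256(b).digest() for b in blocks]
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    print(verity_root([b"\x00" * 4096, b"\x01" * 4096]))
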
Jan 13 22:50:47.233959 ignition[877]: Ignition 2.19.0 Jan 13 22:50:47.233964 ignition[877]: Stage: fetch-offline Jan 13 22:50:47.236182 unknown[877]: fetched base config from "system" Jan 13 22:50:47.233988 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:47.236186 unknown[877]: fetched user config from "system" Jan 13 22:50:47.233993 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:47.237061 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 22:50:47.234052 ignition[877]: parsed url from cmdline: "" Jan 13 22:50:47.240158 systemd-networkd[918]: lo: Link UP Jan 13 22:50:47.234053 ignition[877]: no config URL provided Jan 13 22:50:47.240160 systemd-networkd[918]: lo: Gained carrier Jan 13 22:50:47.234056 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 22:50:47.242452 systemd-networkd[918]: Enumeration completed Jan 13 22:50:47.234079 ignition[877]: parsing config with SHA512: e1e4e4faaadf6e31531ab2a79780ff859a84f4f3d183a50e6f5da4051c7fa709de68b2a371ce7285312a9fba18e016bb27a68170aec49d49884c91338bbf886d Jan 13 22:50:47.242525 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 22:50:47.236398 ignition[877]: fetch-offline: fetch-offline passed Jan 13 22:50:47.243207 systemd-networkd[918]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 22:50:47.236401 ignition[877]: POST message to Packet Timeline Jan 13 22:50:47.254500 systemd[1]: Reached target network.target - Network. Jan 13 22:50:47.236403 ignition[877]: POST Status error: resource requires networking Jan 13 22:50:47.271192 systemd-networkd[918]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 22:50:47.236439 ignition[877]: Ignition finished successfully Jan 13 22:50:47.285513 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 22:50:47.303240 ignition[932]: Ignition 2.19.0 Jan 13 22:50:47.291417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 22:50:47.303246 ignition[932]: Stage: kargs Jan 13 22:50:47.299357 systemd-networkd[918]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 22:50:47.303413 ignition[932]: no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:47.303423 ignition[932]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:47.513331 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 13 22:50:47.504436 systemd-networkd[918]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
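
Ignition logs the SHA512 of whatever config it parsed (the e1e4... digest above), so a config can later be matched to a boot record. Reproducing the digest from the file the log names is one hashlib call; running this on the host itself is assumed:

    import hashlib

    with open("/usr/lib/ignition/user.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())
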
Jan 13 22:50:47.304228 ignition[932]: kargs: kargs passed Jan 13 22:50:47.304232 ignition[932]: POST message to Packet Timeline Jan 13 22:50:47.304244 ignition[932]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:47.304920 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51641->[::1]:53: read: connection refused Jan 13 22:50:47.505513 ignition[932]: GET https://metadata.packet.net/metadata: attempt #2 Jan 13 22:50:47.505981 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49991->[::1]:53: read: connection refused Jan 13 22:50:47.744288 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 13 22:50:47.745281 systemd-networkd[918]: eno1: Link UP Jan 13 22:50:47.745461 systemd-networkd[918]: eno2: Link UP Jan 13 22:50:47.745590 systemd-networkd[918]: enp1s0f0np0: Link UP Jan 13 22:50:47.745740 systemd-networkd[918]: enp1s0f0np0: Gained carrier Jan 13 22:50:47.756440 systemd-networkd[918]: enp1s0f1np1: Link UP Jan 13 22:50:47.795443 systemd-networkd[918]: enp1s0f0np0: DHCPv4 address 147.28.180.253/31, gateway 147.28.180.252 acquired from 145.40.83.140 Jan 13 22:50:47.906307 ignition[932]: GET https://metadata.packet.net/metadata: attempt #3 Jan 13 22:50:47.907357 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56441->[::1]:53: read: connection refused Jan 13 22:50:48.561948 systemd-networkd[918]: enp1s0f1np1: Gained carrier Jan 13 22:50:48.707805 ignition[932]: GET https://metadata.packet.net/metadata: attempt #4 Jan 13 22:50:48.708843 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34355->[::1]:53: read: connection refused Jan 13 22:50:48.753592 systemd-networkd[918]: enp1s0f0np0: Gained IPv6LL Jan 13 22:50:49.649770 systemd-networkd[918]: enp1s0f1np1: Gained IPv6LL Jan 13 22:50:50.309319 ignition[932]: GET https://metadata.packet.net/metadata: attempt #5 Jan 13 22:50:50.310418 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54516->[::1]:53: read: connection refused Jan 13 22:50:53.513160 ignition[932]: GET https://metadata.packet.net/metadata: attempt #6 Jan 13 22:50:54.157848 ignition[932]: GET result: OK Jan 13 22:50:54.515960 ignition[932]: Ignition finished successfully Jan 13 22:50:54.519099 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 22:50:54.548469 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 22:50:54.556001 ignition[950]: Ignition 2.19.0 Jan 13 22:50:54.556005 ignition[950]: Stage: disks Jan 13 22:50:54.556105 ignition[950]: no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:54.556110 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:54.556660 ignition[950]: disks: disks passed Jan 13 22:50:54.556662 ignition[950]: POST message to Packet Timeline Jan 13 22:50:54.556670 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:55.339604 ignition[950]: GET result: OK Jan 13 22:50:55.686574 ignition[950]: Ignition finished successfully Jan 13 22:50:55.688449 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 22:50:55.705407 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
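
The metadata GET attempts above are spaced roughly 0.2 s, 0.4 s, 0.8 s, 1.6 s, then 3.2 s apart: a doubling backoff while the initrd waits for a carrier and working DNS. A minimal Python sketch of that retry shape (not Ignition's actual Go implementation; the names and defaults here are illustrative):

    import time
    import urllib.request

    def fetch_with_backoff(url: str, delay: float = 0.2, attempts: int = 10) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:           # URLError subclasses OSError
                print(f"GET error on attempt #{attempt}: {err}")
                time.sleep(delay)
                delay *= 2                   # 0.2, 0.4, 0.8, 1.6, 3.2, ...
        raise TimeoutError(url)

    # fetch_with_backoff("https://metadata.packet.net/metadata")
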
Jan 13 22:50:55.724592 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 22:50:55.734750 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 22:50:55.756736 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 22:50:55.784473 systemd[1]: Reached target basic.target - Basic System. Jan 13 22:50:55.806468 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 22:50:55.845536 systemd-fsck[970]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 22:50:55.856612 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 22:50:55.875242 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 22:50:55.975173 kernel: EXT4-fs (sdb9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 22:50:55.975137 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 22:50:55.984667 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 22:50:56.019350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 22:50:56.028271 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 22:50:56.151253 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (980) Jan 13 22:50:56.151339 kernel: BTRFS info (device sdb6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:56.151347 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:56.151355 kernel: BTRFS info (device sdb6): using free space tree Jan 13 22:50:56.151362 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 22:50:56.151369 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 22:50:56.049119 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 22:50:56.162556 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 13 22:50:56.185278 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 22:50:56.231429 coreos-metadata[982]: Jan 13 22:50:56.207 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:50:56.185298 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 22:50:56.263331 coreos-metadata[998]: Jan 13 22:50:56.207 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:50:56.186308 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 22:50:56.212479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 22:50:56.301311 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 22:50:56.262494 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 22:50:56.321297 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory Jan 13 22:50:56.331284 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 22:50:56.341270 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 22:50:56.348717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 22:50:56.385675 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 22:50:56.396419 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
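
The fsck summary in this stretch ("clean, 14/553520 files, 52654/553472 blocks") means 14 inodes and about 9.5% of blocks are in use; assuming ext4's common 4 KiB block size (not printed in the log), that is roughly a 2.1 GiB ROOT filesystem at this point:

    blocks_used, blocks_total = 52_654, 553_472
    print(round(100 * blocks_used / blocks_total, 1))   # 9.5 (% of blocks used)
    print(blocks_total * 4096 / 2**30)                  # ~2.11 GiB, if 4 KiB blocks
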
Jan 13 22:50:56.448220 kernel: BTRFS info (device sdb6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:56.430661 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 22:50:56.456307 ignition[1100]: INFO : Ignition 2.19.0 Jan 13 22:50:56.456307 ignition[1100]: INFO : Stage: mount Jan 13 22:50:56.456307 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:56.456307 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:56.456307 ignition[1100]: INFO : mount: mount passed Jan 13 22:50:56.456307 ignition[1100]: INFO : POST message to Packet Timeline Jan 13 22:50:56.456307 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:56.457209 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 22:50:56.836785 coreos-metadata[982]: Jan 13 22:50:56.836 INFO Fetch successful Jan 13 22:50:56.907423 coreos-metadata[998]: Jan 13 22:50:56.907 INFO Fetch successful Jan 13 22:50:56.915274 coreos-metadata[982]: Jan 13 22:50:56.914 INFO wrote hostname ci-4081.3.0-a-66cd838664 to /sysroot/etc/hostname Jan 13 22:50:56.915756 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 22:50:56.938578 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 13 22:50:56.938621 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 13 22:50:57.143557 ignition[1100]: INFO : GET result: OK Jan 13 22:50:57.484066 ignition[1100]: INFO : Ignition finished successfully Jan 13 22:50:57.485419 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 22:50:57.517394 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 22:50:57.528317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 22:50:57.590798 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1126) Jan 13 22:50:57.590817 kernel: BTRFS info (device sdb6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 22:50:57.609820 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:50:57.626723 kernel: BTRFS info (device sdb6): using free space tree Jan 13 22:50:57.663331 kernel: BTRFS info (device sdb6): enabling ssd optimizations Jan 13 22:50:57.663353 kernel: BTRFS info (device sdb6): auto enabling async discard Jan 13 22:50:57.676055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 22:50:57.707797 ignition[1143]: INFO : Ignition 2.19.0 Jan 13 22:50:57.707797 ignition[1143]: INFO : Stage: files Jan 13 22:50:57.722408 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 22:50:57.722408 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:50:57.722408 ignition[1143]: DEBUG : files: compiled without relabeling support, skipping Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 22:50:57.722408 ignition[1143]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 22:50:57.722408 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 22:50:57.711954 unknown[1143]: wrote ssh authorized keys file for user: core Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:57.853260 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:58.103515 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 22:50:58.406737 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 22:50:59.239713 ignition[1143]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 22:50:59.239713 ignition[1143]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 22:50:59.270484 ignition[1143]: INFO : files: files passed Jan 13 22:50:59.270484 ignition[1143]: INFO : POST message to Packet Timeline Jan 13 22:50:59.270484 ignition[1143]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:50:59.872205 ignition[1143]: INFO : GET result: OK Jan 13 22:51:00.254615 ignition[1143]: INFO : Ignition finished successfully Jan 13 22:51:00.257699 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 22:51:00.288454 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 22:51:00.288856 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 22:51:00.306701 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 22:51:00.306767 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 22:51:00.339809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 22:51:00.359549 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 22:51:00.401440 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 22:51:00.401440 initrd-setup-root-after-ignition[1180]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 22:51:00.396620 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 13 22:51:00.452442 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 22:51:00.469268 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 22:51:00.469315 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 22:51:00.487560 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 22:51:00.508381 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 22:51:00.528487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 22:51:00.546605 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 22:51:00.614753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 22:51:00.649828 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 22:51:00.668839 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:51:00.672449 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 22:51:00.704576 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 22:51:00.723586 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 22:51:00.723744 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 22:51:00.751126 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 22:51:00.772894 systemd[1]: Stopped target basic.target - Basic System. Jan 13 22:51:00.791900 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 22:51:00.809877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 22:51:00.820055 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 22:51:00.851878 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 22:51:00.871885 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 22:51:00.893909 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 22:51:00.915910 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 22:51:00.926057 systemd[1]: Stopped target swap.target - Swaps. Jan 13 22:51:00.950750 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 22:51:00.951158 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 22:51:00.975983 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 22:51:00.996909 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 22:51:01.017761 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 22:51:01.018218 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:51:01.039787 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 22:51:01.040213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 22:51:01.070837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 22:51:01.071306 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 22:51:01.091089 systemd[1]: Stopped target paths.target - Path Units. Jan 13 22:51:01.109733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 13 22:51:01.110230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 22:51:01.120161 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 22:51:01.140053 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 22:51:01.164867 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 22:51:01.165191 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 22:51:01.185917 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 22:51:01.186249 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 22:51:01.208945 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 22:51:01.322328 ignition[1205]: INFO : Ignition 2.19.0 Jan 13 22:51:01.322328 ignition[1205]: INFO : Stage: umount Jan 13 22:51:01.322328 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 22:51:01.322328 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 13 22:51:01.322328 ignition[1205]: INFO : umount: umount passed Jan 13 22:51:01.322328 ignition[1205]: INFO : POST message to Packet Timeline Jan 13 22:51:01.322328 ignition[1205]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 13 22:51:01.209374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 22:51:01.227965 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 22:51:01.228383 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 22:51:01.245962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 22:51:01.246380 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 22:51:01.279468 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 22:51:01.294952 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 22:51:01.304644 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 22:51:01.305054 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 22:51:01.333461 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 22:51:01.333604 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 22:51:01.369636 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 22:51:01.369738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 22:51:01.384662 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 22:51:01.448245 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 22:51:01.448315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 22:51:01.936925 ignition[1205]: INFO : GET result: OK Jan 13 22:51:02.711842 ignition[1205]: INFO : Ignition finished successfully Jan 13 22:51:02.715062 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 22:51:02.715372 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 22:51:02.733427 systemd[1]: Stopped target network.target - Network. Jan 13 22:51:02.749447 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 22:51:02.749642 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 22:51:02.767521 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 22:51:02.767655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 13 22:51:02.785590 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 22:51:02.785748 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 22:51:02.804568 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 22:51:02.804734 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 22:51:02.823571 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 22:51:02.823738 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 22:51:02.842994 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 22:51:02.858330 systemd-networkd[918]: enp1s0f1np1: DHCPv6 lease lost Jan 13 22:51:02.860642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 22:51:02.872405 systemd-networkd[918]: enp1s0f0np0: DHCPv6 lease lost Jan 13 22:51:02.879561 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 22:51:02.879968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 22:51:02.898861 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 22:51:02.899320 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 22:51:02.919032 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 22:51:02.919278 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 22:51:02.954421 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 22:51:02.977339 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 22:51:02.977402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 22:51:02.996467 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 22:51:02.996557 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:51:03.016559 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 22:51:03.016718 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 22:51:03.034558 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 22:51:03.034726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:51:03.054792 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 22:51:03.077461 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 22:51:03.077833 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 22:51:03.110720 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 22:51:03.110763 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 22:51:03.134431 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 22:51:03.134474 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 22:51:03.154357 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 22:51:03.154423 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 22:51:03.195397 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 22:51:03.195571 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 22:51:03.226579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 22:51:03.226726 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 22:51:03.280468 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 22:51:03.281422 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 22:51:03.281448 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:51:03.326326 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 22:51:03.326375 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:51:03.347369 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 22:51:03.567349 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Jan 13 22:51:03.347460 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:51:03.368593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 22:51:03.368734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:51:03.390780 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 22:51:03.391042 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 22:51:03.408474 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 22:51:03.408812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 22:51:03.430578 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 22:51:03.459733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 22:51:03.500102 systemd[1]: Switching root. Jan 13 22:51:03.661374 systemd-journald[267]: Journal stopped Jan 13 22:51:06.270976 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 22:51:06.270990 kernel: SELinux: policy capability open_perms=1 Jan 13 22:51:06.270997 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 22:51:06.271004 kernel: SELinux: policy capability always_check_network=0 Jan 13 22:51:06.271009 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 22:51:06.271014 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 22:51:06.271020 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 22:51:06.271025 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 22:51:06.271030 kernel: audit: type=1403 audit(1736808663.877:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 22:51:06.271037 systemd[1]: Successfully loaded SELinux policy in 160.667ms. Jan 13 22:51:06.271045 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.051ms. Jan 13 22:51:06.271051 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 22:51:06.271057 systemd[1]: Detected architecture x86-64. Jan 13 22:51:06.271063 systemd[1]: Detected first boot. Jan 13 22:51:06.271069 systemd[1]: Hostname set to <ci-4081.3.0-a-66cd838664>. Jan 13 22:51:06.271077 systemd[1]: Initializing machine ID from random generator. Jan 13 22:51:06.271083 zram_generator::config[1257]: No configuration found. Jan 13 22:51:06.271091 systemd[1]: Populated /etc with preset unit settings. Jan 13 22:51:06.271097 systemd[1]: initrd-switch-root.service: Deactivated successfully.
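
"Initializing machine ID from random generator" produces the 128-bit ID that names journal directories such as /run/log/journal/49c9ca2b09db4285b45ed77fa7daadfe above. Per machine-id(5), a randomly generated ID is formatted like a version-4 UUID without dashes, so a rough equivalent is:

    import uuid

    # 32 lowercase hex digits; systemd persists the real one to /etc/machine-id.
    print(uuid.uuid4().hex)
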
Jan 13 22:51:06.271103 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 22:51:06.271109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 22:51:06.271116 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 22:51:06.271123 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 22:51:06.271129 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 22:51:06.271136 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 22:51:06.271142 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 22:51:06.271148 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 22:51:06.271155 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 22:51:06.271161 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 22:51:06.271171 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:51:06.271178 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 22:51:06.271184 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 22:51:06.271211 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 22:51:06.271218 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 22:51:06.271225 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 22:51:06.271244 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jan 13 22:51:06.271251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 22:51:06.271258 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 22:51:06.271265 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 22:51:06.271271 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 22:51:06.271279 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 22:51:06.271286 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 22:51:06.271293 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 22:51:06.271299 systemd[1]: Reached target slices.target - Slice Units. Jan 13 22:51:06.271307 systemd[1]: Reached target swap.target - Swaps. Jan 13 22:51:06.271313 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 22:51:06.271320 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 22:51:06.271326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 22:51:06.271333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 22:51:06.271339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 22:51:06.271347 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 22:51:06.271354 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 22:51:06.271361 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 13 22:51:06.271367 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 22:51:06.271374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:51:06.271380 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 22:51:06.271387 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 22:51:06.271395 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 22:51:06.271402 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 22:51:06.271409 systemd[1]: Reached target machines.target - Containers. Jan 13 22:51:06.271416 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 22:51:06.271423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 22:51:06.271429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 22:51:06.271436 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 22:51:06.271443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 22:51:06.271449 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 22:51:06.271457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 22:51:06.271464 kernel: ACPI: bus type drm_connector registered Jan 13 22:51:06.271470 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 22:51:06.271476 kernel: fuse: init (API version 7.39) Jan 13 22:51:06.271482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 22:51:06.271489 kernel: loop: module loaded Jan 13 22:51:06.271495 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 22:51:06.271502 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 22:51:06.271509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 22:51:06.271516 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 22:51:06.271522 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 22:51:06.271529 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 22:51:06.271543 systemd-journald[1360]: Collecting audit messages is disabled. Jan 13 22:51:06.271558 systemd-journald[1360]: Journal started Jan 13 22:51:06.271571 systemd-journald[1360]: Runtime Journal (/run/log/journal/49c9ca2b09db4285b45ed77fa7daadfe) is 8.0M, max 639.9M, 631.9M free. Jan 13 22:51:04.386595 systemd[1]: Queued start job for default target multi-user.target. Jan 13 22:51:04.401105 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Jan 13 22:51:04.401355 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 22:51:06.299218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 22:51:06.335218 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 22:51:06.378386 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 22:51:06.410201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 22:51:06.444670 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 22:51:06.444700 systemd[1]: Stopped verity-setup.service. Jan 13 22:51:06.507211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:51:06.528365 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 22:51:06.538756 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 22:51:06.548438 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 22:51:06.558446 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 22:51:06.568436 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 22:51:06.578406 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 22:51:06.588410 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 22:51:06.598538 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 22:51:06.609593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:51:06.620855 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 22:51:06.621070 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 22:51:06.633070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 22:51:06.633418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 22:51:06.645260 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 22:51:06.645638 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 22:51:06.657109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 22:51:06.657498 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 22:51:06.669103 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 22:51:06.669500 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 22:51:06.681103 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 22:51:06.681497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 22:51:06.692129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 22:51:06.704064 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 22:51:06.717070 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 22:51:06.730059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 22:51:06.750942 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 22:51:06.771426 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 22:51:06.782072 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 22:51:06.792378 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 22:51:06.792408 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 22:51:06.803456 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 13 22:51:06.832560 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 22:51:06.844371 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 22:51:06.854449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 22:51:06.855743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 22:51:06.866196 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 22:51:06.877378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 22:51:06.891775 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 22:51:06.894644 systemd-journald[1360]: Time spent on flushing to /var/log/journal/49c9ca2b09db4285b45ed77fa7daadfe is 14.613ms for 1371 entries. Jan 13 22:51:06.894644 systemd-journald[1360]: System Journal (/var/log/journal/49c9ca2b09db4285b45ed77fa7daadfe) is 8.0M, max 195.6M, 187.6M free. Jan 13 22:51:06.935622 systemd-journald[1360]: Received client request to flush runtime journal. Jan 13 22:51:06.909361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 22:51:06.910025 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 22:51:06.927962 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 22:51:06.940933 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 22:51:06.957911 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 22:51:06.964222 kernel: loop0: detected capacity change from 0 to 8 Jan 13 22:51:06.982224 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 22:51:06.987332 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jan 13 22:51:06.987342 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jan 13 22:51:06.990673 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 22:51:06.991213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 22:51:07.012440 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 22:51:07.023383 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 22:51:07.040389 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 22:51:07.050209 kernel: loop1: detected capacity change from 0 to 210664 Jan 13 22:51:07.060382 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:51:07.070392 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:51:07.084098 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 22:51:07.111383 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 22:51:07.123204 kernel: loop2: detected capacity change from 0 to 142488 Jan 13 22:51:07.134866 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 22:51:07.145709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
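The journald entries above show the runtime journal in /run/log/journal being flushed to the persistent journal under /var/log/journal (14.613ms for 1371 entries), each capped by journald's default disk-usage limits (max 639.9M runtime, 195.6M persistent here). Those caps can be pinned explicitly in journald.conf; a minimal sketch with illustrative values, not the limits this host derived:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent    # always keep a journal under /var/log/journal
    RuntimeMaxUse=64M     # cap for /run/log/journal before the flush
    SystemMaxUse=200M     # cap for /var/log/journal after the flush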
Jan 13 22:51:07.146147 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 22:51:07.157893 udevadm[1396]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 22:51:07.164477 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 22:51:07.181419 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 22:51:07.202239 kernel: loop3: detected capacity change from 0 to 140768 Jan 13 22:51:07.207497 systemd-tmpfiles[1416]: ACLs are not supported, ignoring. Jan 13 22:51:07.207511 systemd-tmpfiles[1416]: ACLs are not supported, ignoring. Jan 13 22:51:07.211461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:51:07.225129 ldconfig[1386]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 22:51:07.226395 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 22:51:07.276180 kernel: loop4: detected capacity change from 0 to 8 Jan 13 22:51:07.296207 kernel: loop5: detected capacity change from 0 to 210664 Jan 13 22:51:07.317564 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 22:51:07.327174 kernel: loop6: detected capacity change from 0 to 142488 Jan 13 22:51:07.357174 kernel: loop7: detected capacity change from 0 to 140768 Jan 13 22:51:07.362338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 22:51:07.368086 (sd-merge)[1421]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 13 22:51:07.368411 (sd-merge)[1421]: Merged extensions into '/usr'. Jan 13 22:51:07.374814 systemd-udevd[1423]: Using default interface naming scheme 'v255'. Jan 13 22:51:07.376390 systemd[1]: Reloading requested from client PID 1391 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 22:51:07.376399 systemd[1]: Reloading... Jan 13 22:51:07.412180 zram_generator::config[1449]: No configuration found. Jan 13 22:51:07.430861 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 13 22:51:07.430924 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1533) Jan 13 22:51:07.430947 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 22:51:07.479069 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 22:51:07.479119 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 22:51:07.497319 kernel: ACPI: button: Power Button [PWRF] Jan 13 22:51:07.550854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:51:07.558182 kernel: IPMI message handler: version 39.2 Jan 13 22:51:07.558234 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 13 22:51:07.583678 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 13 22:51:07.583918 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 13 22:51:07.661112 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 13 22:51:07.661260 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jan 13 22:51:07.614784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. 
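The (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-packet extension images onto /usr, which is how Flatcar ships these components on an otherwise read-only /usr. The merged state can be inspected and re-applied with the same tool:

    $ systemd-sysext status     # which hierarchies (/usr, /opt) have extensions merged
    $ systemd-sysext refresh    # re-merge after adding or removing extension images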
Jan 13 22:51:07.641423 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jan 13 22:51:07.641574 systemd[1]: Reloading finished in 264 ms. Jan 13 22:51:07.694179 kernel: ipmi device interface Jan 13 22:51:07.694241 kernel: iTCO_vendor_support: vendor-support=0 Jan 13 22:51:07.734981 kernel: ipmi_si: IPMI System Interface driver Jan 13 22:51:07.735023 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 13 22:51:07.780068 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 13 22:51:07.780083 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 13 22:51:07.780092 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 13 22:51:07.849904 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 13 22:51:07.849993 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 13 22:51:07.850069 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 13 22:51:07.850083 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 13 22:51:07.850095 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jan 13 22:51:07.906619 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 13 22:51:07.906704 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jan 13 22:51:07.906786 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jan 13 22:51:07.974420 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 22:51:07.988216 kernel: intel_rapl_common: Found RAPL domain package Jan 13 22:51:07.988245 kernel: intel_rapl_common: Found RAPL domain core Jan 13 22:51:07.988257 kernel: intel_rapl_common: Found RAPL domain dram Jan 13 22:51:08.029437 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 22:51:08.056173 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 13 22:51:08.073174 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 13 22:51:08.088344 systemd[1]: Starting ensure-sysext.service... Jan 13 22:51:08.095815 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 22:51:08.120622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 22:51:08.130757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 22:51:08.131349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:51:08.131577 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 22:51:08.133246 systemd[1]: Reloading requested from client PID 1600 ('systemctl') (unit ensure-sysext.service)... Jan 13 22:51:08.133252 systemd[1]: Reloading... Jan 13 22:51:08.170175 zram_generator::config[1631]: No configuration found. Jan 13 22:51:08.188458 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 22:51:08.188662 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 22:51:08.189150 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
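The "Duplicate line for path ..., ignoring" warnings from systemd-tmpfiles are benign: when more than one tmpfiles.d fragment declares the same path, the first entry processed wins and later ones are skipped with exactly this message. tmpfiles.d lines follow the layout Type Path Mode User Group Age Argument; a hypothetical pair that would reproduce the /root warning (illustrative content, not the shipped files):

    # /usr/lib/tmpfiles.d/provision.conf
    d /root 0700 root root -
    # any later fragment declaring /root again is logged and ignored
    d /root 0755 root root -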
Jan 13 22:51:08.189323 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 13 22:51:08.189361 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 13 22:51:08.191079 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 22:51:08.191083 systemd-tmpfiles[1604]: Skipping /boot Jan 13 22:51:08.195194 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 22:51:08.195198 systemd-tmpfiles[1604]: Skipping /boot Jan 13 22:51:08.225277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:51:08.278366 systemd[1]: Reloading finished in 144 ms. Jan 13 22:51:08.310499 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 22:51:08.322451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:51:08.333418 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:51:08.355456 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 22:51:08.366742 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 22:51:08.373034 augenrules[1713]: No rules Jan 13 22:51:08.378006 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 22:51:08.389981 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 22:51:08.396986 lvm[1718]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 22:51:08.402499 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 22:51:08.413085 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 22:51:08.425184 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 22:51:08.434844 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 22:51:08.444514 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 22:51:08.456478 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 22:51:08.466534 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 22:51:08.477480 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 22:51:08.489763 systemd-networkd[1602]: lo: Link UP Jan 13 22:51:08.489766 systemd-networkd[1602]: lo: Gained carrier Jan 13 22:51:08.492419 systemd-networkd[1602]: bond0: netdev ready Jan 13 22:51:08.493431 systemd-networkd[1602]: Enumeration completed Jan 13 22:51:08.500411 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 22:51:08.501564 systemd-networkd[1602]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:16:48.network. Jan 13 22:51:08.511406 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 22:51:08.522799 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 22:51:08.530701 systemd-resolved[1720]: Positive Trust Anchors: Jan 13 22:51:08.530708 systemd-resolved[1720]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 22:51:08.530733 systemd-resolved[1720]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 22:51:08.532339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:51:08.532490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 22:51:08.533923 systemd-resolved[1720]: Using system hostname 'ci-4081.3.0-a-66cd838664'. Jan 13 22:51:08.544366 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 22:51:08.546520 lvm[1739]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 22:51:08.555882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 22:51:08.565876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 22:51:08.578102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 22:51:08.587385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 22:51:08.588469 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 22:51:08.600611 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 22:51:08.610338 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 22:51:08.610515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:51:08.612871 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 22:51:08.624639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 22:51:08.624710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 22:51:08.636581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 22:51:08.636652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 22:51:08.647617 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 22:51:08.647705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 22:51:08.662223 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 22:51:08.672291 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 13 22:51:08.693796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:51:08.694046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 13 22:51:08.695792 systemd-networkd[1602]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:16:49.network. Jan 13 22:51:08.696176 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jan 13 22:51:08.705437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 22:51:08.715872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 22:51:08.725833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 22:51:08.736834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 22:51:08.746331 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 22:51:08.746417 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 22:51:08.746472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 22:51:08.747083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 22:51:08.747156 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 22:51:08.759630 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 22:51:08.759704 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 22:51:08.769505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 22:51:08.769588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 22:51:08.780633 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 22:51:08.780730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 22:51:08.791860 systemd[1]: Finished ensure-sysext.service. Jan 13 22:51:08.809743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 22:51:08.809842 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 22:51:08.825530 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 22:51:08.862247 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 13 22:51:08.888563 systemd-networkd[1602]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 13 22:51:08.889226 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jan 13 22:51:08.890372 systemd-networkd[1602]: enp1s0f0np0: Link UP Jan 13 22:51:08.890777 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 22:51:08.890831 systemd-networkd[1602]: enp1s0f0np0: Gained carrier Jan 13 22:51:08.909465 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 13 22:51:08.919465 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 22:51:08.921526 systemd-networkd[1602]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:16:48.network. Jan 13 22:51:08.921670 systemd-networkd[1602]: enp1s0f1np1: Link UP Jan 13 22:51:08.921817 systemd-networkd[1602]: enp1s0f1np1: Gained carrier Jan 13 22:51:08.930424 systemd[1]: Reached target network.target - Network. 
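The networkd sequence assembles bond0 from the two mlx5 ports using per-port files named after their MAC addresses plus a bond-level 05-bond0.network. The file contents are not in the log; a minimal sketch of such a setup, assuming an 802.3ad bond (the kernel's "No 802.3ad response from the link partner" warning implies LACP mode) and an assumed .netdev file name:

    # /etc/systemd/network/05-bond0.netdev (name assumed; only .network files appear in the log)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # /etc/systemd/network/10-1c:34:da:5c:16:48.network
    [Match]
    MACAddress=1c:34:da:5c:16:48

    [Network]
    Bond=bond0

The later "enp1s0f1np1: Reconfiguring with .../10-1c:34:da:5c:16:48.network" entry is consistent with bonding behavior: once enslaved, the second port inherits the bond's MAC (that of the first slave), so it re-matches the first port's file.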
Jan 13 22:51:08.932463 systemd-networkd[1602]: bond0: Link UP Jan 13 22:51:08.932624 systemd-networkd[1602]: bond0: Gained carrier Jan 13 22:51:08.932754 systemd-timesyncd[1759]: Network configuration changed, trying to establish connection. Jan 13 22:51:08.939288 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:51:08.950259 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 22:51:08.960309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 22:51:08.971267 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 22:51:08.982267 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 22:51:08.993242 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 22:51:08.993258 systemd[1]: Reached target paths.target - Path Units. Jan 13 22:51:09.009257 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 22:51:09.013201 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 13 22:51:09.013222 kernel: bond0: active interface up! Jan 13 22:51:09.037319 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 22:51:09.047317 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 22:51:09.058248 systemd[1]: Reached target timers.target - Timer Units. Jan 13 22:51:09.066446 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 22:51:09.076916 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 22:51:09.086527 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 22:51:09.096547 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 22:51:09.106309 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 22:51:09.122416 systemd[1]: Reached target basic.target - Basic System. Jan 13 22:51:09.133219 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 13 22:51:09.141268 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 22:51:09.141284 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 22:51:09.147254 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 22:51:09.156918 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 22:51:09.166796 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 22:51:09.175792 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 22:51:09.179133 coreos-metadata[1764]: Jan 13 22:51:09.179 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:51:09.185967 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 22:51:09.186610 dbus-daemon[1765]: [system] SELinux support is enabled Jan 13 22:51:09.187869 jq[1768]: false Jan 13 22:51:09.196261 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 22:51:09.196845 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
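systemd warned twice during the reloads that docker.socket line 6 still references the legacy /var/run/docker.sock and rewrote it to /run/docker.sock on the fly. A drop-in override would make the fix permanent; note that list-valued socket settings must be cleared with an empty assignment before being reset:

    # /etc/systemd/system/docker.socket.d/override.conf (e.g. created via systemctl edit docker.socket)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock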
Jan 13 22:51:09.204001 extend-filesystems[1770]: Found loop4 Jan 13 22:51:09.204001 extend-filesystems[1770]: Found loop5 Jan 13 22:51:09.252113 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Jan 13 22:51:09.252136 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1545) Jan 13 22:51:09.252152 extend-filesystems[1770]: Found loop6 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found loop7 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sda Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb1 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb2 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb3 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found usr Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb4 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb6 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb7 Jan 13 22:51:09.252152 extend-filesystems[1770]: Found sdb9 Jan 13 22:51:09.252152 extend-filesystems[1770]: Checking size of /dev/sdb9 Jan 13 22:51:09.252152 extend-filesystems[1770]: Resized partition /dev/sdb9 Jan 13 22:51:09.207913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 22:51:09.424343 extend-filesystems[1778]: resize2fs 1.47.1 (20-May-2024) Jan 13 22:51:09.252952 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 22:51:09.266035 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 22:51:09.311433 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 22:51:09.313106 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 13 22:51:09.326579 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 22:51:09.434629 update_engine[1795]: I20250113 22:51:09.363984 1795 main.cc:92] Flatcar Update Engine starting Jan 13 22:51:09.434629 update_engine[1795]: I20250113 22:51:09.364706 1795 update_check_scheduler.cc:74] Next update check in 11m36s Jan 13 22:51:09.326948 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 22:51:09.434797 jq[1796]: true Jan 13 22:51:09.348859 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 22:51:09.349282 systemd-logind[1790]: Watching system buttons on /dev/input/event3 (Power Button) Jan 13 22:51:09.349293 systemd-logind[1790]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 22:51:09.349302 systemd-logind[1790]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 13 22:51:09.349413 systemd-logind[1790]: New seat seat0. Jan 13 22:51:09.371591 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 22:51:09.393628 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 22:51:09.422317 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 22:51:09.422419 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 22:51:09.422571 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 22:51:09.422662 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 22:51:09.424668 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
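update_engine starting with "Next update check in 11m36s" is Flatcar's A/B update client scheduling its first poll. Assuming the stock Flatcar tooling is present, its state can be queried from a shell; the CURRENT_OP values it reports use the same UPDATE_STATUS_* strings locksmithd logs below:

    $ update_engine_client -status    # e.g. CURRENT_OP=UPDATE_STATUS_IDLE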
Jan 13 22:51:09.424758 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 22:51:09.462043 jq[1799]: true Jan 13 22:51:09.462797 (ntainerd)[1800]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 22:51:09.467029 dbus-daemon[1765]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 22:51:09.467894 tar[1798]: linux-amd64/helm Jan 13 22:51:09.475626 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 13 22:51:09.475723 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 13 22:51:09.477854 systemd[1]: Started update-engine.service - Update Engine. Jan 13 22:51:09.488771 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 22:51:09.488873 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 22:51:09.501261 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 22:51:09.501346 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 22:51:09.514860 sshd_keygen[1794]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 22:51:09.524318 bash[1827]: Updated "/home/core/.ssh/authorized_keys" Jan 13 22:51:09.524339 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 22:51:09.536722 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 22:51:09.547495 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 22:51:09.552116 locksmithd[1829]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 22:51:09.572450 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 22:51:09.582124 systemd[1]: Starting sshkeys.service... Jan 13 22:51:09.590516 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 22:51:09.606252 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 22:51:09.628407 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 22:51:09.638555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 22:51:09.640815 containerd[1800]: time="2025-01-13T22:51:09.640751230Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 22:51:09.652633 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 22:51:09.655578 containerd[1800]: time="2025-01-13T22:51:09.655524501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656387 containerd[1800]: time="2025-01-13T22:51:09.656340205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656387 containerd[1800]: time="2025-01-13T22:51:09.656356730Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 22:51:09.656387 containerd[1800]: time="2025-01-13T22:51:09.656369269Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 22:51:09.656505 containerd[1800]: time="2025-01-13T22:51:09.656465174Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 22:51:09.656505 containerd[1800]: time="2025-01-13T22:51:09.656476583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656551 containerd[1800]: time="2025-01-13T22:51:09.656509837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656551 containerd[1800]: time="2025-01-13T22:51:09.656518441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656646 containerd[1800]: time="2025-01-13T22:51:09.656607612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656646 containerd[1800]: time="2025-01-13T22:51:09.656616986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656646 containerd[1800]: time="2025-01-13T22:51:09.656624228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656646 containerd[1800]: time="2025-01-13T22:51:09.656629999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656717 containerd[1800]: time="2025-01-13T22:51:09.656670186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656904 containerd[1800]: time="2025-01-13T22:51:09.656871872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656939 containerd[1800]: time="2025-01-13T22:51:09.656930857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 22:51:09.656956 containerd[1800]: time="2025-01-13T22:51:09.656940119Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 22:51:09.656989 containerd[1800]: time="2025-01-13T22:51:09.656982706Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 13 22:51:09.657016 containerd[1800]: time="2025-01-13T22:51:09.657010102Z" level=info msg="metadata content store policy set" policy=shared Jan 13 22:51:09.668482 containerd[1800]: time="2025-01-13T22:51:09.668438129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 22:51:09.668482 containerd[1800]: time="2025-01-13T22:51:09.668464870Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 22:51:09.668482 containerd[1800]: time="2025-01-13T22:51:09.668475704Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 22:51:09.668553 containerd[1800]: time="2025-01-13T22:51:09.668484857Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 22:51:09.668553 containerd[1800]: time="2025-01-13T22:51:09.668492667Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 22:51:09.668581 containerd[1800]: time="2025-01-13T22:51:09.668567341Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 22:51:09.668758 containerd[1800]: time="2025-01-13T22:51:09.668716233Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 22:51:09.668836 containerd[1800]: time="2025-01-13T22:51:09.668788821Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 22:51:09.668836 containerd[1800]: time="2025-01-13T22:51:09.668799588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 22:51:09.668836 containerd[1800]: time="2025-01-13T22:51:09.668807633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 22:51:09.668836 containerd[1800]: time="2025-01-13T22:51:09.668815905Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 22:51:09.668836 containerd[1800]: time="2025-01-13T22:51:09.668823343Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 22:51:09.668836 containerd[1800]: time="2025-01-13T22:51:09.668830467Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 22:51:09.668836 containerd[1800]: time="2025-01-13T22:51:09.668838041Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668845791Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668852902Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668860238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668866323Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668879745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668887753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668896210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668904330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668911267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668919111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668925395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668932203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.668942 containerd[1800]: time="2025-01-13T22:51:09.668939038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.668948951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.668956001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.668962548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.668969273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.668977456Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.668989350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.668995929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.669002173Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.669030154Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.669039892Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.669046891Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.669053832Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 22:51:09.669111 containerd[1800]: time="2025-01-13T22:51:09.669059130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669290 containerd[1800]: time="2025-01-13T22:51:09.669066237Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 22:51:09.669290 containerd[1800]: time="2025-01-13T22:51:09.669074249Z" level=info msg="NRI interface is disabled by configuration." Jan 13 22:51:09.669290 containerd[1800]: time="2025-01-13T22:51:09.669079996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 22:51:09.669495 containerd[1800]: time="2025-01-13T22:51:09.669426774Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 22:51:09.669597 containerd[1800]: time="2025-01-13T22:51:09.669504788Z" level=info msg="Connect containerd service" Jan 13 22:51:09.669624 containerd[1800]: time="2025-01-13T22:51:09.669616172Z" level=info msg="using legacy CRI server" Jan 13 22:51:09.669640 containerd[1800]: time="2025-01-13T22:51:09.669624479Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 22:51:09.669690 containerd[1800]: time="2025-01-13T22:51:09.669682717Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 22:51:09.669981 containerd[1800]: time="2025-01-13T22:51:09.669969941Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 22:51:09.670105 containerd[1800]: time="2025-01-13T22:51:09.670087066Z" level=info msg="Start subscribing containerd event" Jan 13 22:51:09.670123 containerd[1800]: time="2025-01-13T22:51:09.670115518Z" level=info msg="Start recovering state" Jan 13 22:51:09.670137 containerd[1800]: time="2025-01-13T22:51:09.670130856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 22:51:09.670162 containerd[1800]: time="2025-01-13T22:51:09.670155786Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 22:51:09.670187 containerd[1800]: time="2025-01-13T22:51:09.670157929Z" level=info msg="Start event monitor" Jan 13 22:51:09.670187 containerd[1800]: time="2025-01-13T22:51:09.670177878Z" level=info msg="Start snapshots syncer" Jan 13 22:51:09.670218 containerd[1800]: time="2025-01-13T22:51:09.670187988Z" level=info msg="Start cni network conf syncer for default" Jan 13 22:51:09.670218 containerd[1800]: time="2025-01-13T22:51:09.670195402Z" level=info msg="Start streaming server" Jan 13 22:51:09.670245 containerd[1800]: time="2025-01-13T22:51:09.670227564Z" level=info msg="containerd successfully booted in 0.029949s" Jan 13 22:51:09.674472 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 22:51:09.684965 coreos-metadata[1870]: Jan 13 22:51:09.684 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 13 22:51:09.686975 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 22:51:09.697053 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 13 22:51:09.706411 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 22:51:09.714676 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 22:51:09.744087 tar[1798]: linux-amd64/LICENSE Jan 13 22:51:09.744130 tar[1798]: linux-amd64/README.md Jan 13 22:51:09.747173 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jan 13 22:51:09.771852 extend-filesystems[1778]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jan 13 22:51:09.771852 extend-filesystems[1778]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 13 22:51:09.771852 extend-filesystems[1778]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. 
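The long "Start cri plugin with config {...}" dump is containerd 1.7.21 echoing its effective CRI configuration: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and registry.k8s.io/pause:3.8 as the sandbox image; the "failed to load cni during init" error only reflects that /etc/cni/net.d is still empty at this point. The same settings expressed as a config.toml fragment (a sketch of the corresponding keys, not the file shipped on this host):

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true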
Jan 13 22:51:09.812257 extend-filesystems[1770]: Resized filesystem in /dev/sdb9 Jan 13 22:51:09.772265 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 22:51:09.772356 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 22:51:09.820485 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 22:51:10.129387 systemd-networkd[1602]: bond0: Gained IPv6LL Jan 13 22:51:10.706974 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 22:51:10.719573 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 22:51:10.738302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:10.748829 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 22:51:10.766900 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 22:51:11.402912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:11.415686 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 22:51:11.624467 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 13 22:51:11.624622 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jan 13 22:51:11.882917 kubelet[1902]: E0113 22:51:11.882811 1902 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 22:51:11.884436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 22:51:11.884518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 22:51:12.405670 systemd-timesyncd[1759]: Contacted time server 64.186.96.3:123 (0.flatcar.pool.ntp.org). Jan 13 22:51:12.405855 systemd-timesyncd[1759]: Initial clock synchronization to Mon 2025-01-13 22:51:12.710290 UTC. Jan 13 22:51:12.416649 coreos-metadata[1764]: Jan 13 22:51:12.416 INFO Fetch successful Jan 13 22:51:12.435135 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 22:51:12.450407 systemd[1]: Started sshd@0-147.28.180.253:22-139.178.89.65:57726.service - OpenSSH per-connection server daemon (139.178.89.65:57726). Jan 13 22:51:12.470170 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 22:51:12.481751 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 13 22:51:12.492343 sshd[1924]: Accepted publickey for core from 139.178.89.65 port 57726 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:12.493995 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:12.499743 systemd-logind[1790]: New session 1 of user core. Jan 13 22:51:12.500561 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 22:51:12.511200 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 22:51:12.539227 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 22:51:12.566531 systemd[1]: Starting user@500.service - User Manager for UID 500... 
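extend-filesystems grew the root filesystem on /dev/sdb9 from 553472 to 116605649 4k blocks while it was mounted; ext4 supports growing online. The equivalent manual step, assuming the partition itself has already been enlarged (as the earlier "Resized partition /dev/sdb9" entry indicates):

    # grow a mounted ext4 filesystem to fill its (already enlarged) partition
    resize2fs /dev/sdb9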
Jan 13 22:51:12.578500 (systemd)[1934]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 22:51:12.663111 systemd[1934]: Queued start job for default target default.target. Jan 13 22:51:12.673846 systemd[1934]: Created slice app.slice - User Application Slice. Jan 13 22:51:12.673861 systemd[1934]: Reached target paths.target - Paths. Jan 13 22:51:12.673870 systemd[1934]: Reached target timers.target - Timers. Jan 13 22:51:12.674488 systemd[1934]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 22:51:12.679902 systemd[1934]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 22:51:12.679930 systemd[1934]: Reached target sockets.target - Sockets. Jan 13 22:51:12.679939 systemd[1934]: Reached target basic.target - Basic System. Jan 13 22:51:12.679959 systemd[1934]: Reached target default.target - Main User Target. Jan 13 22:51:12.679975 systemd[1934]: Startup finished in 94ms. Jan 13 22:51:12.680079 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 22:51:12.692214 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 22:51:12.761466 systemd[1]: Started sshd@1-147.28.180.253:22-139.178.89.65:57732.service - OpenSSH per-connection server daemon (139.178.89.65:57732). Jan 13 22:51:12.799558 sshd[1945]: Accepted publickey for core from 139.178.89.65 port 57732 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:12.800189 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:12.802677 systemd-logind[1790]: New session 2 of user core. Jan 13 22:51:12.803519 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 22:51:12.809897 coreos-metadata[1870]: Jan 13 22:51:12.809 INFO Fetch successful Jan 13 22:51:12.841611 unknown[1870]: wrote ssh authorized keys file for user: core Jan 13 22:51:12.859547 sshd[1945]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:12.861596 systemd[1]: sshd@1-147.28.180.253:22-139.178.89.65:57732.service: Deactivated successfully. Jan 13 22:51:12.862342 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 13 22:51:12.866919 update-ssh-keys[1948]: Updated "/home/core/.ssh/authorized_keys" Jan 13 22:51:12.873708 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 22:51:12.885681 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 22:51:12.885965 systemd[1]: Finished sshkeys.service. Jan 13 22:51:12.893787 systemd-logind[1790]: Session 2 logged out. Waiting for processes to exit. Jan 13 22:51:12.895269 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 22:51:12.917533 systemd[1]: Started sshd@2-147.28.180.253:22-139.178.89.65:57736.service - OpenSSH per-connection server daemon (139.178.89.65:57736). Jan 13 22:51:12.928512 systemd[1]: Startup finished in 2.659s (kernel) + 22.877s (initrd) + 9.210s (userspace) = 34.747s. Jan 13 22:51:12.929530 systemd-logind[1790]: Removed session 2. Jan 13 22:51:12.946097 sshd[1957]: Accepted publickey for core from 139.178.89.65 port 57736 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:12.946719 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:12.949792 systemd-logind[1790]: New session 3 of user core. Jan 13 22:51:12.959473 systemd[1]: Started session-3.scope - Session 3 of User core. 
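Each "Accepted publickey for core ... RSA SHA256:GDDD..." line logs the SHA256 fingerprint of the client key that matched an entry in /home/core/.ssh/authorized_keys (the file update-ssh-keys rewrote above). The fingerprints of the authorized keys can be listed to see which entry it was:

    $ ssh-keygen -lf /home/core/.ssh/authorized_keys    # key size, SHA256 fingerprint and comment per entry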
Jan 13 22:51:12.968801 login[1878]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 22:51:12.969173 login[1879]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 22:51:12.971192 systemd-logind[1790]: New session 5 of user core. Jan 13 22:51:12.972019 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 22:51:12.973363 systemd-logind[1790]: New session 4 of user core. Jan 13 22:51:12.974086 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 22:51:13.006866 sshd[1957]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:13.008356 systemd[1]: sshd@2-147.28.180.253:22-139.178.89.65:57736.service: Deactivated successfully. Jan 13 22:51:13.009257 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 22:51:13.010015 systemd-logind[1790]: Session 3 logged out. Waiting for processes to exit. Jan 13 22:51:13.010806 systemd-logind[1790]: Removed session 3. Jan 13 22:51:21.896326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 22:51:21.914441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:22.148736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:22.151330 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 22:51:22.176992 kubelet[1999]: E0113 22:51:22.176969 1999 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 22:51:22.179050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 22:51:22.179132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 22:51:23.240648 systemd[1]: Started sshd@3-147.28.180.253:22-139.178.89.65:34668.service - OpenSSH per-connection server daemon (139.178.89.65:34668). Jan 13 22:51:23.273039 sshd[2019]: Accepted publickey for core from 139.178.89.65 port 34668 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:23.273879 sshd[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:23.276971 systemd-logind[1790]: New session 6 of user core. Jan 13 22:51:23.287435 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 22:51:23.341406 sshd[2019]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:23.349772 systemd[1]: sshd@3-147.28.180.253:22-139.178.89.65:34668.service: Deactivated successfully. Jan 13 22:51:23.350494 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 22:51:23.351163 systemd-logind[1790]: Session 6 logged out. Waiting for processes to exit. Jan 13 22:51:23.351974 systemd[1]: Started sshd@4-147.28.180.253:22-139.178.89.65:34676.service - OpenSSH per-connection server daemon (139.178.89.65:34676). Jan 13 22:51:23.352416 systemd-logind[1790]: Removed session 6. Jan 13 22:51:23.383768 sshd[2026]: Accepted publickey for core from 139.178.89.65 port 34676 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:23.385049 sshd[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:23.389843 systemd-logind[1790]: New session 7 of user core. 
Jan 13 22:51:23.403814 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 22:51:23.461955 sshd[2026]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:23.476867 systemd[1]: sshd@4-147.28.180.253:22-139.178.89.65:34676.service: Deactivated successfully. Jan 13 22:51:23.477653 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 22:51:23.478421 systemd-logind[1790]: Session 7 logged out. Waiting for processes to exit. Jan 13 22:51:23.479128 systemd[1]: Started sshd@5-147.28.180.253:22-139.178.89.65:34678.service - OpenSSH per-connection server daemon (139.178.89.65:34678). Jan 13 22:51:23.479776 systemd-logind[1790]: Removed session 7. Jan 13 22:51:23.510114 sshd[2033]: Accepted publickey for core from 139.178.89.65 port 34678 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:23.511266 sshd[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:23.514372 systemd-logind[1790]: New session 8 of user core. Jan 13 22:51:23.523433 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 22:51:23.578311 sshd[2033]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:23.589898 systemd[1]: sshd@5-147.28.180.253:22-139.178.89.65:34678.service: Deactivated successfully. Jan 13 22:51:23.590706 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 22:51:23.591493 systemd-logind[1790]: Session 8 logged out. Waiting for processes to exit. Jan 13 22:51:23.592227 systemd[1]: Started sshd@6-147.28.180.253:22-139.178.89.65:34688.service - OpenSSH per-connection server daemon (139.178.89.65:34688). Jan 13 22:51:23.592841 systemd-logind[1790]: Removed session 8. Jan 13 22:51:23.625748 sshd[2040]: Accepted publickey for core from 139.178.89.65 port 34688 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:23.627431 sshd[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:23.634997 systemd-logind[1790]: New session 9 of user core. Jan 13 22:51:23.647649 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 22:51:23.716086 sudo[2043]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 22:51:23.716238 sudo[2043]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:51:23.732901 sudo[2043]: pam_unix(sudo:session): session closed for user root Jan 13 22:51:23.733894 sshd[2040]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:23.754476 systemd[1]: sshd@6-147.28.180.253:22-139.178.89.65:34688.service: Deactivated successfully. Jan 13 22:51:23.755742 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 22:51:23.756916 systemd-logind[1790]: Session 9 logged out. Waiting for processes to exit. Jan 13 22:51:23.758049 systemd[1]: Started sshd@7-147.28.180.253:22-139.178.89.65:34694.service - OpenSSH per-connection server daemon (139.178.89.65:34694). Jan 13 22:51:23.758986 systemd-logind[1790]: Removed session 9. Jan 13 22:51:23.790681 sshd[2048]: Accepted publickey for core from 139.178.89.65 port 34694 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:23.791572 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:23.794928 systemd-logind[1790]: New session 10 of user core. Jan 13 22:51:23.804415 systemd[1]: Started session-10.scope - Session 10 of User core. 
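Every privileged action by the core user, the "setenforce 1" above and the audit-rules edits that follow, is recorded by sudo with PWD=, USER= and COMMAND= fields, so the whole provisioning sequence can be reconstructed from the journal alone. An illustrative filter (a hypothetical helper reading "journalctl -o cat" output on stdin, not something that ran on this host):

    import re, sys

    # Extract the invoking user, target user, and command from sudo journal
    # lines such as:
    #   sudo[2051]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
    pattern = re.compile(r"sudo\[\d+\]:\s+(\S+) :.*?USER=(\S+) ; COMMAND=(.+)$")

    for line in sys.stdin:
        m = pattern.search(line)
        if m:
            invoking, target, command = m.groups()
            print(f"{invoking} ran as {target}: {command}")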
Jan 13 22:51:23.869441 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 22:51:23.870318 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:51:23.879104 sudo[2052]: pam_unix(sudo:session): session closed for user root Jan 13 22:51:23.892996 sudo[2051]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 22:51:23.893827 sudo[2051]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:51:23.933949 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 22:51:23.937312 auditctl[2055]: No rules Jan 13 22:51:23.938205 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 22:51:23.938681 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 22:51:23.944478 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 22:51:23.991439 augenrules[2073]: No rules Jan 13 22:51:23.992322 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 22:51:23.993584 sudo[2051]: pam_unix(sudo:session): session closed for user root Jan 13 22:51:23.995512 sshd[2048]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:24.022995 systemd[1]: sshd@7-147.28.180.253:22-139.178.89.65:34694.service: Deactivated successfully. Jan 13 22:51:24.026581 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 22:51:24.030021 systemd-logind[1790]: Session 10 logged out. Waiting for processes to exit. Jan 13 22:51:24.049136 systemd[1]: Started sshd@8-147.28.180.253:22-139.178.89.65:34706.service - OpenSSH per-connection server daemon (139.178.89.65:34706). Jan 13 22:51:24.051719 systemd-logind[1790]: Removed session 10. Jan 13 22:51:24.106239 sshd[2081]: Accepted publickey for core from 139.178.89.65 port 34706 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:51:24.107972 sshd[2081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:51:24.114149 systemd-logind[1790]: New session 11 of user core. Jan 13 22:51:24.132874 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 22:51:24.202619 sudo[2084]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 22:51:24.203518 sudo[2084]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 22:51:24.592576 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 22:51:24.592630 (dockerd)[2108]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 22:51:24.924638 dockerd[2108]: time="2025-01-13T22:51:24.924569208Z" level=info msg="Starting up" Jan 13 22:51:24.992250 dockerd[2108]: time="2025-01-13T22:51:24.992230624Z" level=info msg="Loading containers: start." Jan 13 22:51:25.063231 kernel: Initializing XFRM netlink socket Jan 13 22:51:25.176551 systemd-networkd[1602]: docker0: Link UP Jan 13 22:51:25.188151 dockerd[2108]: time="2025-01-13T22:51:25.188132490Z" level=info msg="Loading containers: done." 
Jan 13 22:51:25.195414 dockerd[2108]: time="2025-01-13T22:51:25.195367751Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 22:51:25.195481 dockerd[2108]: time="2025-01-13T22:51:25.195418488Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 22:51:25.195481 dockerd[2108]: time="2025-01-13T22:51:25.195469525Z" level=info msg="Daemon has completed initialization" Jan 13 22:51:25.210035 dockerd[2108]: time="2025-01-13T22:51:25.209980916Z" level=info msg="API listen on /run/docker.sock" Jan 13 22:51:25.210080 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 22:51:26.291267 containerd[1800]: time="2025-01-13T22:51:26.291244029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 22:51:27.047017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452697588.mount: Deactivated successfully. Jan 13 22:51:27.909122 containerd[1800]: time="2025-01-13T22:51:27.909066418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:27.909340 containerd[1800]: time="2025-01-13T22:51:27.909247709Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Jan 13 22:51:27.909671 containerd[1800]: time="2025-01-13T22:51:27.909633856Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:27.911157 containerd[1800]: time="2025-01-13T22:51:27.911122169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:27.911741 containerd[1800]: time="2025-01-13T22:51:27.911696920Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.620430545s" Jan 13 22:51:27.911741 containerd[1800]: time="2025-01-13T22:51:27.911715435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 22:51:27.922738 containerd[1800]: time="2025-01-13T22:51:27.922690621Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 22:51:29.099693 containerd[1800]: time="2025-01-13T22:51:29.099667468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:29.099945 containerd[1800]: time="2025-01-13T22:51:29.099922726Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Jan 13 22:51:29.101023 containerd[1800]: time="2025-01-13T22:51:29.100987418Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:29.102687 containerd[1800]: time="2025-01-13T22:51:29.102645910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:29.103804 containerd[1800]: time="2025-01-13T22:51:29.103759468Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.18104758s" Jan 13 22:51:29.103804 containerd[1800]: time="2025-01-13T22:51:29.103776827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 22:51:29.115002 containerd[1800]: time="2025-01-13T22:51:29.114983495Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 22:51:29.976533 containerd[1800]: time="2025-01-13T22:51:29.976507498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:29.976753 containerd[1800]: time="2025-01-13T22:51:29.976734159Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Jan 13 22:51:29.977071 containerd[1800]: time="2025-01-13T22:51:29.977061367Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:29.978639 containerd[1800]: time="2025-01-13T22:51:29.978596340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:29.979227 containerd[1800]: time="2025-01-13T22:51:29.979208528Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 864.204738ms" Jan 13 22:51:29.979254 containerd[1800]: time="2025-01-13T22:51:29.979224683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 22:51:29.990263 containerd[1800]: time="2025-01-13T22:51:29.990233461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 22:51:30.805310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556627356.mount: Deactivated successfully. 
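The containerd entries above print both an image size and a wall-clock pull time, which is enough to estimate effective registry throughput. For the kube-apiserver pull (numbers copied from the log; the logged size is the image reference size and may differ from bytes actually transferred, so treat the result as a rough estimate):

    # Size and duration from the "Pulled image registry.k8s.io/kube-apiserver"
    # entry above; logged size may not equal bytes on the wire.
    size_bytes = 32_672_442        # size "32672442"
    duration_s = 1.620430545       # "in 1.620430545s"

    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # ~20.2 MB/s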
Jan 13 22:51:30.974683 containerd[1800]: time="2025-01-13T22:51:30.974626104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:30.974889 containerd[1800]: time="2025-01-13T22:51:30.974830943Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 22:51:30.975207 containerd[1800]: time="2025-01-13T22:51:30.975192900Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:30.976115 containerd[1800]: time="2025-01-13T22:51:30.976075386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:30.976519 containerd[1800]: time="2025-01-13T22:51:30.976477777Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 986.213953ms" Jan 13 22:51:30.976519 containerd[1800]: time="2025-01-13T22:51:30.976493945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 22:51:30.987603 containerd[1800]: time="2025-01-13T22:51:30.987555501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 22:51:31.546940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138905761.mount: Deactivated successfully. 
Jan 13 22:51:32.049433 containerd[1800]: time="2025-01-13T22:51:32.049382047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:32.049644 containerd[1800]: time="2025-01-13T22:51:32.049595641Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 22:51:32.050006 containerd[1800]: time="2025-01-13T22:51:32.049969167Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:32.069629 containerd[1800]: time="2025-01-13T22:51:32.069585621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:32.070194 containerd[1800]: time="2025-01-13T22:51:32.070150718Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.082575276s" Jan 13 22:51:32.070194 containerd[1800]: time="2025-01-13T22:51:32.070167653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 22:51:32.081167 containerd[1800]: time="2025-01-13T22:51:32.081148404Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 22:51:32.391920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 22:51:32.403494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:32.618149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:32.620477 (kubelet)[2476]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 22:51:32.642985 kubelet[2476]: E0113 22:51:32.642871 2476 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 22:51:32.644013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 22:51:32.644102 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 22:51:32.671706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820913774.mount: Deactivated successfully. 
Jan 13 22:51:32.673302 containerd[1800]: time="2025-01-13T22:51:32.673244414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:32.673530 containerd[1800]: time="2025-01-13T22:51:32.673492791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 22:51:32.673934 containerd[1800]: time="2025-01-13T22:51:32.673877726Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:32.675333 containerd[1800]: time="2025-01-13T22:51:32.675290012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:32.676226 containerd[1800]: time="2025-01-13T22:51:32.676187371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 595.018498ms" Jan 13 22:51:32.676279 containerd[1800]: time="2025-01-13T22:51:32.676230238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 22:51:32.688702 containerd[1800]: time="2025-01-13T22:51:32.688653539Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 22:51:33.203566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386863184.mount: Deactivated successfully. Jan 13 22:51:34.323887 containerd[1800]: time="2025-01-13T22:51:34.323861636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:34.324097 containerd[1800]: time="2025-01-13T22:51:34.324072035Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 13 22:51:34.324499 containerd[1800]: time="2025-01-13T22:51:34.324486706Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:34.326425 containerd[1800]: time="2025-01-13T22:51:34.326382174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:51:34.326952 containerd[1800]: time="2025-01-13T22:51:34.326935000Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.638261077s" Jan 13 22:51:34.326991 containerd[1800]: time="2025-01-13T22:51:34.326952966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 22:51:36.390317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
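With the etcd pull above, the node has prefetched the full control-plane image set: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd. A quick tally of the sizes exactly as containerd logged them:

    # Image sizes copied from the "Pulled image ... size" entries above.
    sizes = {
        "kube-apiserver:v1.30.8":          32_672_442,
        "kube-controller-manager:v1.30.8": 31_051_521,
        "kube-scheduler:v1.30.8":          19_228_165,
        "kube-proxy:v1.30.8":              29_056_489,
        "coredns:v1.11.1":                 18_182_961,
        "pause:3.9":                          321_520,
        "etcd:3.5.12-0":                   57_236_178,
    }
    print(f"{sum(sizes.values()) / 1e6:.1f} MB")  # ~187.7 MB prefetched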
Jan 13 22:51:36.403551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:36.415941 systemd[1]: Reloading requested from client PID 2705 ('systemctl') (unit session-11.scope)... Jan 13 22:51:36.415949 systemd[1]: Reloading... Jan 13 22:51:36.459242 zram_generator::config[2744]: No configuration found. Jan 13 22:51:36.529037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 22:51:36.589084 systemd[1]: Reloading finished in 172 ms. Jan 13 22:51:36.624821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:36.626369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:36.627455 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 22:51:36.627550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:36.628595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:36.838923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:36.841350 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 22:51:36.864425 kubelet[2814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:51:36.864425 kubelet[2814]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 22:51:36.864425 kubelet[2814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:51:36.864645 kubelet[2814]: I0113 22:51:36.864443 2814 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 22:51:37.181025 kubelet[2814]: I0113 22:51:37.180975 2814 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 22:51:37.181025 kubelet[2814]: I0113 22:51:37.180990 2814 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 22:51:37.181132 kubelet[2814]: I0113 22:51:37.181117 2814 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 22:51:37.198130 kubelet[2814]: I0113 22:51:37.198090 2814 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 22:51:37.199056 kubelet[2814]: E0113 22:51:37.199018 2814 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.180.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.212819 kubelet[2814]: I0113 22:51:37.212768 2814 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 22:51:37.214604 kubelet[2814]: I0113 22:51:37.214561 2814 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 22:51:37.214699 kubelet[2814]: I0113 22:51:37.214577 2814 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-66cd838664","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 22:51:37.215020 kubelet[2814]: I0113 22:51:37.214984 2814 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 22:51:37.215020 kubelet[2814]: I0113 22:51:37.214993 2814 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 22:51:37.215069 kubelet[2814]: I0113 22:51:37.215049 2814 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:51:37.215748 kubelet[2814]: I0113 22:51:37.215712 2814 kubelet.go:400] "Attempting to sync node with API server" Jan 13 22:51:37.215748 kubelet[2814]: I0113 22:51:37.215722 2814 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 22:51:37.215748 kubelet[2814]: I0113 22:51:37.215733 2814 kubelet.go:312] "Adding apiserver pod source" Jan 13 22:51:37.215748 kubelet[2814]: I0113 22:51:37.215740 2814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 22:51:37.218503 kubelet[2814]: W0113 22:51:37.218437 2814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-66cd838664&limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.218539 kubelet[2814]: E0113 22:51:37.218516 2814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-66cd838664&limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.218734 kubelet[2814]: W0113 22:51:37.218703 2814 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.253:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.218777 kubelet[2814]: E0113 22:51:37.218743 2814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.253:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.219757 kubelet[2814]: I0113 22:51:37.219719 2814 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 22:51:37.221296 kubelet[2814]: I0113 22:51:37.221259 2814 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 22:51:37.221296 kubelet[2814]: W0113 22:51:37.221289 2814 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 22:51:37.221628 kubelet[2814]: I0113 22:51:37.221566 2814 server.go:1264] "Started kubelet" Jan 13 22:51:37.221691 kubelet[2814]: I0113 22:51:37.221663 2814 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 22:51:37.221726 kubelet[2814]: I0113 22:51:37.221663 2814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 22:51:37.221835 kubelet[2814]: I0113 22:51:37.221827 2814 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 22:51:37.222359 kubelet[2814]: I0113 22:51:37.222350 2814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 22:51:37.222396 kubelet[2814]: I0113 22:51:37.222383 2814 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 22:51:37.222396 kubelet[2814]: E0113 22:51:37.222391 2814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-66cd838664\" not found" Jan 13 22:51:37.222477 kubelet[2814]: I0113 22:51:37.222401 2814 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 22:51:37.222477 kubelet[2814]: I0113 22:51:37.222439 2814 reconciler.go:26] "Reconciler: start to sync state" Jan 13 22:51:37.222541 kubelet[2814]: E0113 22:51:37.222522 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-66cd838664?timeout=10s\": dial tcp 147.28.180.253:6443: connect: connection refused" interval="200ms" Jan 13 22:51:37.222571 kubelet[2814]: I0113 22:51:37.222564 2814 server.go:455] "Adding debug handlers to kubelet server" Jan 13 22:51:37.222627 kubelet[2814]: W0113 22:51:37.222602 2814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.180.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.222656 kubelet[2814]: E0113 22:51:37.222637 2814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.222768 kubelet[2814]: I0113 22:51:37.222760 2814 factory.go:221] 
Registration of the systemd container factory successfully Jan 13 22:51:37.222812 kubelet[2814]: I0113 22:51:37.222803 2814 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 22:51:37.223230 kubelet[2814]: I0113 22:51:37.223221 2814 factory.go:221] Registration of the containerd container factory successfully Jan 13 22:51:37.223582 kubelet[2814]: E0113 22:51:37.223571 2814 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 22:51:37.227285 kubelet[2814]: E0113 22:51:37.227189 2814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.253:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.253:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-66cd838664.181a624ee0b32721 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-66cd838664,UID:ci-4081.3.0-a-66cd838664,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-66cd838664,},FirstTimestamp:2025-01-13 22:51:37.221556001 +0000 UTC m=+0.378362081,LastTimestamp:2025-01-13 22:51:37.221556001 +0000 UTC m=+0.378362081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-66cd838664,}" Jan 13 22:51:37.232307 kubelet[2814]: I0113 22:51:37.232259 2814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 22:51:37.232906 kubelet[2814]: I0113 22:51:37.232873 2814 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 22:51:37.232906 kubelet[2814]: I0113 22:51:37.232906 2814 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 22:51:37.232974 kubelet[2814]: I0113 22:51:37.232933 2814 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 22:51:37.232974 kubelet[2814]: E0113 22:51:37.232954 2814 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 22:51:37.233189 kubelet[2814]: W0113 22:51:37.233156 2814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.233219 kubelet[2814]: E0113 22:51:37.233192 2814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:37.333698 kubelet[2814]: E0113 22:51:37.333559 2814 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 22:51:37.390495 kubelet[2814]: I0113 22:51:37.390423 2814 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.391265 kubelet[2814]: E0113 22:51:37.391156 2814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.253:6443/api/v1/nodes\": dial tcp 147.28.180.253:6443: connect: connection refused" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.391801 kubelet[2814]: I0113 22:51:37.391743 2814 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 22:51:37.391801 kubelet[2814]: I0113 22:51:37.391792 2814 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 22:51:37.392090 kubelet[2814]: I0113 22:51:37.391884 2814 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:51:37.394049 kubelet[2814]: I0113 22:51:37.394005 2814 policy_none.go:49] "None policy: Start" Jan 13 22:51:37.394623 kubelet[2814]: I0113 22:51:37.394575 2814 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 22:51:37.394623 kubelet[2814]: I0113 22:51:37.394597 2814 state_mem.go:35] "Initializing new in-memory state store" Jan 13 22:51:37.397548 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 22:51:37.410732 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 22:51:37.412501 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 22:51:37.423336 kubelet[2814]: E0113 22:51:37.423286 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-66cd838664?timeout=10s\": dial tcp 147.28.180.253:6443: connect: connection refused" interval="400ms" Jan 13 22:51:37.425737 kubelet[2814]: I0113 22:51:37.425701 2814 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 22:51:37.425855 kubelet[2814]: I0113 22:51:37.425790 2814 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 22:51:37.425899 kubelet[2814]: I0113 22:51:37.425895 2814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 22:51:37.426527 kubelet[2814]: E0113 22:51:37.426492 2814 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-66cd838664\" not found" Jan 13 22:51:37.534465 kubelet[2814]: I0113 22:51:37.534354 2814 topology_manager.go:215] "Topology Admit Handler" podUID="52af6927de5c5caeb7cb4f7bbb39d9da" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.535578 kubelet[2814]: I0113 22:51:37.535562 2814 topology_manager.go:215] "Topology Admit Handler" podUID="4996540e37e586f21a725f37d94d1bcc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.536497 kubelet[2814]: I0113 22:51:37.536485 2814 topology_manager.go:215] "Topology Admit Handler" podUID="cae6e83a88a26e166eb85214d57d8de0" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.540121 systemd[1]: Created slice kubepods-burstable-pod52af6927de5c5caeb7cb4f7bbb39d9da.slice - libcontainer container kubepods-burstable-pod52af6927de5c5caeb7cb4f7bbb39d9da.slice. Jan 13 22:51:37.558277 systemd[1]: Created slice kubepods-burstable-pod4996540e37e586f21a725f37d94d1bcc.slice - libcontainer container kubepods-burstable-pod4996540e37e586f21a725f37d94d1bcc.slice. Jan 13 22:51:37.571425 systemd[1]: Created slice kubepods-burstable-podcae6e83a88a26e166eb85214d57d8de0.slice - libcontainer container kubepods-burstable-podcae6e83a88a26e166eb85214d57d8de0.slice. 
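Each admitted static pod gets its own transient slice under kubepods.slice, named from the pod's QoS class and UID, which is why the slice created above for UID 52af6927de5c5caeb7cb4f7bbb39d9da is kubepods-burstable-pod52af6927de5c5caeb7cb4f7bbb39d9da.slice. A sketch of the naming rule as I understand the kubelet's systemd cgroup driver (the dash-to-underscore escaping does not show up here only because static-pod UIDs are dashless hashes):

    def pod_slice(uid: str, qos: str = "burstable") -> str:
        # kubelet systemd cgroup driver: pod slices nest under kubepods.slice,
        # with dashes in the pod UID escaped to underscores.
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("52af6927de5c5caeb7cb4f7bbb39d9da"))
    # -> kubepods-burstable-pod52af6927de5c5caeb7cb4f7bbb39d9da.slice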
Jan 13 22:51:37.592173 kubelet[2814]: I0113 22:51:37.592135 2814 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.592407 kubelet[2814]: E0113 22:51:37.592360 2814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.253:6443/api/v1/nodes\": dial tcp 147.28.180.253:6443: connect: connection refused" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623821 kubelet[2814]: I0113 22:51:37.623768 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52af6927de5c5caeb7cb4f7bbb39d9da-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-66cd838664\" (UID: \"52af6927de5c5caeb7cb4f7bbb39d9da\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623821 kubelet[2814]: I0113 22:51:37.623791 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52af6927de5c5caeb7cb4f7bbb39d9da-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-66cd838664\" (UID: \"52af6927de5c5caeb7cb4f7bbb39d9da\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623821 kubelet[2814]: I0113 22:51:37.623807 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623821 kubelet[2814]: I0113 22:51:37.623819 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623821 kubelet[2814]: I0113 22:51:37.623829 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623965 kubelet[2814]: I0113 22:51:37.623838 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623965 kubelet[2814]: I0113 22:51:37.623849 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cae6e83a88a26e166eb85214d57d8de0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-66cd838664\" (UID: \"cae6e83a88a26e166eb85214d57d8de0\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623965 kubelet[2814]: I0113 22:51:37.623859 2814 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52af6927de5c5caeb7cb4f7bbb39d9da-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-66cd838664\" (UID: \"52af6927de5c5caeb7cb4f7bbb39d9da\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.623965 kubelet[2814]: I0113 22:51:37.623868 2814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.824564 kubelet[2814]: E0113 22:51:37.824440 2814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-66cd838664?timeout=10s\": dial tcp 147.28.180.253:6443: connect: connection refused" interval="800ms" Jan 13 22:51:37.858572 containerd[1800]: time="2025-01-13T22:51:37.858523610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-66cd838664,Uid:52af6927de5c5caeb7cb4f7bbb39d9da,Namespace:kube-system,Attempt:0,}" Jan 13 22:51:37.871221 containerd[1800]: time="2025-01-13T22:51:37.871193481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-66cd838664,Uid:4996540e37e586f21a725f37d94d1bcc,Namespace:kube-system,Attempt:0,}" Jan 13 22:51:37.873661 containerd[1800]: time="2025-01-13T22:51:37.873627410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-66cd838664,Uid:cae6e83a88a26e166eb85214d57d8de0,Namespace:kube-system,Attempt:0,}" Jan 13 22:51:37.997673 kubelet[2814]: I0113 22:51:37.997573 2814 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:37.998503 kubelet[2814]: E0113 22:51:37.998341 2814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.253:6443/api/v1/nodes\": dial tcp 147.28.180.253:6443: connect: connection refused" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:38.048673 kubelet[2814]: W0113 22:51:38.048619 2814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:38.048673 kubelet[2814]: E0113 22:51:38.048646 2814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:38.206123 kubelet[2814]: W0113 22:51:38.206052 2814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.253:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:38.206123 kubelet[2814]: E0113 22:51:38.206100 2814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.253:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 147.28.180.253:6443: connect: connection refused Jan 13 22:51:38.361864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275979384.mount: Deactivated successfully. Jan 13 22:51:38.363792 containerd[1800]: time="2025-01-13T22:51:38.363773615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:51:38.364033 containerd[1800]: time="2025-01-13T22:51:38.364012889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:51:38.364300 containerd[1800]: time="2025-01-13T22:51:38.364289035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:51:38.364688 containerd[1800]: time="2025-01-13T22:51:38.364675397Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:51:38.365041 containerd[1800]: time="2025-01-13T22:51:38.365030404Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:51:38.365101 containerd[1800]: time="2025-01-13T22:51:38.365084877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 22:51:38.365376 containerd[1800]: time="2025-01-13T22:51:38.365362694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 22:51:38.366760 containerd[1800]: time="2025-01-13T22:51:38.366733258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 22:51:38.368596 containerd[1800]: time="2025-01-13T22:51:38.368554222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.303985ms" Jan 13 22:51:38.369256 containerd[1800]: time="2025-01-13T22:51:38.369223887Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.62151ms" Jan 13 22:51:38.370576 containerd[1800]: time="2025-01-13T22:51:38.370560200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.904299ms" Jan 13 22:51:38.456642 containerd[1800]: time="2025-01-13T22:51:38.456524226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:51:38.456787 containerd[1800]: time="2025-01-13T22:51:38.456755445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:51:38.456787 containerd[1800]: time="2025-01-13T22:51:38.456781438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:51:38.456849 containerd[1800]: time="2025-01-13T22:51:38.456788484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:38.456849 containerd[1800]: time="2025-01-13T22:51:38.456577874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:51:38.456849 containerd[1800]: time="2025-01-13T22:51:38.456819107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:38.456849 containerd[1800]: time="2025-01-13T22:51:38.456843189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:38.456929 containerd[1800]: time="2025-01-13T22:51:38.456849821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:51:38.456929 containerd[1800]: time="2025-01-13T22:51:38.456867371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:38.456929 containerd[1800]: time="2025-01-13T22:51:38.456876333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:51:38.456929 containerd[1800]: time="2025-01-13T22:51:38.456884911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:38.457013 containerd[1800]: time="2025-01-13T22:51:38.456929609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:38.498610 systemd[1]: Started cri-containerd-631369a3772cc0d5b8cdd608a36d15c9e1b5f748fc4fef04a316b5acd4d912a0.scope - libcontainer container 631369a3772cc0d5b8cdd608a36d15c9e1b5f748fc4fef04a316b5acd4d912a0. Jan 13 22:51:38.499943 systemd[1]: Started cri-containerd-6c09b26fb9b05ffa3d86aa21760a82bd953995cca43520afc9e4dd6aa338e2c5.scope - libcontainer container 6c09b26fb9b05ffa3d86aa21760a82bd953995cca43520afc9e4dd6aa338e2c5. Jan 13 22:51:38.500835 systemd[1]: Started cri-containerd-7d48572f35dec47b19dc415b3f6ecb8d22d3b39d2e24f292cb036a4a60489c2f.scope - libcontainer container 7d48572f35dec47b19dc415b3f6ecb8d22d3b39d2e24f292cb036a4a60489c2f. 
Jan 13 22:51:38.535751 containerd[1800]: time="2025-01-13T22:51:38.535718633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-66cd838664,Uid:cae6e83a88a26e166eb85214d57d8de0,Namespace:kube-system,Attempt:0,} returns sandbox id \"631369a3772cc0d5b8cdd608a36d15c9e1b5f748fc4fef04a316b5acd4d912a0\"" Jan 13 22:51:38.537298 containerd[1800]: time="2025-01-13T22:51:38.537276408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-66cd838664,Uid:4996540e37e586f21a725f37d94d1bcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c09b26fb9b05ffa3d86aa21760a82bd953995cca43520afc9e4dd6aa338e2c5\"" Jan 13 22:51:38.538282 containerd[1800]: time="2025-01-13T22:51:38.538261846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-66cd838664,Uid:52af6927de5c5caeb7cb4f7bbb39d9da,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d48572f35dec47b19dc415b3f6ecb8d22d3b39d2e24f292cb036a4a60489c2f\"" Jan 13 22:51:38.538459 containerd[1800]: time="2025-01-13T22:51:38.538442717Z" level=info msg="CreateContainer within sandbox \"631369a3772cc0d5b8cdd608a36d15c9e1b5f748fc4fef04a316b5acd4d912a0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 22:51:38.538848 containerd[1800]: time="2025-01-13T22:51:38.538832329Z" level=info msg="CreateContainer within sandbox \"6c09b26fb9b05ffa3d86aa21760a82bd953995cca43520afc9e4dd6aa338e2c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 22:51:38.539729 containerd[1800]: time="2025-01-13T22:51:38.539717788Z" level=info msg="CreateContainer within sandbox \"7d48572f35dec47b19dc415b3f6ecb8d22d3b39d2e24f292cb036a4a60489c2f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 22:51:38.545842 containerd[1800]: time="2025-01-13T22:51:38.545826220Z" level=info msg="CreateContainer within sandbox \"631369a3772cc0d5b8cdd608a36d15c9e1b5f748fc4fef04a316b5acd4d912a0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c15ce2d8e91b3e2c3d1dc9e9aadc84cff5b589d7d18c3a9b451a9d73bee4fbb\"" Jan 13 22:51:38.546102 containerd[1800]: time="2025-01-13T22:51:38.546073074Z" level=info msg="StartContainer for \"4c15ce2d8e91b3e2c3d1dc9e9aadc84cff5b589d7d18c3a9b451a9d73bee4fbb\"" Jan 13 22:51:38.546743 containerd[1800]: time="2025-01-13T22:51:38.546680145Z" level=info msg="CreateContainer within sandbox \"7d48572f35dec47b19dc415b3f6ecb8d22d3b39d2e24f292cb036a4a60489c2f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0e017477f1af2f57434232f409944600386ce0688914465f118f29893b38cb8\"" Jan 13 22:51:38.546861 containerd[1800]: time="2025-01-13T22:51:38.546851621Z" level=info msg="StartContainer for \"a0e017477f1af2f57434232f409944600386ce0688914465f118f29893b38cb8\"" Jan 13 22:51:38.547042 containerd[1800]: time="2025-01-13T22:51:38.547028662Z" level=info msg="CreateContainer within sandbox \"6c09b26fb9b05ffa3d86aa21760a82bd953995cca43520afc9e4dd6aa338e2c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9397300fb1de3f72d957cde55bfc0363336a5549cb84f20b5a8ceccdb3fa8cae\"" Jan 13 22:51:38.547200 containerd[1800]: time="2025-01-13T22:51:38.547188701Z" level=info msg="StartContainer for \"9397300fb1de3f72d957cde55bfc0363336a5549cb84f20b5a8ceccdb3fa8cae\"" Jan 13 22:51:38.566355 systemd[1]: Started cri-containerd-4c15ce2d8e91b3e2c3d1dc9e9aadc84cff5b589d7d18c3a9b451a9d73bee4fbb.scope - libcontainer container 
4c15ce2d8e91b3e2c3d1dc9e9aadc84cff5b589d7d18c3a9b451a9d73bee4fbb. Jan 13 22:51:38.566945 systemd[1]: Started cri-containerd-9397300fb1de3f72d957cde55bfc0363336a5549cb84f20b5a8ceccdb3fa8cae.scope - libcontainer container 9397300fb1de3f72d957cde55bfc0363336a5549cb84f20b5a8ceccdb3fa8cae. Jan 13 22:51:38.567529 systemd[1]: Started cri-containerd-a0e017477f1af2f57434232f409944600386ce0688914465f118f29893b38cb8.scope - libcontainer container a0e017477f1af2f57434232f409944600386ce0688914465f118f29893b38cb8. Jan 13 22:51:38.591662 containerd[1800]: time="2025-01-13T22:51:38.591638634Z" level=info msg="StartContainer for \"9397300fb1de3f72d957cde55bfc0363336a5549cb84f20b5a8ceccdb3fa8cae\" returns successfully" Jan 13 22:51:38.591739 containerd[1800]: time="2025-01-13T22:51:38.591638708Z" level=info msg="StartContainer for \"4c15ce2d8e91b3e2c3d1dc9e9aadc84cff5b589d7d18c3a9b451a9d73bee4fbb\" returns successfully" Jan 13 22:51:38.593060 containerd[1800]: time="2025-01-13T22:51:38.593040780Z" level=info msg="StartContainer for \"a0e017477f1af2f57434232f409944600386ce0688914465f118f29893b38cb8\" returns successfully" Jan 13 22:51:38.801007 kubelet[2814]: I0113 22:51:38.800954 2814 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:39.056823 kubelet[2814]: E0113 22:51:39.056759 2814 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-66cd838664\" not found" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:39.154313 kubelet[2814]: I0113 22:51:39.154266 2814 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:39.217140 kubelet[2814]: I0113 22:51:39.217119 2814 apiserver.go:52] "Watching apiserver" Jan 13 22:51:39.222496 kubelet[2814]: I0113 22:51:39.222486 2814 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 22:51:39.240427 kubelet[2814]: E0113 22:51:39.240414 2814 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-66cd838664\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:39.240427 kubelet[2814]: E0113 22:51:39.240420 2814 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:39.240427 kubelet[2814]: E0113 22:51:39.240414 2814 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-66cd838664\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.0-a-66cd838664" Jan 13 22:51:40.241354 kubelet[2814]: W0113 22:51:40.241333 2814 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:51:41.310334 systemd[1]: Reloading requested from client PID 3128 ('systemctl') (unit session-11.scope)... Jan 13 22:51:41.310366 systemd[1]: Reloading... Jan 13 22:51:41.369230 zram_generator::config[3167]: No configuration found. Jan 13 22:51:41.436399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 13 22:51:41.509361 systemd[1]: Reloading finished in 198 ms. Jan 13 22:51:41.537314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:41.543789 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 22:51:41.543896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:41.553607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 22:51:41.780887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 22:51:41.787397 (kubelet)[3231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 22:51:41.820103 kubelet[3231]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:51:41.820103 kubelet[3231]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 22:51:41.820103 kubelet[3231]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 22:51:41.820103 kubelet[3231]: I0113 22:51:41.820096 3231 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 22:51:41.823238 kubelet[3231]: I0113 22:51:41.823224 3231 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 22:51:41.823238 kubelet[3231]: I0113 22:51:41.823238 3231 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 22:51:41.823350 kubelet[3231]: I0113 22:51:41.823343 3231 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 22:51:41.824092 kubelet[3231]: I0113 22:51:41.824084 3231 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 22:51:41.824710 kubelet[3231]: I0113 22:51:41.824695 3231 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 22:51:41.832855 kubelet[3231]: I0113 22:51:41.832814 3231 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 22:51:41.832956 kubelet[3231]: I0113 22:51:41.832916 3231 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 22:51:41.833019 kubelet[3231]: I0113 22:51:41.832930 3231 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-66cd838664","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 22:51:41.833076 kubelet[3231]: I0113 22:51:41.833028 3231 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 22:51:41.833076 kubelet[3231]: I0113 22:51:41.833034 3231 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 22:51:41.833076 kubelet[3231]: I0113 22:51:41.833059 3231 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:51:41.833135 kubelet[3231]: I0113 22:51:41.833105 3231 kubelet.go:400] "Attempting to sync node with API server" Jan 13 22:51:41.833135 kubelet[3231]: I0113 22:51:41.833111 3231 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 22:51:41.833135 kubelet[3231]: I0113 22:51:41.833122 3231 kubelet.go:312] "Adding apiserver pod source" Jan 13 22:51:41.833135 kubelet[3231]: I0113 22:51:41.833133 3231 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 22:51:41.833826 kubelet[3231]: I0113 22:51:41.833808 3231 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 22:51:41.833970 kubelet[3231]: I0113 22:51:41.833961 3231 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 22:51:41.834331 kubelet[3231]: I0113 22:51:41.834316 3231 server.go:1264] "Started kubelet" Jan 13 22:51:41.834560 kubelet[3231]: I0113 22:51:41.834428 3231 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 22:51:41.834649 kubelet[3231]: I0113 22:51:41.834589 3231 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 22:51:41.834989 kubelet[3231]: I0113 
22:51:41.834979 3231 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 22:51:41.835445 kubelet[3231]: I0113 22:51:41.835435 3231 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 22:51:41.835491 kubelet[3231]: I0113 22:51:41.835466 3231 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 22:51:41.835491 kubelet[3231]: E0113 22:51:41.835473 3231 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-66cd838664\" not found" Jan 13 22:51:41.835545 kubelet[3231]: I0113 22:51:41.835492 3231 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 22:51:41.835604 kubelet[3231]: I0113 22:51:41.835587 3231 reconciler.go:26] "Reconciler: start to sync state" Jan 13 22:51:41.835734 kubelet[3231]: E0113 22:51:41.835720 3231 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 22:51:41.835962 kubelet[3231]: I0113 22:51:41.835749 3231 server.go:455] "Adding debug handlers to kubelet server" Jan 13 22:51:41.835962 kubelet[3231]: I0113 22:51:41.835897 3231 factory.go:221] Registration of the systemd container factory successfully Jan 13 22:51:41.836021 kubelet[3231]: I0113 22:51:41.835957 3231 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 22:51:41.836476 kubelet[3231]: I0113 22:51:41.836467 3231 factory.go:221] Registration of the containerd container factory successfully Jan 13 22:51:41.841030 kubelet[3231]: I0113 22:51:41.841002 3231 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 22:51:41.841637 kubelet[3231]: I0113 22:51:41.841629 3231 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 22:51:41.841670 kubelet[3231]: I0113 22:51:41.841648 3231 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 22:51:41.841670 kubelet[3231]: I0113 22:51:41.841661 3231 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 22:51:41.841712 kubelet[3231]: E0113 22:51:41.841693 3231 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 22:51:41.854552 kubelet[3231]: I0113 22:51:41.854507 3231 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 22:51:41.854552 kubelet[3231]: I0113 22:51:41.854518 3231 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 22:51:41.854552 kubelet[3231]: I0113 22:51:41.854554 3231 state_mem.go:36] "Initialized new in-memory state store" Jan 13 22:51:41.854744 kubelet[3231]: I0113 22:51:41.854726 3231 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 22:51:41.854744 kubelet[3231]: I0113 22:51:41.854732 3231 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 22:51:41.854744 kubelet[3231]: I0113 22:51:41.854745 3231 policy_none.go:49] "None policy: Start" Jan 13 22:51:41.855140 kubelet[3231]: I0113 22:51:41.855132 3231 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 22:51:41.855167 kubelet[3231]: I0113 22:51:41.855144 3231 state_mem.go:35] "Initializing new in-memory state store" Jan 13 22:51:41.855280 kubelet[3231]: I0113 22:51:41.855249 3231 state_mem.go:75] "Updated machine memory state" Jan 13 22:51:41.857175 kubelet[3231]: I0113 22:51:41.857133 3231 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 22:51:41.857336 kubelet[3231]: I0113 22:51:41.857318 3231 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 22:51:41.857379 kubelet[3231]: I0113 22:51:41.857372 3231 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 22:51:41.941466 kubelet[3231]: I0113 22:51:41.941375 3231 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:41.942010 kubelet[3231]: I0113 22:51:41.941894 3231 topology_manager.go:215] "Topology Admit Handler" podUID="52af6927de5c5caeb7cb4f7bbb39d9da" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:41.942216 kubelet[3231]: I0113 22:51:41.942060 3231 topology_manager.go:215] "Topology Admit Handler" podUID="4996540e37e586f21a725f37d94d1bcc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:41.942333 kubelet[3231]: I0113 22:51:41.942220 3231 topology_manager.go:215] "Topology Admit Handler" podUID="cae6e83a88a26e166eb85214d57d8de0" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-66cd838664" Jan 13 22:51:41.949808 kubelet[3231]: W0113 22:51:41.949743 3231 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:51:41.950021 kubelet[3231]: W0113 22:51:41.949949 3231 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:51:41.950456 kubelet[3231]: I0113 22:51:41.950408 3231 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:41.950608 
kubelet[3231]: I0113 22:51:41.950571 3231 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-66cd838664" Jan 13 22:51:41.950778 kubelet[3231]: W0113 22:51:41.950732 3231 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:51:41.950953 kubelet[3231]: E0113 22:51:41.950917 3231 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-66cd838664\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037302 kubelet[3231]: I0113 22:51:42.037044 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037302 kubelet[3231]: I0113 22:51:42.037141 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037302 kubelet[3231]: I0113 22:51:42.037275 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037730 kubelet[3231]: I0113 22:51:42.037346 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52af6927de5c5caeb7cb4f7bbb39d9da-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-66cd838664\" (UID: \"52af6927de5c5caeb7cb4f7bbb39d9da\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037730 kubelet[3231]: I0113 22:51:42.037398 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52af6927de5c5caeb7cb4f7bbb39d9da-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-66cd838664\" (UID: \"52af6927de5c5caeb7cb4f7bbb39d9da\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037730 kubelet[3231]: I0113 22:51:42.037446 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037730 kubelet[3231]: I0113 22:51:42.037516 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cae6e83a88a26e166eb85214d57d8de0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-66cd838664\" (UID: \"cae6e83a88a26e166eb85214d57d8de0\") " 
pod="kube-system/kube-scheduler-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.037730 kubelet[3231]: I0113 22:51:42.037568 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52af6927de5c5caeb7cb4f7bbb39d9da-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-66cd838664\" (UID: \"52af6927de5c5caeb7cb4f7bbb39d9da\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.038210 kubelet[3231]: I0113 22:51:42.037621 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4996540e37e586f21a725f37d94d1bcc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-66cd838664\" (UID: \"4996540e37e586f21a725f37d94d1bcc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.834389 kubelet[3231]: I0113 22:51:42.834258 3231 apiserver.go:52] "Watching apiserver" Jan 13 22:51:42.853800 kubelet[3231]: W0113 22:51:42.853760 3231 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:51:42.853800 kubelet[3231]: W0113 22:51:42.853781 3231 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 22:51:42.853800 kubelet[3231]: E0113 22:51:42.853792 3231 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-66cd838664\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.853891 kubelet[3231]: E0113 22:51:42.853804 3231 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-66cd838664\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-66cd838664" Jan 13 22:51:42.865274 kubelet[3231]: I0113 22:51:42.865209 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-66cd838664" podStartSLOduration=2.86519866 podStartE2EDuration="2.86519866s" podCreationTimestamp="2025-01-13 22:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:51:42.860398143 +0000 UTC m=+1.070867300" watchObservedRunningTime="2025-01-13 22:51:42.86519866 +0000 UTC m=+1.075667811" Jan 13 22:51:42.869878 kubelet[3231]: I0113 22:51:42.869731 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-66cd838664" podStartSLOduration=1.86968654 podStartE2EDuration="1.86968654s" podCreationTimestamp="2025-01-13 22:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:51:42.865177459 +0000 UTC m=+1.075646616" watchObservedRunningTime="2025-01-13 22:51:42.86968654 +0000 UTC m=+1.080155768" Jan 13 22:51:42.870149 kubelet[3231]: I0113 22:51:42.870003 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-66cd838664" podStartSLOduration=1.8699791270000001 podStartE2EDuration="1.869979127s" podCreationTimestamp="2025-01-13 22:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-01-13 22:51:42.869837613 +0000 UTC m=+1.080306862" watchObservedRunningTime="2025-01-13 22:51:42.869979127 +0000 UTC m=+1.080448364" Jan 13 22:51:42.936735 kubelet[3231]: I0113 22:51:42.936618 3231 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 22:51:46.229196 sudo[2084]: pam_unix(sudo:session): session closed for user root Jan 13 22:51:46.230049 sshd[2081]: pam_unix(sshd:session): session closed for user core Jan 13 22:51:46.231646 systemd[1]: sshd@8-147.28.180.253:22-139.178.89.65:34706.service: Deactivated successfully. Jan 13 22:51:46.232448 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 22:51:46.232530 systemd[1]: session-11.scope: Consumed 3.656s CPU time, 201.8M memory peak, 0B memory swap peak. Jan 13 22:51:46.233066 systemd-logind[1790]: Session 11 logged out. Waiting for processes to exit. Jan 13 22:51:46.233652 systemd-logind[1790]: Removed session 11. Jan 13 22:51:54.189297 update_engine[1795]: I20250113 22:51:54.189224 1795 update_attempter.cc:509] Updating boot flags... Jan 13 22:51:54.220245 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3404) Jan 13 22:51:54.246230 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3406) Jan 13 22:51:57.461500 kubelet[3231]: I0113 22:51:57.461387 3231 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 22:51:57.462543 containerd[1800]: time="2025-01-13T22:51:57.462109784Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 22:51:57.463151 kubelet[3231]: I0113 22:51:57.462609 3231 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 22:51:58.409080 kubelet[3231]: I0113 22:51:58.408993 3231 topology_manager.go:215] "Topology Admit Handler" podUID="13806947-05b8-4901-bc64-ac43c3e312c6" podNamespace="kube-system" podName="kube-proxy-phzj7" Jan 13 22:51:58.425104 systemd[1]: Created slice kubepods-besteffort-pod13806947_05b8_4901_bc64_ac43c3e312c6.slice - libcontainer container kubepods-besteffort-pod13806947_05b8_4901_bc64_ac43c3e312c6.slice. 
Jan 13 22:51:58.446973 kubelet[3231]: I0113 22:51:58.446894 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13806947-05b8-4901-bc64-ac43c3e312c6-kube-proxy\") pod \"kube-proxy-phzj7\" (UID: \"13806947-05b8-4901-bc64-ac43c3e312c6\") " pod="kube-system/kube-proxy-phzj7" Jan 13 22:51:58.446973 kubelet[3231]: I0113 22:51:58.446944 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13806947-05b8-4901-bc64-ac43c3e312c6-xtables-lock\") pod \"kube-proxy-phzj7\" (UID: \"13806947-05b8-4901-bc64-ac43c3e312c6\") " pod="kube-system/kube-proxy-phzj7" Jan 13 22:51:58.447155 kubelet[3231]: I0113 22:51:58.446979 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13806947-05b8-4901-bc64-ac43c3e312c6-lib-modules\") pod \"kube-proxy-phzj7\" (UID: \"13806947-05b8-4901-bc64-ac43c3e312c6\") " pod="kube-system/kube-proxy-phzj7" Jan 13 22:51:58.447155 kubelet[3231]: I0113 22:51:58.447012 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9mk\" (UniqueName: \"kubernetes.io/projected/13806947-05b8-4901-bc64-ac43c3e312c6-kube-api-access-sq9mk\") pod \"kube-proxy-phzj7\" (UID: \"13806947-05b8-4901-bc64-ac43c3e312c6\") " pod="kube-system/kube-proxy-phzj7" Jan 13 22:51:58.553059 kubelet[3231]: I0113 22:51:58.552977 3231 topology_manager.go:215] "Topology Admit Handler" podUID="38e1cb3e-fea6-4203-989a-851b01f7cbad" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-q42cs" Jan 13 22:51:58.580722 systemd[1]: Created slice kubepods-besteffort-pod38e1cb3e_fea6_4203_989a_851b01f7cbad.slice - libcontainer container kubepods-besteffort-pod38e1cb3e_fea6_4203_989a_851b01f7cbad.slice. Jan 13 22:51:58.647864 kubelet[3231]: I0113 22:51:58.647750 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38e1cb3e-fea6-4203-989a-851b01f7cbad-var-lib-calico\") pod \"tigera-operator-7bc55997bb-q42cs\" (UID: \"38e1cb3e-fea6-4203-989a-851b01f7cbad\") " pod="tigera-operator/tigera-operator-7bc55997bb-q42cs" Jan 13 22:51:58.647864 kubelet[3231]: I0113 22:51:58.647842 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkv5l\" (UniqueName: \"kubernetes.io/projected/38e1cb3e-fea6-4203-989a-851b01f7cbad-kube-api-access-nkv5l\") pod \"tigera-operator-7bc55997bb-q42cs\" (UID: \"38e1cb3e-fea6-4203-989a-851b01f7cbad\") " pod="tigera-operator/tigera-operator-7bc55997bb-q42cs" Jan 13 22:51:58.753423 containerd[1800]: time="2025-01-13T22:51:58.753152006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phzj7,Uid:13806947-05b8-4901-bc64-ac43c3e312c6,Namespace:kube-system,Attempt:0,}" Jan 13 22:51:58.781686 containerd[1800]: time="2025-01-13T22:51:58.781619509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:51:58.781686 containerd[1800]: time="2025-01-13T22:51:58.781647401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:51:58.781686 containerd[1800]: time="2025-01-13T22:51:58.781654150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:58.781803 containerd[1800]: time="2025-01-13T22:51:58.781695225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:58.809451 systemd[1]: Started cri-containerd-bd25ff9e23d4c1a1b0bcfa9fe2a5268bededcace9d5983e5dd249e316bb40acc.scope - libcontainer container bd25ff9e23d4c1a1b0bcfa9fe2a5268bededcace9d5983e5dd249e316bb40acc. Jan 13 22:51:58.821494 containerd[1800]: time="2025-01-13T22:51:58.821466290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-phzj7,Uid:13806947-05b8-4901-bc64-ac43c3e312c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd25ff9e23d4c1a1b0bcfa9fe2a5268bededcace9d5983e5dd249e316bb40acc\"" Jan 13 22:51:58.823269 containerd[1800]: time="2025-01-13T22:51:58.823247676Z" level=info msg="CreateContainer within sandbox \"bd25ff9e23d4c1a1b0bcfa9fe2a5268bededcace9d5983e5dd249e316bb40acc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 22:51:58.829380 containerd[1800]: time="2025-01-13T22:51:58.829363744Z" level=info msg="CreateContainer within sandbox \"bd25ff9e23d4c1a1b0bcfa9fe2a5268bededcace9d5983e5dd249e316bb40acc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"017b90cd6a59e386ca5a6a1ebccffd836951327dd0483cc1b617d2b88d70f8f9\"" Jan 13 22:51:58.829636 containerd[1800]: time="2025-01-13T22:51:58.829590121Z" level=info msg="StartContainer for \"017b90cd6a59e386ca5a6a1ebccffd836951327dd0483cc1b617d2b88d70f8f9\"" Jan 13 22:51:58.849449 systemd[1]: Started cri-containerd-017b90cd6a59e386ca5a6a1ebccffd836951327dd0483cc1b617d2b88d70f8f9.scope - libcontainer container 017b90cd6a59e386ca5a6a1ebccffd836951327dd0483cc1b617d2b88d70f8f9. Jan 13 22:51:58.865584 containerd[1800]: time="2025-01-13T22:51:58.865557251Z" level=info msg="StartContainer for \"017b90cd6a59e386ca5a6a1ebccffd836951327dd0483cc1b617d2b88d70f8f9\" returns successfully" Jan 13 22:51:58.884914 containerd[1800]: time="2025-01-13T22:51:58.884807822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-q42cs,Uid:38e1cb3e-fea6-4203-989a-851b01f7cbad,Namespace:tigera-operator,Attempt:0,}" Jan 13 22:51:58.889163 kubelet[3231]: I0113 22:51:58.889133 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-phzj7" podStartSLOduration=0.889123797 podStartE2EDuration="889.123797ms" podCreationTimestamp="2025-01-13 22:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:51:58.889077365 +0000 UTC m=+17.099546520" watchObservedRunningTime="2025-01-13 22:51:58.889123797 +0000 UTC m=+17.099592950" Jan 13 22:51:58.896135 containerd[1800]: time="2025-01-13T22:51:58.896063762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:51:58.896390 containerd[1800]: time="2025-01-13T22:51:58.896332093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:51:58.896390 containerd[1800]: time="2025-01-13T22:51:58.896348174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:58.896440 containerd[1800]: time="2025-01-13T22:51:58.896419148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:51:58.911392 systemd[1]: Started cri-containerd-ba5a0e34d2898a208d7875b7d4ce21fabc0fda9327bc87b67063d11c4e1aaf77.scope - libcontainer container ba5a0e34d2898a208d7875b7d4ce21fabc0fda9327bc87b67063d11c4e1aaf77. Jan 13 22:51:58.933115 containerd[1800]: time="2025-01-13T22:51:58.933089842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-q42cs,Uid:38e1cb3e-fea6-4203-989a-851b01f7cbad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ba5a0e34d2898a208d7875b7d4ce21fabc0fda9327bc87b67063d11c4e1aaf77\"" Jan 13 22:51:58.933834 containerd[1800]: time="2025-01-13T22:51:58.933822107Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 22:52:00.416519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114651510.mount: Deactivated successfully. Jan 13 22:52:00.643677 containerd[1800]: time="2025-01-13T22:52:00.643626222Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:00.643870 containerd[1800]: time="2025-01-13T22:52:00.643818641Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764345" Jan 13 22:52:00.644130 containerd[1800]: time="2025-01-13T22:52:00.644093231Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:00.645646 containerd[1800]: time="2025-01-13T22:52:00.645598040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:00.645948 containerd[1800]: time="2025-01-13T22:52:00.645905508Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.712066003s" Jan 13 22:52:00.645948 containerd[1800]: time="2025-01-13T22:52:00.645922067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 22:52:00.647052 containerd[1800]: time="2025-01-13T22:52:00.647039479Z" level=info msg="CreateContainer within sandbox \"ba5a0e34d2898a208d7875b7d4ce21fabc0fda9327bc87b67063d11c4e1aaf77\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 22:52:00.650757 containerd[1800]: time="2025-01-13T22:52:00.650714776Z" level=info msg="CreateContainer within sandbox \"ba5a0e34d2898a208d7875b7d4ce21fabc0fda9327bc87b67063d11c4e1aaf77\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bedd83855f4aa9ce386c172484af94d8f80294bc2991d825fa1986322c0e2b16\"" Jan 13 22:52:00.650958 containerd[1800]: 
time="2025-01-13T22:52:00.650944115Z" level=info msg="StartContainer for \"bedd83855f4aa9ce386c172484af94d8f80294bc2991d825fa1986322c0e2b16\"" Jan 13 22:52:00.672324 systemd[1]: Started cri-containerd-bedd83855f4aa9ce386c172484af94d8f80294bc2991d825fa1986322c0e2b16.scope - libcontainer container bedd83855f4aa9ce386c172484af94d8f80294bc2991d825fa1986322c0e2b16. Jan 13 22:52:00.683408 containerd[1800]: time="2025-01-13T22:52:00.683387263Z" level=info msg="StartContainer for \"bedd83855f4aa9ce386c172484af94d8f80294bc2991d825fa1986322c0e2b16\" returns successfully" Jan 13 22:52:00.905995 kubelet[3231]: I0113 22:52:00.905844 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-q42cs" podStartSLOduration=1.193125781 podStartE2EDuration="2.905809621s" podCreationTimestamp="2025-01-13 22:51:58 +0000 UTC" firstStartedPulling="2025-01-13 22:51:58.933623454 +0000 UTC m=+17.144092608" lastFinishedPulling="2025-01-13 22:52:00.646307296 +0000 UTC m=+18.856776448" observedRunningTime="2025-01-13 22:52:00.905508493 +0000 UTC m=+19.115977719" watchObservedRunningTime="2025-01-13 22:52:00.905809621 +0000 UTC m=+19.116278829" Jan 13 22:52:03.447956 kubelet[3231]: I0113 22:52:03.447900 3231 topology_manager.go:215] "Topology Admit Handler" podUID="34e046e1-f3ce-4606-88c5-c4f8e43d9530" podNamespace="calico-system" podName="calico-typha-75b8976897-c2cv9" Jan 13 22:52:03.455892 systemd[1]: Created slice kubepods-besteffort-pod34e046e1_f3ce_4606_88c5_c4f8e43d9530.slice - libcontainer container kubepods-besteffort-pod34e046e1_f3ce_4606_88c5_c4f8e43d9530.slice. Jan 13 22:52:03.467979 kubelet[3231]: I0113 22:52:03.467954 3231 topology_manager.go:215] "Topology Admit Handler" podUID="855e574d-312c-4866-be4d-01ec2f7ce420" podNamespace="calico-system" podName="calico-node-fglsk" Jan 13 22:52:03.471457 systemd[1]: Created slice kubepods-besteffort-pod855e574d_312c_4866_be4d_01ec2f7ce420.slice - libcontainer container kubepods-besteffort-pod855e574d_312c_4866_be4d_01ec2f7ce420.slice. 
Jan 13 22:52:03.481456 kubelet[3231]: I0113 22:52:03.481414 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-lib-modules\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481456 kubelet[3231]: I0113 22:52:03.481432 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-var-lib-calico\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481456 kubelet[3231]: I0113 22:52:03.481443 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-flexvol-driver-host\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481456 kubelet[3231]: I0113 22:52:03.481454 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/34e046e1-f3ce-4606-88c5-c4f8e43d9530-typha-certs\") pod \"calico-typha-75b8976897-c2cv9\" (UID: \"34e046e1-f3ce-4606-88c5-c4f8e43d9530\") " pod="calico-system/calico-typha-75b8976897-c2cv9" Jan 13 22:52:03.481571 kubelet[3231]: I0113 22:52:03.481463 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/855e574d-312c-4866-be4d-01ec2f7ce420-node-certs\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481571 kubelet[3231]: I0113 22:52:03.481472 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e046e1-f3ce-4606-88c5-c4f8e43d9530-tigera-ca-bundle\") pod \"calico-typha-75b8976897-c2cv9\" (UID: \"34e046e1-f3ce-4606-88c5-c4f8e43d9530\") " pod="calico-system/calico-typha-75b8976897-c2cv9" Jan 13 22:52:03.481571 kubelet[3231]: I0113 22:52:03.481481 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-cni-net-dir\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481571 kubelet[3231]: I0113 22:52:03.481491 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/855e574d-312c-4866-be4d-01ec2f7ce420-tigera-ca-bundle\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481571 kubelet[3231]: I0113 22:52:03.481500 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrs4\" (UniqueName: \"kubernetes.io/projected/855e574d-312c-4866-be4d-01ec2f7ce420-kube-api-access-nwrs4\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481657 
kubelet[3231]: I0113 22:52:03.481525 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45q8w\" (UniqueName: \"kubernetes.io/projected/34e046e1-f3ce-4606-88c5-c4f8e43d9530-kube-api-access-45q8w\") pod \"calico-typha-75b8976897-c2cv9\" (UID: \"34e046e1-f3ce-4606-88c5-c4f8e43d9530\") " pod="calico-system/calico-typha-75b8976897-c2cv9" Jan 13 22:52:03.481657 kubelet[3231]: I0113 22:52:03.481546 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-cni-log-dir\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481657 kubelet[3231]: I0113 22:52:03.481565 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-var-run-calico\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481657 kubelet[3231]: I0113 22:52:03.481574 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-cni-bin-dir\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481657 kubelet[3231]: I0113 22:52:03.481586 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-xtables-lock\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.481742 kubelet[3231]: I0113 22:52:03.481604 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/855e574d-312c-4866-be4d-01ec2f7ce420-policysync\") pod \"calico-node-fglsk\" (UID: \"855e574d-312c-4866-be4d-01ec2f7ce420\") " pod="calico-system/calico-node-fglsk" Jan 13 22:52:03.587199 kubelet[3231]: E0113 22:52:03.587155 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.587199 kubelet[3231]: W0113 22:52:03.587194 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.587415 kubelet[3231]: E0113 22:52:03.587220 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:52:03.587487 kubelet[3231]: E0113 22:52:03.587452 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.587487 kubelet[3231]: W0113 22:52:03.587468 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.587487 kubelet[3231]: E0113 22:52:03.587483 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:52:03.589445 kubelet[3231]: E0113 22:52:03.589421 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.589445 kubelet[3231]: W0113 22:52:03.589441 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.589599 kubelet[3231]: E0113 22:52:03.589466 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:52:03.589769 kubelet[3231]: E0113 22:52:03.589742 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.589769 kubelet[3231]: W0113 22:52:03.589759 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.589769 kubelet[3231]: E0113 22:52:03.589771 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:52:03.592632 kubelet[3231]: I0113 22:52:03.592600 3231 topology_manager.go:215] "Topology Admit Handler" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" podNamespace="calico-system" podName="csi-node-driver-llzsr" Jan 13 22:52:03.592963 kubelet[3231]: E0113 22:52:03.592934 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llzsr" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" Jan 13 22:52:03.595362 kubelet[3231]: E0113 22:52:03.595331 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.595362 kubelet[3231]: W0113 22:52:03.595356 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.595589 kubelet[3231]: E0113 22:52:03.595378 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:52:03.595589 kubelet[3231]: E0113 22:52:03.595587 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.595694 kubelet[3231]: W0113 22:52:03.595602 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.595694 kubelet[3231]: E0113 22:52:03.595619 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:52:03.678065 kubelet[3231]: E0113 22:52:03.678018 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.678065 kubelet[3231]: W0113 22:52:03.678033 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.678065 kubelet[3231]: E0113 22:52:03.678048 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:52:03.678262 kubelet[3231]: E0113 22:52:03.678209 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.678262 kubelet[3231]: W0113 22:52:03.678218 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.678262 kubelet[3231]: E0113 22:52:03.678226 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:52:03.678434 kubelet[3231]: E0113 22:52:03.678392 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.678434 kubelet[3231]: W0113 22:52:03.678400 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.678434 kubelet[3231]: E0113 22:52:03.678408 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 22:52:03.678620 kubelet[3231]: E0113 22:52:03.678577 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.678620 kubelet[3231]: W0113 22:52:03.678584 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.678620 kubelet[3231]: E0113 22:52:03.678592 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 22:50:03.678797 kubelet[3231]: E0113 22:52:03.678785 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.678797 kubelet[3231]: W0113 22:52:03.678794 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.678889 kubelet[3231]: E0113 22:52:03.678803 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
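The triplet above repeats dozens of times through 22:52:03.793: kubelet's FlexVolume prober executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist, stdout comes back empty, and unmarshalling "" as JSON fails. A minimal sketch of the handshake a driver at that path would have to satisfy, assuming only the standard FlexVolume driver-call convention (file name and struct are illustrative):

    // flexvol_init.go - sketch of the init response kubelet's driver-call
    // machinery expects on stdout; the "unexpected end of JSON input" above
    // is what it reports when it gets an empty string instead.
    package main

    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // "Success" plus the attach capability flag is the minimal
            // valid init reply under the FlexVolume convention.
            json.NewEncoder(os.Stdout).Encode(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            return
        }
        // Other calls can answer "Not supported" so kubelet falls back.
        json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
    }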
Jan 13 22:52:03.684099 kubelet[3231]: I0113 22:52:03.683961 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af6ee085-5d31-49a0-b9fd-f2776d6a372b-kubelet-dir\") pod \"csi-node-driver-llzsr\" (UID: \"af6ee085-5d31-49a0-b9fd-f2776d6a372b\") " pod="calico-system/csi-node-driver-llzsr" Jan 13 22:52:03.684253 kubelet[3231]: I0113 22:52:03.684201 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/af6ee085-5d31-49a0-b9fd-f2776d6a372b-varrun\") pod \"csi-node-driver-llzsr\" (UID: \"af6ee085-5d31-49a0-b9fd-f2776d6a372b\") " pod="calico-system/csi-node-driver-llzsr" Jan 13 22:52:03.684527 kubelet[3231]: I0113 22:52:03.684472 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdl6\" (UniqueName: \"kubernetes.io/projected/af6ee085-5d31-49a0-b9fd-f2776d6a372b-kube-api-access-dhdl6\") pod \"csi-node-driver-llzsr\" (UID: \"af6ee085-5d31-49a0-b9fd-f2776d6a372b\") " pod="calico-system/csi-node-driver-llzsr" Jan 13 22:52:03.685834 kubelet[3231]: I0113 22:52:03.685772 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af6ee085-5d31-49a0-b9fd-f2776d6a372b-registration-dir\") pod \"csi-node-driver-llzsr\" (UID: \"af6ee085-5d31-49a0-b9fd-f2776d6a372b\") " pod="calico-system/csi-node-driver-llzsr" Jan 13 22:52:03.686150 kubelet[3231]: I0113 22:52:03.686114 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af6ee085-5d31-49a0-b9fd-f2776d6a372b-socket-dir\") pod \"csi-node-driver-llzsr\" (UID: \"af6ee085-5d31-49a0-b9fd-f2776d6a372b\") " pod="calico-system/csi-node-driver-llzsr"
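Each VerifyControllerAttachedVolume record above identifies the volume by a UniqueName of the form <plugin>/<pod-UID>-<volume-name>. Because the pod UID itself contains dashes, the volume name cannot be recovered by splitting on "-"; a sketch that instead leans on the fixed 36-character UUID length (an assumption, though it holds for the kubelet-issued UIDs seen here):

    // uniquename_parse.go - pull the pod UID and volume name back out of a
    // reconciler UniqueName such as
    // "kubernetes.io/host-path/af6ee085-5d31-49a0-b9fd-f2776d6a372b-kubelet-dir".
    package main

    import (
        "fmt"
        "strings"
    )

    func splitUniqueName(unique string) (plugin, podUID, volume string, err error) {
        i := strings.LastIndex(unique, "/")
        // need at least a 36-char UUID, a dash, and a non-empty volume name
        if i < 0 || len(unique)-i-1 < 38 {
            return "", "", "", fmt.Errorf("malformed UniqueName %q", unique)
        }
        rest := unique[i+1:]
        return unique[:i], rest[:36], rest[37:], nil
    }

    func main() {
        p, uid, vol, _ := splitUniqueName(
            "kubernetes.io/host-path/af6ee085-5d31-49a0-b9fd-f2776d6a372b-kubelet-dir")
        fmt.Println(p, uid, vol) // kubernetes.io/host-path af6ee085-... kubelet-dir
    }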
Jan 13 22:52:03.760012 containerd[1800]: time="2025-01-13T22:52:03.759774002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b8976897-c2cv9,Uid:34e046e1-f3ce-4606-88c5-c4f8e43d9530,Namespace:calico-system,Attempt:0,}" Jan 13 22:52:03.771553 containerd[1800]: time="2025-01-13T22:52:03.771478090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:03.771553 containerd[1800]: time="2025-01-13T22:52:03.771507790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:03.771553 containerd[1800]: time="2025-01-13T22:52:03.771514914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:03.771668 containerd[1800]: time="2025-01-13T22:52:03.771555909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:03.773333 containerd[1800]: time="2025-01-13T22:52:03.773313236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fglsk,Uid:855e574d-312c-4866-be4d-01ec2f7ce420,Namespace:calico-system,Attempt:0,}" Jan 13 22:52:03.782277 containerd[1800]: time="2025-01-13T22:52:03.782211086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:03.782277 containerd[1800]: time="2025-01-13T22:52:03.782237369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:03.782486 containerd[1800]: time="2025-01-13T22:52:03.782245187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:03.782486 containerd[1800]: time="2025-01-13T22:52:03.782458450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
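The RunPodSandbox records for calico-typha-75b8976897-c2cv9 and calico-node-fglsk are containerd's side of kubelet CRI calls; the ttrpc plugin lines are the runc shim spinning up for each sandbox. A sketch (not kubelet's code) of issuing the same call directly with the cri-api client, against the conventional containerd socket path:

    // run_sandbox.go - minimal CRI RunPodSandbox call; the metadata is
    // copied from the calico-node-fglsk record above, everything else is
    // left at proto defaults for brevity.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "calico-node-fglsk",
                    Uid:       "855e574d-312c-4866-be4d-01ec2f7ce420",
                    Namespace: "calico-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        // containerd reports this as: returns sandbox id "acf219a8..."
        fmt.Println(resp.PodSandboxId)
    }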
Jan 13 22:52:03.790334 systemd[1]: Started cri-containerd-8f476683e7e1ec50d10309c5319a69333939fdb009973e1210c0b97d88aaf5ae.scope - libcontainer container 8f476683e7e1ec50d10309c5319a69333939fdb009973e1210c0b97d88aaf5ae. Jan 13 22:52:03.792160 systemd[1]: Started cri-containerd-acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80.scope - libcontainer container acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80. Jan 13 22:52:03.793799 kubelet[3231]: E0113 22:52:03.793789 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 22:52:03.793799 kubelet[3231]: W0113 22:52:03.793799 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 22:52:03.793865 kubelet[3231]: E0113 22:52:03.793811 3231 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 22:52:03.801445 containerd[1800]: time="2025-01-13T22:52:03.801419415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fglsk,Uid:855e574d-312c-4866-be4d-01ec2f7ce420,Namespace:calico-system,Attempt:0,} returns sandbox id \"acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80\"" Jan 13 22:52:03.802183 containerd[1800]: time="2025-01-13T22:52:03.802163879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 22:52:03.812480 containerd[1800]: time="2025-01-13T22:52:03.812433918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b8976897-c2cv9,Uid:34e046e1-f3ce-4606-88c5-c4f8e43d9530,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f476683e7e1ec50d10309c5319a69333939fdb009973e1210c0b97d88aaf5ae\"" Jan 13 22:52:04.842590 kubelet[3231]: E0113 22:52:04.842460 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llzsr" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" Jan 13 22:52:05.375054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075518802.mount: Deactivated successfully. Jan 13 22:52:05.436385 containerd[1800]: time="2025-01-13T22:52:05.436334765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:05.436595 containerd[1800]: time="2025-01-13T22:52:05.436525035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 22:52:05.436901 containerd[1800]: time="2025-01-13T22:52:05.436857850Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:05.437853 containerd[1800]: time="2025-01-13T22:52:05.437810863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:05.438220 containerd[1800]: time="2025-01-13T22:52:05.438201775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.636009905s" Jan 13 22:52:05.438243 containerd[1800]: time="2025-01-13T22:52:05.438219388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 22:52:05.438775 containerd[1800]: time="2025-01-13T22:52:05.438735263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 22:52:05.439368 containerd[1800]: time="2025-01-13T22:52:05.439322876Z" level=info msg="CreateContainer within sandbox \"acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 22:52:05.463144 containerd[1800]: 
time="2025-01-13T22:52:05.463099906Z" level=info msg="CreateContainer within sandbox \"acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf\"" Jan 13 22:52:05.463440 containerd[1800]: time="2025-01-13T22:52:05.463380692Z" level=info msg="StartContainer for \"c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf\"" Jan 13 22:52:05.487354 systemd[1]: Started cri-containerd-c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf.scope - libcontainer container c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf. Jan 13 22:52:05.503293 containerd[1800]: time="2025-01-13T22:52:05.503262956Z" level=info msg="StartContainer for \"c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf\" returns successfully" Jan 13 22:52:05.511270 systemd[1]: cri-containerd-c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf.scope: Deactivated successfully. Jan 13 22:52:05.590187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf-rootfs.mount: Deactivated successfully. Jan 13 22:52:05.759592 containerd[1800]: time="2025-01-13T22:52:05.759558092Z" level=info msg="shim disconnected" id=c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf namespace=k8s.io Jan 13 22:52:05.759592 containerd[1800]: time="2025-01-13T22:52:05.759589826Z" level=warning msg="cleaning up after shim disconnected" id=c4ca7af235297d0015c6ab1acdb958b4ce6acba18332e07dd5815f31232c7ccf namespace=k8s.io Jan 13 22:52:05.759592 containerd[1800]: time="2025-01-13T22:52:05.759595098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:52:06.841828 kubelet[3231]: E0113 22:52:06.841801 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llzsr" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" Jan 13 22:52:07.472665 containerd[1800]: time="2025-01-13T22:52:07.472609686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:07.472890 containerd[1800]: time="2025-01-13T22:52:07.472861125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 22:52:07.473213 containerd[1800]: time="2025-01-13T22:52:07.473194740Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:07.474191 containerd[1800]: time="2025-01-13T22:52:07.474145022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:07.474592 containerd[1800]: time="2025-01-13T22:52:07.474548810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size 
\"31343217\" in 2.03579901s" Jan 13 22:52:07.474592 containerd[1800]: time="2025-01-13T22:52:07.474566101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 22:52:07.475034 containerd[1800]: time="2025-01-13T22:52:07.475022511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 22:52:07.478173 containerd[1800]: time="2025-01-13T22:52:07.478152278Z" level=info msg="CreateContainer within sandbox \"8f476683e7e1ec50d10309c5319a69333939fdb009973e1210c0b97d88aaf5ae\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 22:52:07.482862 containerd[1800]: time="2025-01-13T22:52:07.482816249Z" level=info msg="CreateContainer within sandbox \"8f476683e7e1ec50d10309c5319a69333939fdb009973e1210c0b97d88aaf5ae\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2a43f5067e16f7e95deb8771621e1c9930179fca41e443993b0bcced33e63ef3\"" Jan 13 22:52:07.483078 containerd[1800]: time="2025-01-13T22:52:07.483065920Z" level=info msg="StartContainer for \"2a43f5067e16f7e95deb8771621e1c9930179fca41e443993b0bcced33e63ef3\"" Jan 13 22:52:07.503315 systemd[1]: Started cri-containerd-2a43f5067e16f7e95deb8771621e1c9930179fca41e443993b0bcced33e63ef3.scope - libcontainer container 2a43f5067e16f7e95deb8771621e1c9930179fca41e443993b0bcced33e63ef3. Jan 13 22:52:07.527666 containerd[1800]: time="2025-01-13T22:52:07.527644010Z" level=info msg="StartContainer for \"2a43f5067e16f7e95deb8771621e1c9930179fca41e443993b0bcced33e63ef3\" returns successfully" Jan 13 22:52:07.930285 kubelet[3231]: I0113 22:52:07.930140 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75b8976897-c2cv9" podStartSLOduration=1.268105419 podStartE2EDuration="4.930105267s" podCreationTimestamp="2025-01-13 22:52:03 +0000 UTC" firstStartedPulling="2025-01-13 22:52:03.812961785 +0000 UTC m=+22.023430938" lastFinishedPulling="2025-01-13 22:52:07.474961634 +0000 UTC m=+25.685430786" observedRunningTime="2025-01-13 22:52:07.92935319 +0000 UTC m=+26.139822408" watchObservedRunningTime="2025-01-13 22:52:07.930105267 +0000 UTC m=+26.140574471" Jan 13 22:52:08.842956 kubelet[3231]: E0113 22:52:08.842878 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llzsr" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" Jan 13 22:52:08.911320 kubelet[3231]: I0113 22:52:08.911241 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:52:10.842853 kubelet[3231]: E0113 22:52:10.842823 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llzsr" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" Jan 13 22:52:11.124144 containerd[1800]: time="2025-01-13T22:52:11.124056573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:11.124353 containerd[1800]: time="2025-01-13T22:52:11.124222179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes 
read=96154154" Jan 13 22:52:11.124609 containerd[1800]: time="2025-01-13T22:52:11.124560342Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:11.125808 containerd[1800]: time="2025-01-13T22:52:11.125765765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:11.126237 containerd[1800]: time="2025-01-13T22:52:11.126213764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.65117493s" Jan 13 22:52:11.126237 containerd[1800]: time="2025-01-13T22:52:11.126230752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 22:52:11.127263 containerd[1800]: time="2025-01-13T22:52:11.127250837Z" level=info msg="CreateContainer within sandbox \"acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 22:52:11.133338 containerd[1800]: time="2025-01-13T22:52:11.133296426Z" level=info msg="CreateContainer within sandbox \"acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c\"" Jan 13 22:52:11.133525 containerd[1800]: time="2025-01-13T22:52:11.133484652Z" level=info msg="StartContainer for \"b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c\"" Jan 13 22:52:11.163467 systemd[1]: Started cri-containerd-b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c.scope - libcontainer container b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c. Jan 13 22:52:11.176868 containerd[1800]: time="2025-01-13T22:52:11.176841080Z" level=info msg="StartContainer for \"b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c\" returns successfully" Jan 13 22:52:11.711898 systemd[1]: cri-containerd-b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c.scope: Deactivated successfully. Jan 13 22:52:11.722267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c-rootfs.mount: Deactivated successfully. 
Jan 13 22:52:11.808371 kubelet[3231]: I0113 22:52:11.808266 3231 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 22:52:11.850129 kubelet[3231]: I0113 22:52:11.850028 3231 topology_manager.go:215] "Topology Admit Handler" podUID="57287e2c-ee93-4725-90f0-5a85ad1e8a1d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qm4qb" Jan 13 22:52:11.851132 kubelet[3231]: I0113 22:52:11.850436 3231 topology_manager.go:215] "Topology Admit Handler" podUID="e4de7a75-1548-4187-ac18-e9507e8aad8c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wkl2k" Jan 13 22:52:11.851132 kubelet[3231]: I0113 22:52:11.850710 3231 topology_manager.go:215] "Topology Admit Handler" podUID="b321a1c9-994b-4ad5-b109-33a2a9e1a4fd" podNamespace="calico-system" podName="calico-kube-controllers-7fb548d8cc-jp5jw" Jan 13 22:52:11.851571 kubelet[3231]: I0113 22:52:11.851249 3231 topology_manager.go:215] "Topology Admit Handler" podUID="d95e8d7f-4fe7-40df-b458-d012e6c10560" podNamespace="calico-apiserver" podName="calico-apiserver-58f7bcd9bd-wwzzd" Jan 13 22:52:11.851717 kubelet[3231]: I0113 22:52:11.851601 3231 topology_manager.go:215] "Topology Admit Handler" podUID="1da9aa39-afba-437d-9d6b-f05365623329" podNamespace="calico-apiserver" podName="calico-apiserver-58f7bcd9bd-q8lwv" Jan 13 22:52:11.867701 systemd[1]: Created slice kubepods-burstable-pod57287e2c_ee93_4725_90f0_5a85ad1e8a1d.slice - libcontainer container kubepods-burstable-pod57287e2c_ee93_4725_90f0_5a85ad1e8a1d.slice. Jan 13 22:52:11.883110 systemd[1]: Created slice kubepods-burstable-pode4de7a75_1548_4187_ac18_e9507e8aad8c.slice - libcontainer container kubepods-burstable-pode4de7a75_1548_4187_ac18_e9507e8aad8c.slice. Jan 13 22:52:11.891213 systemd[1]: Created slice kubepods-besteffort-podb321a1c9_994b_4ad5_b109_33a2a9e1a4fd.slice - libcontainer container kubepods-besteffort-podb321a1c9_994b_4ad5_b109_33a2a9e1a4fd.slice. Jan 13 22:52:11.894796 systemd[1]: Created slice kubepods-besteffort-podd95e8d7f_4fe7_40df_b458_d012e6c10560.slice - libcontainer container kubepods-besteffort-podd95e8d7f_4fe7_40df_b458_d012e6c10560.slice. Jan 13 22:52:11.898025 systemd[1]: Created slice kubepods-besteffort-pod1da9aa39_afba_437d_9d6b_f05365623329.slice - libcontainer container kubepods-besteffort-pod1da9aa39_afba_437d_9d6b_f05365623329.slice. 
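The Created slice records show kubelet's systemd cgroup driver naming pod cgroups from the QoS class and the pod UID, with the UID's dashes mapped to underscores because "-" is systemd's slice-hierarchy separator. A sketch of that mapping (the helper name is illustrative, and the guaranteed-class case is an assumption, since only burstable and besteffort pods appear above):

    // slice_name.go - reproduce the kubepods-*.slice names from the records
    // above, e.g. kubepods-burstable-pod57287e2c_ee93_4725_90f0_5a85ad1e8a1d.slice.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        if qosClass == "" {
            // guaranteed pods sit directly under kubepods.slice (assumed)
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        fmt.Println(podSlice("burstable", "57287e2c-ee93-4725-90f0-5a85ad1e8a1d"))
        fmt.Println(podSlice("besteffort", "b321a1c9-994b-4ad5-b109-33a2a9e1a4fd"))
    }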
Jan 13 22:52:12.045344 kubelet[3231]: I0113 22:52:12.045022 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4h4s\" (UniqueName: \"kubernetes.io/projected/b321a1c9-994b-4ad5-b109-33a2a9e1a4fd-kube-api-access-x4h4s\") pod \"calico-kube-controllers-7fb548d8cc-jp5jw\" (UID: \"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd\") " pod="calico-system/calico-kube-controllers-7fb548d8cc-jp5jw" Jan 13 22:52:12.045344 kubelet[3231]: I0113 22:52:12.045138 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4ck\" (UniqueName: \"kubernetes.io/projected/d95e8d7f-4fe7-40df-b458-d012e6c10560-kube-api-access-kt4ck\") pod \"calico-apiserver-58f7bcd9bd-wwzzd\" (UID: \"d95e8d7f-4fe7-40df-b458-d012e6c10560\") " pod="calico-apiserver/calico-apiserver-58f7bcd9bd-wwzzd" Jan 13 22:52:12.045966 kubelet[3231]: I0113 22:52:12.045441 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k92ns\" (UniqueName: \"kubernetes.io/projected/57287e2c-ee93-4725-90f0-5a85ad1e8a1d-kube-api-access-k92ns\") pod \"coredns-7db6d8ff4d-qm4qb\" (UID: \"57287e2c-ee93-4725-90f0-5a85ad1e8a1d\") " pod="kube-system/coredns-7db6d8ff4d-qm4qb" Jan 13 22:52:12.045966 kubelet[3231]: I0113 22:52:12.045538 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkdnj\" (UniqueName: \"kubernetes.io/projected/e4de7a75-1548-4187-ac18-e9507e8aad8c-kube-api-access-dkdnj\") pod \"coredns-7db6d8ff4d-wkl2k\" (UID: \"e4de7a75-1548-4187-ac18-e9507e8aad8c\") " pod="kube-system/coredns-7db6d8ff4d-wkl2k" Jan 13 22:52:12.045966 kubelet[3231]: I0113 22:52:12.045664 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w7h7\" (UniqueName: \"kubernetes.io/projected/1da9aa39-afba-437d-9d6b-f05365623329-kube-api-access-5w7h7\") pod \"calico-apiserver-58f7bcd9bd-q8lwv\" (UID: \"1da9aa39-afba-437d-9d6b-f05365623329\") " pod="calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv" Jan 13 22:52:12.045966 kubelet[3231]: I0113 22:52:12.045730 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b321a1c9-994b-4ad5-b109-33a2a9e1a4fd-tigera-ca-bundle\") pod \"calico-kube-controllers-7fb548d8cc-jp5jw\" (UID: \"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd\") " pod="calico-system/calico-kube-controllers-7fb548d8cc-jp5jw" Jan 13 22:52:12.045966 kubelet[3231]: I0113 22:52:12.045821 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d95e8d7f-4fe7-40df-b458-d012e6c10560-calico-apiserver-certs\") pod \"calico-apiserver-58f7bcd9bd-wwzzd\" (UID: \"d95e8d7f-4fe7-40df-b458-d012e6c10560\") " pod="calico-apiserver/calico-apiserver-58f7bcd9bd-wwzzd" Jan 13 22:52:12.047090 kubelet[3231]: I0113 22:52:12.045890 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4de7a75-1548-4187-ac18-e9507e8aad8c-config-volume\") pod \"coredns-7db6d8ff4d-wkl2k\" (UID: \"e4de7a75-1548-4187-ac18-e9507e8aad8c\") " pod="kube-system/coredns-7db6d8ff4d-wkl2k" Jan 13 22:52:12.047090 kubelet[3231]: I0113 22:52:12.045944 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57287e2c-ee93-4725-90f0-5a85ad1e8a1d-config-volume\") pod \"coredns-7db6d8ff4d-qm4qb\" (UID: \"57287e2c-ee93-4725-90f0-5a85ad1e8a1d\") " pod="kube-system/coredns-7db6d8ff4d-qm4qb" Jan 13 22:52:12.047090 kubelet[3231]: I0113 22:52:12.045991 3231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1da9aa39-afba-437d-9d6b-f05365623329-calico-apiserver-certs\") pod \"calico-apiserver-58f7bcd9bd-q8lwv\" (UID: \"1da9aa39-afba-437d-9d6b-f05365623329\") " pod="calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv" Jan 13 22:52:12.176858 containerd[1800]: time="2025-01-13T22:52:12.176758886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qm4qb,Uid:57287e2c-ee93-4725-90f0-5a85ad1e8a1d,Namespace:kube-system,Attempt:0,}" Jan 13 22:52:12.191749 containerd[1800]: time="2025-01-13T22:52:12.191663941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wkl2k,Uid:e4de7a75-1548-4187-ac18-e9507e8aad8c,Namespace:kube-system,Attempt:0,}" Jan 13 22:52:12.193661 containerd[1800]: time="2025-01-13T22:52:12.193576108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb548d8cc-jp5jw,Uid:b321a1c9-994b-4ad5-b109-33a2a9e1a4fd,Namespace:calico-system,Attempt:0,}" Jan 13 22:52:12.197632 containerd[1800]: time="2025-01-13T22:52:12.197550342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-wwzzd,Uid:d95e8d7f-4fe7-40df-b458-d012e6c10560,Namespace:calico-apiserver,Attempt:0,}" Jan 13 22:52:12.200735 containerd[1800]: time="2025-01-13T22:52:12.200641571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-q8lwv,Uid:1da9aa39-afba-437d-9d6b-f05365623329,Namespace:calico-apiserver,Attempt:0,}" Jan 13 22:52:12.376897 containerd[1800]: time="2025-01-13T22:52:12.376749166Z" level=info msg="shim disconnected" id=b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c namespace=k8s.io Jan 13 22:52:12.376897 containerd[1800]: time="2025-01-13T22:52:12.376813068Z" level=warning msg="cleaning up after shim disconnected" id=b25f757d2ae828f7a97a93cc903d18ca9d8af1352de737fdb49e252c865f3f8c namespace=k8s.io Jan 13 22:52:12.376897 containerd[1800]: time="2025-01-13T22:52:12.376819027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 22:52:12.416370 containerd[1800]: time="2025-01-13T22:52:12.416302627Z" level=error msg="Failed to destroy network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.416581 containerd[1800]: time="2025-01-13T22:52:12.416562468Z" level=error msg="encountered an error cleaning up failed sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.416620 containerd[1800]: time="2025-01-13T22:52:12.416603572Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wkl2k,Uid:e4de7a75-1548-4187-ac18-e9507e8aad8c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.416789 kubelet[3231]: E0113 22:52:12.416761 3231 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.416822 kubelet[3231]: E0113 22:52:12.416813 3231 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wkl2k" Jan 13 22:52:12.416842 kubelet[3231]: E0113 22:52:12.416826 3231 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wkl2k" Jan 13 22:52:12.416865 kubelet[3231]: E0113 22:52:12.416853 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wkl2k_kube-system(e4de7a75-1548-4187-ac18-e9507e8aad8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wkl2k_kube-system(e4de7a75-1548-4187-ac18-e9507e8aad8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wkl2k" podUID="e4de7a75-1548-4187-ac18-e9507e8aad8c" Jan 13 22:52:12.418097 containerd[1800]: time="2025-01-13T22:52:12.418073256Z" level=error msg="Failed to destroy network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.418272 containerd[1800]: time="2025-01-13T22:52:12.418257638Z" level=error msg="encountered an error cleaning up failed sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.418305 containerd[1800]: 
time="2025-01-13T22:52:12.418294088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qm4qb,Uid:57287e2c-ee93-4725-90f0-5a85ad1e8a1d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.418460 kubelet[3231]: E0113 22:52:12.418441 3231 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.418497 kubelet[3231]: E0113 22:52:12.418478 3231 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qm4qb" Jan 13 22:52:12.418497 kubelet[3231]: E0113 22:52:12.418491 3231 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qm4qb" Jan 13 22:52:12.418534 kubelet[3231]: E0113 22:52:12.418514 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qm4qb_kube-system(57287e2c-ee93-4725-90f0-5a85ad1e8a1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qm4qb_kube-system(57287e2c-ee93-4725-90f0-5a85ad1e8a1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qm4qb" podUID="57287e2c-ee93-4725-90f0-5a85ad1e8a1d" Jan 13 22:52:12.426158 containerd[1800]: time="2025-01-13T22:52:12.426123798Z" level=error msg="Failed to destroy network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426251 containerd[1800]: time="2025-01-13T22:52:12.426163321Z" level=error msg="Failed to destroy network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426316 
containerd[1800]: time="2025-01-13T22:52:12.426296846Z" level=error msg="Failed to destroy network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426354 containerd[1800]: time="2025-01-13T22:52:12.426321972Z" level=error msg="encountered an error cleaning up failed sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426354 containerd[1800]: time="2025-01-13T22:52:12.426331236Z" level=error msg="encountered an error cleaning up failed sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426418 containerd[1800]: time="2025-01-13T22:52:12.426350983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-wwzzd,Uid:d95e8d7f-4fe7-40df-b458-d012e6c10560,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426418 containerd[1800]: time="2025-01-13T22:52:12.426356330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-q8lwv,Uid:1da9aa39-afba-437d-9d6b-f05365623329,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426533 containerd[1800]: time="2025-01-13T22:52:12.426476687Z" level=error msg="encountered an error cleaning up failed sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426533 containerd[1800]: time="2025-01-13T22:52:12.426507965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb548d8cc-jp5jw,Uid:b321a1c9-994b-4ad5-b109-33a2a9e1a4fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426595 kubelet[3231]: E0113 22:52:12.426469 3231 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426595 kubelet[3231]: E0113 22:52:12.426493 3231 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426595 kubelet[3231]: E0113 22:52:12.426506 3231 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv" Jan 13 22:52:12.426595 kubelet[3231]: E0113 22:52:12.426515 3231 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-wwzzd" Jan 13 22:52:12.426736 kubelet[3231]: E0113 22:52:12.426519 3231 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv" Jan 13 22:52:12.426736 kubelet[3231]: E0113 22:52:12.426525 3231 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-wwzzd" Jan 13 22:52:12.426736 kubelet[3231]: E0113 22:52:12.426546 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58f7bcd9bd-wwzzd_calico-apiserver(d95e8d7f-4fe7-40df-b458-d012e6c10560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58f7bcd9bd-wwzzd_calico-apiserver(d95e8d7f-4fe7-40df-b458-d012e6c10560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-wwzzd" 
podUID="d95e8d7f-4fe7-40df-b458-d012e6c10560" Jan 13 22:52:12.426822 kubelet[3231]: E0113 22:52:12.426546 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58f7bcd9bd-q8lwv_calico-apiserver(1da9aa39-afba-437d-9d6b-f05365623329)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58f7bcd9bd-q8lwv_calico-apiserver(1da9aa39-afba-437d-9d6b-f05365623329)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv" podUID="1da9aa39-afba-437d-9d6b-f05365623329" Jan 13 22:52:12.426822 kubelet[3231]: E0113 22:52:12.426584 3231 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.426822 kubelet[3231]: E0113 22:52:12.426605 3231 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb548d8cc-jp5jw" Jan 13 22:52:12.426891 kubelet[3231]: E0113 22:52:12.426616 3231 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fb548d8cc-jp5jw" Jan 13 22:52:12.426891 kubelet[3231]: E0113 22:52:12.426633 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fb548d8cc-jp5jw_calico-system(b321a1c9-994b-4ad5-b109-33a2a9e1a4fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fb548d8cc-jp5jw_calico-system(b321a1c9-994b-4ad5-b109-33a2a9e1a4fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fb548d8cc-jp5jw" podUID="b321a1c9-994b-4ad5-b109-33a2a9e1a4fd" Jan 13 22:52:12.858520 systemd[1]: Created slice kubepods-besteffort-podaf6ee085_5d31_49a0_b9fd_f2776d6a372b.slice - libcontainer container kubepods-besteffort-podaf6ee085_5d31_49a0_b9fd_f2776d6a372b.slice. 
Jan 13 22:52:12.864123 containerd[1800]: time="2025-01-13T22:52:12.864029866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llzsr,Uid:af6ee085-5d31-49a0-b9fd-f2776d6a372b,Namespace:calico-system,Attempt:0,}" Jan 13 22:52:12.898994 containerd[1800]: time="2025-01-13T22:52:12.898965451Z" level=error msg="Failed to destroy network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.899176 containerd[1800]: time="2025-01-13T22:52:12.899158217Z" level=error msg="encountered an error cleaning up failed sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.899244 containerd[1800]: time="2025-01-13T22:52:12.899212709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llzsr,Uid:af6ee085-5d31-49a0-b9fd-f2776d6a372b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.899420 kubelet[3231]: E0113 22:52:12.899383 3231 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.899666 kubelet[3231]: E0113 22:52:12.899435 3231 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-llzsr" Jan 13 22:52:12.899666 kubelet[3231]: E0113 22:52:12.899447 3231 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-llzsr" Jan 13 22:52:12.899666 kubelet[3231]: E0113 22:52:12.899488 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-llzsr_calico-system(af6ee085-5d31-49a0-b9fd-f2776d6a372b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-llzsr_calico-system(af6ee085-5d31-49a0-b9fd-f2776d6a372b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-llzsr" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" Jan 13 22:52:12.922062 kubelet[3231]: I0113 22:52:12.922023 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:12.922436 containerd[1800]: time="2025-01-13T22:52:12.922417627Z" level=info msg="StopPodSandbox for \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\"" Jan 13 22:52:12.922487 kubelet[3231]: I0113 22:52:12.922443 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:12.922553 containerd[1800]: time="2025-01-13T22:52:12.922541279Z" level=info msg="Ensure that sandbox be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c in task-service has been cleanup successfully" Jan 13 22:52:12.922669 containerd[1800]: time="2025-01-13T22:52:12.922653126Z" level=info msg="StopPodSandbox for \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\"" Jan 13 22:52:12.922749 containerd[1800]: time="2025-01-13T22:52:12.922736239Z" level=info msg="Ensure that sandbox 4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168 in task-service has been cleanup successfully" Jan 13 22:52:12.922867 kubelet[3231]: I0113 22:52:12.922841 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:12.923140 containerd[1800]: time="2025-01-13T22:52:12.923122616Z" level=info msg="StopPodSandbox for \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\"" Jan 13 22:52:12.923253 containerd[1800]: time="2025-01-13T22:52:12.923240012Z" level=info msg="Ensure that sandbox 82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d in task-service has been cleanup successfully" Jan 13 22:52:12.924331 kubelet[3231]: I0113 22:52:12.924311 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:12.924456 containerd[1800]: time="2025-01-13T22:52:12.924437665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 22:52:12.924664 containerd[1800]: time="2025-01-13T22:52:12.924647092Z" level=info msg="StopPodSandbox for \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\"" Jan 13 22:52:12.924823 containerd[1800]: time="2025-01-13T22:52:12.924811741Z" level=info msg="Ensure that sandbox 015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4 in task-service has been cleanup successfully" Jan 13 22:52:12.924930 kubelet[3231]: I0113 22:52:12.924914 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:12.925265 containerd[1800]: time="2025-01-13T22:52:12.925245123Z" level=info msg="StopPodSandbox for \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\"" Jan 13 22:52:12.925398 containerd[1800]: time="2025-01-13T22:52:12.925383389Z" level=info msg="Ensure that sandbox 
0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5 in task-service has been cleanup successfully" Jan 13 22:52:12.925535 kubelet[3231]: I0113 22:52:12.925522 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:12.925883 containerd[1800]: time="2025-01-13T22:52:12.925861777Z" level=info msg="StopPodSandbox for \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\"" Jan 13 22:52:12.925991 containerd[1800]: time="2025-01-13T22:52:12.925981669Z" level=info msg="Ensure that sandbox d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9 in task-service has been cleanup successfully" Jan 13 22:52:12.940821 containerd[1800]: time="2025-01-13T22:52:12.940792379Z" level=error msg="StopPodSandbox for \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\" failed" error="failed to destroy network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.940926 containerd[1800]: time="2025-01-13T22:52:12.940845473Z" level=error msg="StopPodSandbox for \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\" failed" error="failed to destroy network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.941020 kubelet[3231]: E0113 22:52:12.940998 3231 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:12.941063 kubelet[3231]: E0113 22:52:12.941036 3231 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168"} Jan 13 22:52:12.941092 kubelet[3231]: E0113 22:52:12.941074 3231 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d95e8d7f-4fe7-40df-b458-d012e6c10560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:52:12.941144 kubelet[3231]: E0113 22:52:12.941089 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d95e8d7f-4fe7-40df-b458-d012e6c10560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-wwzzd" podUID="d95e8d7f-4fe7-40df-b458-d012e6c10560" Jan 13 22:52:12.941144 kubelet[3231]: E0113 22:52:12.940998 3231 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:12.941144 kubelet[3231]: E0113 22:52:12.941108 3231 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c"} Jan 13 22:52:12.941144 kubelet[3231]: E0113 22:52:12.941125 3231 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1da9aa39-afba-437d-9d6b-f05365623329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:52:12.941263 kubelet[3231]: E0113 22:52:12.941147 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1da9aa39-afba-437d-9d6b-f05365623329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv" podUID="1da9aa39-afba-437d-9d6b-f05365623329" Jan 13 22:52:12.941354 containerd[1800]: time="2025-01-13T22:52:12.941333527Z" level=error msg="StopPodSandbox for \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\" failed" error="failed to destroy network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.941433 kubelet[3231]: E0113 22:52:12.941421 3231 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:12.941461 kubelet[3231]: E0113 22:52:12.941434 3231 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d"} Jan 13 22:52:12.941461 kubelet[3231]: E0113 22:52:12.941448 3231 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:52:12.941461 kubelet[3231]: E0113 22:52:12.941457 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fb548d8cc-jp5jw" podUID="b321a1c9-994b-4ad5-b109-33a2a9e1a4fd" Jan 13 22:52:12.942107 containerd[1800]: time="2025-01-13T22:52:12.942092808Z" level=error msg="StopPodSandbox for \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\" failed" error="failed to destroy network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.942185 kubelet[3231]: E0113 22:52:12.942163 3231 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:12.942212 kubelet[3231]: E0113 22:52:12.942194 3231 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5"} Jan 13 22:52:12.942230 kubelet[3231]: E0113 22:52:12.942216 3231 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af6ee085-5d31-49a0-b9fd-f2776d6a372b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:52:12.942264 kubelet[3231]: E0113 22:52:12.942228 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af6ee085-5d31-49a0-b9fd-f2776d6a372b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-llzsr" podUID="af6ee085-5d31-49a0-b9fd-f2776d6a372b" Jan 13 22:52:12.942473 
containerd[1800]: time="2025-01-13T22:52:12.942458375Z" level=error msg="StopPodSandbox for \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\" failed" error="failed to destroy network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.942540 kubelet[3231]: E0113 22:52:12.942526 3231 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:12.942566 kubelet[3231]: E0113 22:52:12.942543 3231 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4"} Jan 13 22:52:12.942566 kubelet[3231]: E0113 22:52:12.942557 3231 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4de7a75-1548-4187-ac18-e9507e8aad8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:52:12.942608 kubelet[3231]: E0113 22:52:12.942570 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4de7a75-1548-4187-ac18-e9507e8aad8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wkl2k" podUID="e4de7a75-1548-4187-ac18-e9507e8aad8c" Jan 13 22:52:12.942944 containerd[1800]: time="2025-01-13T22:52:12.942927130Z" level=error msg="StopPodSandbox for \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\" failed" error="failed to destroy network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 22:52:12.943012 kubelet[3231]: E0113 22:52:12.942996 3231 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:12.943032 kubelet[3231]: E0113 22:52:12.943018 3231 kuberuntime_manager.go:1375] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9"} Jan 13 22:52:12.943048 kubelet[3231]: E0113 22:52:12.943032 3231 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57287e2c-ee93-4725-90f0-5a85ad1e8a1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 22:52:12.943048 kubelet[3231]: E0113 22:52:12.943042 3231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57287e2c-ee93-4725-90f0-5a85ad1e8a1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qm4qb" podUID="57287e2c-ee93-4725-90f0-5a85ad1e8a1d" Jan 13 22:52:13.160738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c-shm.mount: Deactivated successfully. Jan 13 22:52:13.161026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168-shm.mount: Deactivated successfully. Jan 13 22:52:13.161285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d-shm.mount: Deactivated successfully. Jan 13 22:52:13.161476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9-shm.mount: Deactivated successfully. Jan 13 22:52:13.161645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4-shm.mount: Deactivated successfully. Jan 13 22:52:17.898975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314539967.mount: Deactivated successfully. 
Jan 13 22:52:17.920209 containerd[1800]: time="2025-01-13T22:52:17.920185670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:17.920445 containerd[1800]: time="2025-01-13T22:52:17.920401564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 22:52:17.920769 containerd[1800]: time="2025-01-13T22:52:17.920728368Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:17.938223 containerd[1800]: time="2025-01-13T22:52:17.938209506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:17.938698 containerd[1800]: time="2025-01-13T22:52:17.938492308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.014029489s" Jan 13 22:52:17.938738 containerd[1800]: time="2025-01-13T22:52:17.938704969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 22:52:17.942845 containerd[1800]: time="2025-01-13T22:52:17.942821328Z" level=info msg="CreateContainer within sandbox \"acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 22:52:17.948358 containerd[1800]: time="2025-01-13T22:52:17.948341454Z" level=info msg="CreateContainer within sandbox \"acf219a8b942445e67b85d371981448fb8ef2bef4c7d1aab25bc3b8cbaba2e80\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"caf15aec52abf6f6b7b5b242ed3f62063bdcbc9a9d166184afcee596751dc833\"" Jan 13 22:52:17.948598 containerd[1800]: time="2025-01-13T22:52:17.948582964Z" level=info msg="StartContainer for \"caf15aec52abf6f6b7b5b242ed3f62063bdcbc9a9d166184afcee596751dc833\"" Jan 13 22:52:17.977474 systemd[1]: Started cri-containerd-caf15aec52abf6f6b7b5b242ed3f62063bdcbc9a9d166184afcee596751dc833.scope - libcontainer container caf15aec52abf6f6b7b5b242ed3f62063bdcbc9a9d166184afcee596751dc833. Jan 13 22:52:17.999061 containerd[1800]: time="2025-01-13T22:52:17.999025363Z" level=info msg="StartContainer for \"caf15aec52abf6f6b7b5b242ed3f62063bdcbc9a9d166184afcee596751dc833\" returns successfully" Jan 13 22:52:18.062223 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 22:52:18.062280 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
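Two things land in quick succession above: the calico/node image pull completes and the container starts, after which the kernel loads the WireGuard module (which Calico can use for pod-traffic encryption when that is enabled). The pull entry reports size 142741872 bytes in 5.014029489s; a small worked example of the throughput those two figures imply:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the "Pulled image" entry above; the nearby
	// "bytes read=142742010" counter is slightly larger than the image size.
	const imageBytes = 142741872
	pullTime, err := time.ParseDuration("5.014029489s")
	if err != nil {
		panic(err)
	}

	mb := float64(imageBytes) / 1e6
	fmt.Printf("size: %.1f MB, effective pull rate: %.1f MB/s\n", mb, mb/pullTime.Seconds())
	// -> size: 142.7 MB, effective pull rate: 28.5 MB/s
}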
Jan 13 22:52:18.945285 kubelet[3231]: I0113 22:52:18.945242 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fglsk" podStartSLOduration=1.807830625 podStartE2EDuration="15.94522872s" podCreationTimestamp="2025-01-13 22:52:03 +0000 UTC" firstStartedPulling="2025-01-13 22:52:03.802041966 +0000 UTC m=+22.012511119" lastFinishedPulling="2025-01-13 22:52:17.939440061 +0000 UTC m=+36.149909214" observedRunningTime="2025-01-13 22:52:18.944748163 +0000 UTC m=+37.155217329" watchObservedRunningTime="2025-01-13 22:52:18.94522872 +0000 UTC m=+37.155697879" Jan 13 22:52:19.937568 kubelet[3231]: I0113 22:52:19.937464 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:52:24.557104 kubelet[3231]: I0113 22:52:24.556977 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:52:25.505229 kernel: bpftool[5080]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 22:52:25.650846 systemd-networkd[1602]: vxlan.calico: Link UP Jan 13 22:52:25.650849 systemd-networkd[1602]: vxlan.calico: Gained carrier Jan 13 22:52:25.842213 containerd[1800]: time="2025-01-13T22:52:25.842147186Z" level=info msg="StopPodSandbox for \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\"" Jan 13 22:52:25.842213 containerd[1800]: time="2025-01-13T22:52:25.842165072Z" level=info msg="StopPodSandbox for \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\"" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.863 [INFO][5203] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.863 [INFO][5203] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" iface="eth0" netns="/var/run/netns/cni-0f9520a2-b8b1-7d69-1e73-7e0d5a9c2526" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.863 [INFO][5203] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" iface="eth0" netns="/var/run/netns/cni-0f9520a2-b8b1-7d69-1e73-7e0d5a9c2526" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.863 [INFO][5203] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" iface="eth0" netns="/var/run/netns/cni-0f9520a2-b8b1-7d69-1e73-7e0d5a9c2526" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.863 [INFO][5203] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.863 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.875 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.875 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.875 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.878 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.878 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.879 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:25.881381 containerd[1800]: 2025-01-13 22:52:25.880 [INFO][5203] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:25.881710 containerd[1800]: time="2025-01-13T22:52:25.881431426Z" level=info msg="TearDown network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\" successfully" Jan 13 22:52:25.881710 containerd[1800]: time="2025-01-13T22:52:25.881450352Z" level=info msg="StopPodSandbox for \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\" returns successfully" Jan 13 22:52:25.881896 containerd[1800]: time="2025-01-13T22:52:25.881853641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-q8lwv,Uid:1da9aa39-afba-437d-9d6b-f05365623329,Namespace:calico-apiserver,Attempt:1,}" Jan 13 22:52:25.883000 systemd[1]: run-netns-cni\x2d0f9520a2\x2db8b1\x2d7d69\x2d1e73\x2d7e0d5a9c2526.mount: Deactivated successfully. Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.863 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.864 [INFO][5202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" iface="eth0" netns="/var/run/netns/cni-47a2868d-fb49-e308-9f6a-5760a8efb441" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.864 [INFO][5202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" iface="eth0" netns="/var/run/netns/cni-47a2868d-fb49-e308-9f6a-5760a8efb441" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.864 [INFO][5202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" iface="eth0" netns="/var/run/netns/cni-47a2868d-fb49-e308-9f6a-5760a8efb441" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.864 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.864 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.875 [INFO][5250] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.875 [INFO][5250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.879 [INFO][5250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.882 [WARNING][5250] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.882 [INFO][5250] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.882 [INFO][5250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:25.884312 containerd[1800]: 2025-01-13 22:52:25.883 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:25.884693 containerd[1800]: time="2025-01-13T22:52:25.884394644Z" level=info msg="TearDown network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\" successfully" Jan 13 22:52:25.884693 containerd[1800]: time="2025-01-13T22:52:25.884406095Z" level=info msg="StopPodSandbox for \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\" returns successfully" Jan 13 22:52:25.884779 containerd[1800]: time="2025-01-13T22:52:25.884765442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb548d8cc-jp5jw,Uid:b321a1c9-994b-4ad5-b109-33a2a9e1a4fd,Namespace:calico-system,Attempt:1,}" Jan 13 22:52:25.885708 systemd[1]: run-netns-cni\x2d47a2868d\x2dfb49\x2de308\x2d9f6a\x2d5760a8efb441.mount: Deactivated successfully. Jan 13 22:52:25.940330 systemd-networkd[1602]: calia4a2c1e9b7d: Link UP Jan 13 22:52:25.940458 systemd-networkd[1602]: calia4a2c1e9b7d: Gained carrier Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.906 [INFO][5275] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0 calico-apiserver-58f7bcd9bd- calico-apiserver 1da9aa39-afba-437d-9d6b-f05365623329 772 0 2025-01-13 22:52:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58f7bcd9bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-66cd838664 calico-apiserver-58f7bcd9bd-q8lwv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia4a2c1e9b7d [] []}} ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.906 [INFO][5275] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.921 [INFO][5317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" HandleID="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.926 [INFO][5317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" HandleID="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cb080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-66cd838664", "pod":"calico-apiserver-58f7bcd9bd-q8lwv", "timestamp":"2025-01-13 22:52:25.921141396 +0000 UTC"}, Hostname:"ci-4081.3.0-a-66cd838664", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.926 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.926 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.926 [INFO][5317] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-66cd838664' Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.927 [INFO][5317] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.929 [INFO][5317] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.931 [INFO][5317] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.932 [INFO][5317] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.933 [INFO][5317] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.933 [INFO][5317] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.934 [INFO][5317] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7 Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.936 [INFO][5317] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.938 [INFO][5317] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.193/26] block=192.168.92.192/26 handle="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.938 [INFO][5317] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.193/26] handle="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.938 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
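The ipam trace above is the address-assignment half of the successful retry: the host already holds an affinity for block 192.168.92.192/26, the block loads, and a single address is claimed from it. A short check of those numbers using Go's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values from the ipam/ipam.go entries above.
	block := netip.MustParsePrefix("192.168.92.192/26")
	addr := netip.MustParseAddr("192.168.92.193")

	fmt.Println("addresses in block:", 1<<(32-block.Bits()))        // 64
	fmt.Println("assigned address inside block:", block.Contains(addr)) // true
}

The claimed address sits inside the host's affine /26, so no new block had to be allocated — matching the "Affinity is confirmed and block has been loaded" line.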
Jan 13 22:52:25.945150 containerd[1800]: 2025-01-13 22:52:25.938 [INFO][5317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.193/26] IPv6=[] ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" HandleID="k8s-pod-network.6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.945746 containerd[1800]: 2025-01-13 22:52:25.939 [INFO][5275] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1da9aa39-afba-437d-9d6b-f05365623329", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"", Pod:"calico-apiserver-58f7bcd9bd-q8lwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4a2c1e9b7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:25.945746 containerd[1800]: 2025-01-13 22:52:25.939 [INFO][5275] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.193/32] ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.945746 containerd[1800]: 2025-01-13 22:52:25.939 [INFO][5275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4a2c1e9b7d ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.945746 containerd[1800]: 2025-01-13 22:52:25.940 [INFO][5275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.945746 containerd[1800]: 2025-01-13 22:52:25.940 [INFO][5275] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1da9aa39-afba-437d-9d6b-f05365623329", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7", Pod:"calico-apiserver-58f7bcd9bd-q8lwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4a2c1e9b7d", MAC:"2a:bc:18:87:6d:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:25.945746 containerd[1800]: 2025-01-13 22:52:25.944 [INFO][5275] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-q8lwv" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:25.954198 systemd-networkd[1602]: cali7ef15327428: Link UP Jan 13 22:52:25.954326 systemd-networkd[1602]: cali7ef15327428: Gained carrier Jan 13 22:52:25.954837 containerd[1800]: time="2025-01-13T22:52:25.954619984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:25.954837 containerd[1800]: time="2025-01-13T22:52:25.954830482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:25.954918 containerd[1800]: time="2025-01-13T22:52:25.954838389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:25.954918 containerd[1800]: time="2025-01-13T22:52:25.954886578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.906 [INFO][5285] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0 calico-kube-controllers-7fb548d8cc- calico-system b321a1c9-994b-4ad5-b109-33a2a9e1a4fd 773 0 2025-01-13 22:52:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fb548d8cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-66cd838664 calico-kube-controllers-7fb548d8cc-jp5jw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7ef15327428 [] []}} ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.906 [INFO][5285] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.921 [INFO][5318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" HandleID="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.927 [INFO][5318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" HandleID="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000719430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-66cd838664", "pod":"calico-kube-controllers-7fb548d8cc-jp5jw", "timestamp":"2025-01-13 22:52:25.921950996 +0000 UTC"}, Hostname:"ci-4081.3.0-a-66cd838664", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.927 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.938 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.938 [INFO][5318] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-66cd838664' Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.939 [INFO][5318] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.941 [INFO][5318] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.944 [INFO][5318] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.945 [INFO][5318] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.946 [INFO][5318] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.946 [INFO][5318] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.947 [INFO][5318] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981 Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.949 [INFO][5318] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.952 [INFO][5318] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.194/26] block=192.168.92.192/26 handle="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.952 [INFO][5318] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.194/26] handle="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.952 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
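[Annotation] Note the interleaving above: handler [5318] acquires the host-wide IPAM lock at 22:52:25.938, the same instant [5317] releases it, so concurrent CNI ADD calls on one node serialize around the shared block. A sketch of that serialization, assuming a simple in-process mutex (Calico's real lock coordinates across processes; names here are illustrative):

    import itertools
    import threading

    host_wide_ipam_lock = threading.Lock()
    next_host_octet = itertools.count(193)   # next free host in 192.168.92.192/26

    def cni_add(handler_id: str) -> None:
        with host_wide_ipam_lock:            # "About to acquire host-wide IPAM lock."
            ip = f"192.168.92.{next(next_host_octet)}/26"
            print(f"[{handler_id}] claimed {ip}")
        # leaving the with-block releases the lock; the next handler proceeds

    threads = [threading.Thread(target=cni_add, args=(h,)) for h in ("5317", "5318")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()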
Jan 13 22:52:25.960354 containerd[1800]: 2025-01-13 22:52:25.952 [INFO][5318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.194/26] IPv6=[] ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" HandleID="k8s-pod-network.4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.960823 containerd[1800]: 2025-01-13 22:52:25.953 [INFO][5285] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0", GenerateName:"calico-kube-controllers-7fb548d8cc-", Namespace:"calico-system", SelfLink:"", UID:"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb548d8cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"", Pod:"calico-kube-controllers-7fb548d8cc-jp5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ef15327428", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:25.960823 containerd[1800]: 2025-01-13 22:52:25.953 [INFO][5285] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.194/32] ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.960823 containerd[1800]: 2025-01-13 22:52:25.953 [INFO][5285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ef15327428 ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.960823 containerd[1800]: 2025-01-13 22:52:25.954 [INFO][5285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.960823 
containerd[1800]: 2025-01-13 22:52:25.954 [INFO][5285] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0", GenerateName:"calico-kube-controllers-7fb548d8cc-", Namespace:"calico-system", SelfLink:"", UID:"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb548d8cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981", Pod:"calico-kube-controllers-7fb548d8cc-jp5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ef15327428", MAC:"be:ef:5f:a7:ee:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:25.960823 containerd[1800]: 2025-01-13 22:52:25.959 [INFO][5285] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981" Namespace="calico-system" Pod="calico-kube-controllers-7fb548d8cc-jp5jw" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:25.970150 containerd[1800]: time="2025-01-13T22:52:25.969937324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:25.970150 containerd[1800]: time="2025-01-13T22:52:25.970140327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:25.970150 containerd[1800]: time="2025-01-13T22:52:25.970148486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:25.970287 containerd[1800]: time="2025-01-13T22:52:25.970195349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:25.977353 systemd[1]: Started cri-containerd-6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7.scope - libcontainer container 6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7. 
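[Annotation] Every host-side interface in the "Setting the host side veth name" entries above is "cali" plus an 11-character hex suffix (calia4a2c1e9b7d, cali7ef15327428, and three more below). A hypothetical sketch of deriving such a name; only the "cali" + 11-hex shape is taken from this log, and the real input Calico hashes is internal to the plugin and not shown here:

    import hashlib

    def cali_ifname(endpoint_id: str) -> str:
        # Hypothetical derivation: a truncated SHA-1 of some endpoint identifier.
        # Only the "cali" + 11-hex-digit shape is taken from the log above.
        return "cali" + hashlib.sha1(endpoint_id.encode()).hexdigest()[:11]

    print(cali_ifname("calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv"))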
Jan 13 22:52:25.978919 systemd[1]: Started cri-containerd-4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981.scope - libcontainer container 4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981. Jan 13 22:52:25.999792 containerd[1800]: time="2025-01-13T22:52:25.999769443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-q8lwv,Uid:1da9aa39-afba-437d-9d6b-f05365623329,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7\"" Jan 13 22:52:26.000366 containerd[1800]: time="2025-01-13T22:52:26.000353535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fb548d8cc-jp5jw,Uid:b321a1c9-994b-4ad5-b109-33a2a9e1a4fd,Namespace:calico-system,Attempt:1,} returns sandbox id \"4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981\"" Jan 13 22:52:26.000560 containerd[1800]: time="2025-01-13T22:52:26.000545969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 22:52:26.738455 systemd-networkd[1602]: vxlan.calico: Gained IPv6LL Jan 13 22:52:26.843234 containerd[1800]: time="2025-01-13T22:52:26.843193099Z" level=info msg="StopPodSandbox for \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\"" Jan 13 22:52:26.843559 containerd[1800]: time="2025-01-13T22:52:26.843240400Z" level=info msg="StopPodSandbox for \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\"" Jan 13 22:52:26.843559 containerd[1800]: time="2025-01-13T22:52:26.843326479Z" level=info msg="StopPodSandbox for \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\"" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5506] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" iface="eth0" netns="/var/run/netns/cni-2977013f-b9e2-a392-962f-d1e026e8d194" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5506] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" iface="eth0" netns="/var/run/netns/cni-2977013f-b9e2-a392-962f-d1e026e8d194" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5506] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" iface="eth0" netns="/var/run/netns/cni-2977013f-b9e2-a392-962f-d1e026e8d194" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.878 [INFO][5556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.878 [INFO][5556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.878 [INFO][5556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.881 [WARNING][5556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.881 [INFO][5556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.882 [INFO][5556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:26.883677 containerd[1800]: 2025-01-13 22:52:26.883 [INFO][5506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:26.883957 containerd[1800]: time="2025-01-13T22:52:26.883749118Z" level=info msg="TearDown network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\" successfully" Jan 13 22:52:26.883957 containerd[1800]: time="2025-01-13T22:52:26.883765222Z" level=info msg="StopPodSandbox for \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\" returns successfully" Jan 13 22:52:26.884103 containerd[1800]: time="2025-01-13T22:52:26.884093879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qm4qb,Uid:57287e2c-ee93-4725-90f0-5a85ad1e8a1d,Namespace:kube-system,Attempt:1,}" Jan 13 22:52:26.886138 systemd[1]: run-netns-cni\x2d2977013f\x2db9e2\x2da392\x2d962f\x2dd1e026e8d194.mount: Deactivated successfully. Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.867 [INFO][5505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.867 [INFO][5505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" iface="eth0" netns="/var/run/netns/cni-40313ce9-860e-6b89-d936-1e5ec7b96739" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.867 [INFO][5505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" iface="eth0" netns="/var/run/netns/cni-40313ce9-860e-6b89-d936-1e5ec7b96739" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" iface="eth0" netns="/var/run/netns/cni-40313ce9-860e-6b89-d936-1e5ec7b96739" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.878 [INFO][5554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.878 [INFO][5554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.882 [INFO][5554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.885 [WARNING][5554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.885 [INFO][5554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.887 [INFO][5554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:26.888377 containerd[1800]: 2025-01-13 22:52:26.887 [INFO][5505] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:26.888674 containerd[1800]: time="2025-01-13T22:52:26.888471072Z" level=info msg="TearDown network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\" successfully" Jan 13 22:52:26.888674 containerd[1800]: time="2025-01-13T22:52:26.888507867Z" level=info msg="StopPodSandbox for \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\" returns successfully" Jan 13 22:52:26.888907 containerd[1800]: time="2025-01-13T22:52:26.888893037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llzsr,Uid:af6ee085-5d31-49a0-b9fd-f2776d6a372b,Namespace:calico-system,Attempt:1,}" Jan 13 22:52:26.892092 systemd[1]: run-netns-cni\x2d40313ce9\x2d860e\x2d6b89\x2dd936\x2d1e5ec7b96739.mount: Deactivated successfully. Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.867 [INFO][5504] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5504] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" iface="eth0" netns="/var/run/netns/cni-777b4a2f-d808-c69c-1d3e-fbef16f1623e" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5504] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" iface="eth0" netns="/var/run/netns/cni-777b4a2f-d808-c69c-1d3e-fbef16f1623e" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5504] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" iface="eth0" netns="/var/run/netns/cni-777b4a2f-d808-c69c-1d3e-fbef16f1623e" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5504] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.868 [INFO][5504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.878 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.878 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.887 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.890 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.890 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.891 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:26.892370 containerd[1800]: 2025-01-13 22:52:26.891 [INFO][5504] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:26.892627 containerd[1800]: time="2025-01-13T22:52:26.892438098Z" level=info msg="TearDown network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\" successfully" Jan 13 22:52:26.892627 containerd[1800]: time="2025-01-13T22:52:26.892453091Z" level=info msg="StopPodSandbox for \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\" returns successfully" Jan 13 22:52:26.892828 containerd[1800]: time="2025-01-13T22:52:26.892813702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-wwzzd,Uid:d95e8d7f-4fe7-40df-b458-d012e6c10560,Namespace:calico-apiserver,Attempt:1,}" Jan 13 22:52:26.895308 systemd[1]: run-netns-cni\x2d777b4a2f\x2dd808\x2dc69c\x2d1d3e\x2dfbef16f1623e.mount: Deactivated successfully. 
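[Annotation] The run-netns mount units in the teardown entries above ("run-netns-cni\x2d...") use systemd's unit-name escaping, in which "/" becomes "-" and a literal "-" becomes "\x2d". A small decoder for names as logged (a sketch; systemd-escape(1) is the canonical tool, and /var/run in the netns paths is conventionally a symlink to /run):

    import re

    def mount_unit_to_path(unit: str) -> str:
        name = unit.removesuffix(".mount")
        name = name.replace("-", "/")             # systemd maps "/" to "-"
        name = re.sub(r"\\x([0-9a-fA-F]{2})",     # ... and a literal "-" to "\x2d"
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(mount_unit_to_path(
        r"run-netns-cni\x2d777b4a2f\x2dd808\x2dc69c\x2d1d3e\x2dfbef16f1623e.mount"))
    # -> /run/netns/cni-777b4a2f-d808-c69c-1d3e-fbef16f1623e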
Jan 13 22:52:26.941650 systemd-networkd[1602]: calie38d77e7b78: Link UP Jan 13 22:52:26.941803 systemd-networkd[1602]: calie38d77e7b78: Gained carrier Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.908 [INFO][5600] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0 coredns-7db6d8ff4d- kube-system 57287e2c-ee93-4725-90f0-5a85ad1e8a1d 787 0 2025-01-13 22:51:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-66cd838664 coredns-7db6d8ff4d-qm4qb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie38d77e7b78 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.908 [INFO][5600] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.923 [INFO][5667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" HandleID="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.928 [INFO][5667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" HandleID="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b9e20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-66cd838664", "pod":"coredns-7db6d8ff4d-qm4qb", "timestamp":"2025-01-13 22:52:26.923081107 +0000 UTC"}, Hostname:"ci-4081.3.0-a-66cd838664", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.928 [INFO][5667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.928 [INFO][5667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.928 [INFO][5667] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-66cd838664' Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.929 [INFO][5667] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.931 [INFO][5667] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.933 [INFO][5667] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.934 [INFO][5667] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.935 [INFO][5667] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.935 [INFO][5667] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.935 [INFO][5667] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.937 [INFO][5667] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.939 [INFO][5667] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.195/26] block=192.168.92.192/26 handle="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.939 [INFO][5667] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.195/26] handle="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 22:52:26.946487 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.195/26] IPv6=[] ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" HandleID="k8s-pod-network.04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.946913 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5600] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57287e2c-ee93-4725-90f0-5a85ad1e8a1d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"", Pod:"coredns-7db6d8ff4d-qm4qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie38d77e7b78", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:26.946913 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5600] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.195/32] ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.946913 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie38d77e7b78 ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.946913 containerd[1800]: 2025-01-13 22:52:26.941 [INFO][5600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" 
WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.946913 containerd[1800]: 2025-01-13 22:52:26.941 [INFO][5600] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57287e2c-ee93-4725-90f0-5a85ad1e8a1d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b", Pod:"coredns-7db6d8ff4d-qm4qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie38d77e7b78", MAC:"ce:17:10:02:43:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:26.946913 containerd[1800]: 2025-01-13 22:52:26.945 [INFO][5600] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qm4qb" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:26.955485 systemd-networkd[1602]: calida392bc47f9: Link UP Jan 13 22:52:26.955634 systemd-networkd[1602]: calida392bc47f9: Gained carrier Jan 13 22:52:26.956975 containerd[1800]: time="2025-01-13T22:52:26.956941042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:26.956975 containerd[1800]: time="2025-01-13T22:52:26.956969376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:26.957059 containerd[1800]: time="2025-01-13T22:52:26.956976911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:26.957059 containerd[1800]: time="2025-01-13T22:52:26.957025322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.909 [INFO][5612] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0 csi-node-driver- calico-system af6ee085-5d31-49a0-b9fd-f2776d6a372b 785 0 2025-01-13 22:52:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-66cd838664 csi-node-driver-llzsr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calida392bc47f9 [] []}} ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.909 [INFO][5612] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.923 [INFO][5672] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" HandleID="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.930 [INFO][5672] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" HandleID="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-66cd838664", "pod":"csi-node-driver-llzsr", "timestamp":"2025-01-13 22:52:26.923927925 +0000 UTC"}, Hostname:"ci-4081.3.0-a-66cd838664", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.930 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5672] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-66cd838664' Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.940 [INFO][5672] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.943 [INFO][5672] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.945 [INFO][5672] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.946 [INFO][5672] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.947 [INFO][5672] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.948 [INFO][5672] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.948 [INFO][5672] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716 Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.950 [INFO][5672] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.952 [INFO][5672] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.196/26] block=192.168.92.192/26 handle="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.952 [INFO][5672] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.196/26] handle="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.952 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
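[Annotation] In the WorkloadEndpoint dumps for coredns earlier in this excerpt, port numbers appear hex-encoded in the struct output: Port:0x35 is 53 (the "dns" and "dns-tcp" ports) and Port:0x23c1 is 9153, coredns's conventional metrics port. Decoding is trivial:

    for hex_port in ("0x35", "0x23c1"):
        print(hex_port, "=", int(hex_port, 16))
    # 0x35 = 53      (the "dns" and "dns-tcp" ports)
    # 0x23c1 = 9153  (the "metrics" port)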
Jan 13 22:52:26.974804 containerd[1800]: 2025-01-13 22:52:26.953 [INFO][5672] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.196/26] IPv6=[] ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" HandleID="k8s-pod-network.e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.975207 containerd[1800]: 2025-01-13 22:52:26.953 [INFO][5612] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af6ee085-5d31-49a0-b9fd-f2776d6a372b", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"", Pod:"csi-node-driver-llzsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida392bc47f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:26.975207 containerd[1800]: 2025-01-13 22:52:26.954 [INFO][5612] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.196/32] ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.975207 containerd[1800]: 2025-01-13 22:52:26.954 [INFO][5612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida392bc47f9 ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.975207 containerd[1800]: 2025-01-13 22:52:26.955 [INFO][5612] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.975207 containerd[1800]: 2025-01-13 22:52:26.955 [INFO][5612] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af6ee085-5d31-49a0-b9fd-f2776d6a372b", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716", Pod:"csi-node-driver-llzsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida392bc47f9", MAC:"4e:bb:20:04:77:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:26.975207 containerd[1800]: 2025-01-13 22:52:26.974 [INFO][5612] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716" Namespace="calico-system" Pod="csi-node-driver-llzsr" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:26.976341 systemd[1]: Started cri-containerd-04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b.scope - libcontainer container 04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b. Jan 13 22:52:26.983977 containerd[1800]: time="2025-01-13T22:52:26.983914841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:26.983977 containerd[1800]: time="2025-01-13T22:52:26.983949976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:26.983977 containerd[1800]: time="2025-01-13T22:52:26.983957744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:26.984070 containerd[1800]: time="2025-01-13T22:52:26.984002524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:26.984907 systemd-networkd[1602]: cali03e6c15049f: Link UP Jan 13 22:52:26.985025 systemd-networkd[1602]: cali03e6c15049f: Gained carrier Jan 13 22:52:26.990442 systemd[1]: Started cri-containerd-e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716.scope - libcontainer container e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716. 
Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.913 [INFO][5633] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0 calico-apiserver-58f7bcd9bd- calico-apiserver d95e8d7f-4fe7-40df-b458-d012e6c10560 786 0 2025-01-13 22:52:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58f7bcd9bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-66cd838664 calico-apiserver-58f7bcd9bd-wwzzd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali03e6c15049f [] []}} ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.913 [INFO][5633] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.928 [INFO][5681] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" HandleID="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.932 [INFO][5681] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" HandleID="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000425320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-66cd838664", "pod":"calico-apiserver-58f7bcd9bd-wwzzd", "timestamp":"2025-01-13 22:52:26.928650165 +0000 UTC"}, Hostname:"ci-4081.3.0-a-66cd838664", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.932 [INFO][5681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.952 [INFO][5681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
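Every assignment above is bracketed by "About to acquire" / "Acquired" / "Released host-wide IPAM lock", so concurrent CNI ADDs on one node cannot hand out the same address. A minimal sketch of that serialize-and-log pattern, assuming a process-local mutex for brevity (the real plugin has to lock host-wide, across processes):

```go
package main

import (
	"log"
	"sync"
)

// hostWideIPAMLock mimics the bracketing in the log: every assign/release
// is serialized so concurrent CNI calls cannot race. Illustrative only —
// a sync.Mutex protects goroutines, not separate plugin processes.
var hostWideIPAMLock sync.Mutex

func withIPAMLock(fn func()) {
	log.Println("About to acquire host-wide IPAM lock.")
	hostWideIPAMLock.Lock()
	log.Println("Acquired host-wide IPAM lock.")
	defer func() {
		hostWideIPAMLock.Unlock()
		log.Println("Released host-wide IPAM lock.")
	}()
	fn()
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-58f7bcd9bd-wwzzd", "coredns-7db6d8ff4d-wkl2k"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			withIPAMLock(func() { log.Printf("assigning address for %s", p) })
		}(pod)
	}
	wg.Wait()
}
```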
Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.953 [INFO][5681] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-66cd838664' Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.954 [INFO][5681] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.956 [INFO][5681] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.959 [INFO][5681] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.974 [INFO][5681] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.975 [INFO][5681] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.975 [INFO][5681] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.976 [INFO][5681] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59 Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.979 [INFO][5681] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.982 [INFO][5681] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.197/26] block=192.168.92.192/26 handle="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.982 [INFO][5681] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.197/26] handle="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.983 [INFO][5681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
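The IPAM walk above confirms the node's affinity for block 192.168.92.192/26 and claims 192.168.92.197 from it. The block arithmetic is plain offset math: .197 is ordinal 5 of the 64 addresses in that /26. A sketch of the arithmetic (not Calico's allocation code):

```go
package main

import (
	"fmt"
	"net/netip"
)

// ordinalInBlock reports an IPv4 address's offset inside a CIDR block,
// e.g. 192.168.92.197 is ordinal 5 of 192.168.92.192/26.
func ordinalInBlock(block netip.Prefix, ip netip.Addr) (int, error) {
	if !block.Contains(ip) {
		return 0, fmt.Errorf("%s not in %s", ip, block)
	}
	base := block.Masked().Addr().As4()
	a := ip.As4()
	baseU := uint32(base[0])<<24 | uint32(base[1])<<16 | uint32(base[2])<<8 | uint32(base[3])
	ipU := uint32(a[0])<<24 | uint32(a[1])<<16 | uint32(a[2])<<8 | uint32(a[3])
	return int(ipU - baseU), nil
}

func main() {
	block := netip.MustParsePrefix("192.168.92.192/26")
	ip := netip.MustParseAddr("192.168.92.197")
	ord, err := ordinalInBlock(block, ip)
	if err != nil {
		panic(err)
	}
	size := 1 << (32 - block.Bits()) // 64 addresses in a /26
	fmt.Printf("%s is ordinal %d of %d in %s\n", ip, ord, size, block)
}
```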
Jan 13 22:52:26.990607 containerd[1800]: 2025-01-13 22:52:26.983 [INFO][5681] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.197/26] IPv6=[] ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" HandleID="k8s-pod-network.4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.991240 containerd[1800]: 2025-01-13 22:52:26.983 [INFO][5633] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d95e8d7f-4fe7-40df-b458-d012e6c10560", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"", Pod:"calico-apiserver-58f7bcd9bd-wwzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03e6c15049f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:26.991240 containerd[1800]: 2025-01-13 22:52:26.984 [INFO][5633] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.197/32] ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.991240 containerd[1800]: 2025-01-13 22:52:26.984 [INFO][5633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03e6c15049f ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.991240 containerd[1800]: 2025-01-13 22:52:26.985 [INFO][5633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:26.991240 containerd[1800]: 2025-01-13 22:52:26.985 [INFO][5633] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d95e8d7f-4fe7-40df-b458-d012e6c10560", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59", Pod:"calico-apiserver-58f7bcd9bd-wwzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03e6c15049f", MAC:"5a:8b:77:cc:66:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:26.991240 containerd[1800]: 2025-01-13 22:52:26.989 [INFO][5633] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59" Namespace="calico-apiserver" Pod="calico-apiserver-58f7bcd9bd-wwzzd" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:27.000464 containerd[1800]: time="2025-01-13T22:52:27.000440473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qm4qb,Uid:57287e2c-ee93-4725-90f0-5a85ad1e8a1d,Namespace:kube-system,Attempt:1,} returns sandbox id \"04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b\"" Jan 13 22:52:27.001493 containerd[1800]: time="2025-01-13T22:52:27.001478352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llzsr,Uid:af6ee085-5d31-49a0-b9fd-f2776d6a372b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716\"" Jan 13 22:52:27.001779 containerd[1800]: time="2025-01-13T22:52:27.001768240Z" level=info msg="CreateContainer within sandbox \"04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:52:27.006919 containerd[1800]: time="2025-01-13T22:52:27.006895507Z" level=info msg="CreateContainer within sandbox \"04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb6a2abeb40fec5576e53475b17acc0f236735690c880ad1b7d7fdaa06c43480\"" Jan 13 22:52:27.007203 containerd[1800]: 
time="2025-01-13T22:52:27.007188846Z" level=info msg="StartContainer for \"bb6a2abeb40fec5576e53475b17acc0f236735690c880ad1b7d7fdaa06c43480\"" Jan 13 22:52:27.009634 containerd[1800]: time="2025-01-13T22:52:27.009577985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:27.009826 containerd[1800]: time="2025-01-13T22:52:27.009809371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:27.009826 containerd[1800]: time="2025-01-13T22:52:27.009820280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:27.009875 containerd[1800]: time="2025-01-13T22:52:27.009863364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:27.028415 systemd[1]: Started cri-containerd-4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59.scope - libcontainer container 4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59. Jan 13 22:52:27.028963 systemd[1]: Started cri-containerd-bb6a2abeb40fec5576e53475b17acc0f236735690c880ad1b7d7fdaa06c43480.scope - libcontainer container bb6a2abeb40fec5576e53475b17acc0f236735690c880ad1b7d7fdaa06c43480. Jan 13 22:52:27.042372 containerd[1800]: time="2025-01-13T22:52:27.042348189Z" level=info msg="StartContainer for \"bb6a2abeb40fec5576e53475b17acc0f236735690c880ad1b7d7fdaa06c43480\" returns successfully" Jan 13 22:52:27.050989 containerd[1800]: time="2025-01-13T22:52:27.050965238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f7bcd9bd-wwzzd,Uid:d95e8d7f-4fe7-40df-b458-d012e6c10560,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59\"" Jan 13 22:52:27.441424 systemd-networkd[1602]: calia4a2c1e9b7d: Gained IPv6LL Jan 13 22:52:27.697360 systemd-networkd[1602]: cali7ef15327428: Gained IPv6LL Jan 13 22:52:27.842725 containerd[1800]: time="2025-01-13T22:52:27.842686345Z" level=info msg="StopPodSandbox for \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\"" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.872 [INFO][5941] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.872 [INFO][5941] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" iface="eth0" netns="/var/run/netns/cni-fa5a15c5-22ce-e719-9df6-858301c949d6" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.872 [INFO][5941] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" iface="eth0" netns="/var/run/netns/cni-fa5a15c5-22ce-e719-9df6-858301c949d6" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.872 [INFO][5941] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" iface="eth0" netns="/var/run/netns/cni-fa5a15c5-22ce-e719-9df6-858301c949d6" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.872 [INFO][5941] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.872 [INFO][5941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.884 [INFO][5955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.884 [INFO][5955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.884 [INFO][5955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.888 [WARNING][5955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.889 [INFO][5955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.890 [INFO][5955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:27.891446 containerd[1800]: 2025-01-13 22:52:27.890 [INFO][5941] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:27.891921 containerd[1800]: time="2025-01-13T22:52:27.891529270Z" level=info msg="TearDown network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\" successfully" Jan 13 22:52:27.891921 containerd[1800]: time="2025-01-13T22:52:27.891544654Z" level=info msg="StopPodSandbox for \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\" returns successfully" Jan 13 22:52:27.891983 containerd[1800]: time="2025-01-13T22:52:27.891918951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wkl2k,Uid:e4de7a75-1548-4187-ac18-e9507e8aad8c,Namespace:kube-system,Attempt:1,}" Jan 13 22:52:27.893072 systemd[1]: run-netns-cni\x2dfa5a15c5\x2d22ce\x2de719\x2d9df6\x2d858301c949d6.mount: Deactivated successfully. 
Jan 13 22:52:27.948721 systemd-networkd[1602]: cali4e9033289e0: Link UP Jan 13 22:52:27.948841 systemd-networkd[1602]: cali4e9033289e0: Gained carrier Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.913 [INFO][5970] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0 coredns-7db6d8ff4d- kube-system e4de7a75-1548-4187-ac18-e9507e8aad8c 806 0 2025-01-13 22:51:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-66cd838664 coredns-7db6d8ff4d-wkl2k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e9033289e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.913 [INFO][5970] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.928 [INFO][5988] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" HandleID="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.933 [INFO][5988] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" HandleID="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000137af0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-66cd838664", "pod":"coredns-7db6d8ff4d-wkl2k", "timestamp":"2025-01-13 22:52:27.928929657 +0000 UTC"}, Hostname:"ci-4081.3.0-a-66cd838664", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.933 [INFO][5988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.933 [INFO][5988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
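The AutoAssignArgs dump above carries its attributes as plain strings, including a timestamp ("2025-01-13 22:52:27.928929657 +0000 UTC") in Go's default time.Time formatting. That string round-trips with the matching layout — a small demonstration using only the standard library:

```go
package main

import (
	"fmt"
	"time"
)

// The Attrs map in the AutoAssignArgs dump stores the request time as a
// plain string in Go's default time.Time format; it parses back with the
// corresponding layout string.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	t, err := time.Parse(layout, "2025-01-13 22:52:27.928929657 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(t.UTC().Format(time.RFC3339Nano)) // 2025-01-13T22:52:27.928929657Z
}
```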
Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.933 [INFO][5988] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-66cd838664' Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.934 [INFO][5988] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.936 [INFO][5988] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.938 [INFO][5988] ipam/ipam.go 489: Trying affinity for 192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.940 [INFO][5988] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.941 [INFO][5988] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.192/26 host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.941 [INFO][5988] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.192/26 handle="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.942 [INFO][5988] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.944 [INFO][5988] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.192/26 handle="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.947 [INFO][5988] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.198/26] block=192.168.92.192/26 handle="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.947 [INFO][5988] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.198/26] handle="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" host="ci-4081.3.0-a-66cd838664" Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.947 [INFO][5988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 22:52:27.953541 containerd[1800]: 2025-01-13 22:52:27.947 [INFO][5988] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.198/26] IPv6=[] ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" HandleID="k8s-pod-network.7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.953921 containerd[1800]: 2025-01-13 22:52:27.947 [INFO][5970] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e4de7a75-1548-4187-ac18-e9507e8aad8c", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"", Pod:"coredns-7db6d8ff4d-wkl2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9033289e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:27.953921 containerd[1800]: 2025-01-13 22:52:27.948 [INFO][5970] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.198/32] ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.953921 containerd[1800]: 2025-01-13 22:52:27.948 [INFO][5970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e9033289e0 ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.953921 containerd[1800]: 2025-01-13 22:52:27.948 [INFO][5970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" 
WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.953921 containerd[1800]: 2025-01-13 22:52:27.948 [INFO][5970] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e4de7a75-1548-4187-ac18-e9507e8aad8c", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea", Pod:"coredns-7db6d8ff4d-wkl2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9033289e0", MAC:"7a:81:b3:06:58:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:27.953921 containerd[1800]: 2025-01-13 22:52:27.952 [INFO][5970] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wkl2k" WorkloadEndpoint="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:27.965420 kubelet[3231]: I0113 22:52:27.965374 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qm4qb" podStartSLOduration=29.965357301 podStartE2EDuration="29.965357301s" podCreationTimestamp="2025-01-13 22:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:52:27.965196884 +0000 UTC m=+46.175666037" watchObservedRunningTime="2025-01-13 22:52:27.965357301 +0000 UTC m=+46.175826452" Jan 13 22:52:28.037423 containerd[1800]: time="2025-01-13T22:52:28.037199475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 22:52:28.037492 containerd[1800]: time="2025-01-13T22:52:28.037417026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 22:52:28.037492 containerd[1800]: time="2025-01-13T22:52:28.037428546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:28.037492 containerd[1800]: time="2025-01-13T22:52:28.037473628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 22:52:28.055323 systemd[1]: Started cri-containerd-7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea.scope - libcontainer container 7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea. Jan 13 22:52:28.079445 containerd[1800]: time="2025-01-13T22:52:28.079418900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wkl2k,Uid:e4de7a75-1548-4187-ac18-e9507e8aad8c,Namespace:kube-system,Attempt:1,} returns sandbox id \"7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea\"" Jan 13 22:52:28.080725 containerd[1800]: time="2025-01-13T22:52:28.080688259Z" level=info msg="CreateContainer within sandbox \"7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 22:52:28.085607 containerd[1800]: time="2025-01-13T22:52:28.085561341Z" level=info msg="CreateContainer within sandbox \"7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16ca99da25c8716646312a3e86748cab6a3dffb047a816d6a9be83e6261467e0\"" Jan 13 22:52:28.085814 containerd[1800]: time="2025-01-13T22:52:28.085801229Z" level=info msg="StartContainer for \"16ca99da25c8716646312a3e86748cab6a3dffb047a816d6a9be83e6261467e0\"" Jan 13 22:52:28.110305 systemd[1]: Started cri-containerd-16ca99da25c8716646312a3e86748cab6a3dffb047a816d6a9be83e6261467e0.scope - libcontainer container 16ca99da25c8716646312a3e86748cab6a3dffb047a816d6a9be83e6261467e0. 
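The WorkloadEndpoint dumps for coredns-7db6d8ff4d-wkl2k above print ports in Go hex notation — Port:0x35 twice and Port:0x23c1, i.e. the usual CoreDNS ports 53 (dns and dns-tcp) and 9153 (metrics); Type:1 with StrVal set is consistent with the string form of the protocol field. Decoded:

```go
package main

import "fmt"

// Decode the hex port values from the WorkloadEndpointPort dump above:
// 0x35 = 53 (dns, dns-tcp) and 0x23c1 = 9153 (metrics).
func main() {
	ports := map[string]uint16{
		"dns":     0x35,
		"dns-tcp": 0x35,
		"metrics": 0x23c1,
	}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p)
	}
}
```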
Jan 13 22:52:28.123451 containerd[1800]: time="2025-01-13T22:52:28.123429357Z" level=info msg="StartContainer for \"16ca99da25c8716646312a3e86748cab6a3dffb047a816d6a9be83e6261467e0\" returns successfully" Jan 13 22:52:28.304413 containerd[1800]: time="2025-01-13T22:52:28.304325071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:28.304600 containerd[1800]: time="2025-01-13T22:52:28.304577493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 22:52:28.304929 containerd[1800]: time="2025-01-13T22:52:28.304916764Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:28.305896 containerd[1800]: time="2025-01-13T22:52:28.305856315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:28.306394 containerd[1800]: time="2025-01-13T22:52:28.306377978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.305698596s" Jan 13 22:52:28.306431 containerd[1800]: time="2025-01-13T22:52:28.306400276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 22:52:28.307565 containerd[1800]: time="2025-01-13T22:52:28.307553016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 22:52:28.307948 containerd[1800]: time="2025-01-13T22:52:28.307936870Z" level=info msg="CreateContainer within sandbox \"6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 22:52:28.311667 containerd[1800]: time="2025-01-13T22:52:28.311625626Z" level=info msg="CreateContainer within sandbox \"6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"63a3ae79530bde01e4e15b96bcb5f649c71bcb196abb97e67260b905a5cbe777\"" Jan 13 22:52:28.311882 containerd[1800]: time="2025-01-13T22:52:28.311841593Z" level=info msg="StartContainer for \"63a3ae79530bde01e4e15b96bcb5f649c71bcb196abb97e67260b905a5cbe777\"" Jan 13 22:52:28.337305 systemd-networkd[1602]: cali03e6c15049f: Gained IPv6LL Jan 13 22:52:28.339492 systemd[1]: Started cri-containerd-63a3ae79530bde01e4e15b96bcb5f649c71bcb196abb97e67260b905a5cbe777.scope - libcontainer container 63a3ae79530bde01e4e15b96bcb5f649c71bcb196abb97e67260b905a5cbe777. 
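The kubelet pod_startup_latency_tracker line above reports podStartSLOduration=29.965357301 for coredns-7db6d8ff4d-qm4qb with both pull timestamps at the zero time, which leaves the SLO duration as simply observedRunningTime minus podCreationTimestamp. Reproducing the arithmetic (a sketch of the subtraction, not kubelet's code):

```go
package main

import (
	"fmt"
	"time"
)

// With no image-pull window (firstStartedPulling/lastFinishedPulling are
// the zero time), the logged podStartSLOduration is observedRunningTime
// minus podCreationTimestamp.
func main() {
	created, _ := time.Parse(time.RFC3339, "2025-01-13T22:51:58Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-13T22:52:27.965357301Z")
	fmt.Println(running.Sub(created)) // 29.965357301s, matching the log
}
```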
Jan 13 22:52:28.368061 containerd[1800]: time="2025-01-13T22:52:28.368034387Z" level=info msg="StartContainer for \"63a3ae79530bde01e4e15b96bcb5f649c71bcb196abb97e67260b905a5cbe777\" returns successfully" Jan 13 22:52:28.529391 systemd-networkd[1602]: calida392bc47f9: Gained IPv6LL Jan 13 22:52:28.529684 systemd-networkd[1602]: calie38d77e7b78: Gained IPv6LL Jan 13 22:52:28.574305 kubelet[3231]: I0113 22:52:28.574206 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:52:28.976861 kubelet[3231]: I0113 22:52:28.976778 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-q8lwv" podStartSLOduration=23.669975616 podStartE2EDuration="25.976752696s" podCreationTimestamp="2025-01-13 22:52:03 +0000 UTC" firstStartedPulling="2025-01-13 22:52:26.000377536 +0000 UTC m=+44.210846703" lastFinishedPulling="2025-01-13 22:52:28.307154628 +0000 UTC m=+46.517623783" observedRunningTime="2025-01-13 22:52:28.976632413 +0000 UTC m=+47.187101584" watchObservedRunningTime="2025-01-13 22:52:28.976752696 +0000 UTC m=+47.187221880" Jan 13 22:52:28.986497 kubelet[3231]: I0113 22:52:28.986423 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wkl2k" podStartSLOduration=30.986399192 podStartE2EDuration="30.986399192s" podCreationTimestamp="2025-01-13 22:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:52:28.986308185 +0000 UTC m=+47.196777367" watchObservedRunningTime="2025-01-13 22:52:28.986399192 +0000 UTC m=+47.196868356" Jan 13 22:52:29.169526 systemd-networkd[1602]: cali4e9033289e0: Gained IPv6LL Jan 13 22:52:29.969608 kubelet[3231]: I0113 22:52:29.969581 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:52:30.461909 containerd[1800]: time="2025-01-13T22:52:30.461860381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:30.462156 containerd[1800]: time="2025-01-13T22:52:30.462012733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 22:52:30.462440 containerd[1800]: time="2025-01-13T22:52:30.462426802Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:30.463452 containerd[1800]: time="2025-01-13T22:52:30.463437607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:30.463877 containerd[1800]: time="2025-01-13T22:52:30.463861981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.15629067s" Jan 13 22:52:30.463911 containerd[1800]: time="2025-01-13T22:52:30.463879337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 22:52:30.464395 containerd[1800]: time="2025-01-13T22:52:30.464381505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 22:52:30.467384 containerd[1800]: time="2025-01-13T22:52:30.467361990Z" level=info msg="CreateContainer within sandbox \"4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 22:52:30.471446 containerd[1800]: time="2025-01-13T22:52:30.471430072Z" level=info msg="CreateContainer within sandbox \"4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5417fa3b37ad6642228df0b837f76cb302d5b19ec5ef2a80f1620287dd117c9c\"" Jan 13 22:52:30.471652 containerd[1800]: time="2025-01-13T22:52:30.471635778Z" level=info msg="StartContainer for \"5417fa3b37ad6642228df0b837f76cb302d5b19ec5ef2a80f1620287dd117c9c\"" Jan 13 22:52:30.498463 systemd[1]: Started cri-containerd-5417fa3b37ad6642228df0b837f76cb302d5b19ec5ef2a80f1620287dd117c9c.scope - libcontainer container 5417fa3b37ad6642228df0b837f76cb302d5b19ec5ef2a80f1620287dd117c9c. Jan 13 22:52:30.526912 containerd[1800]: time="2025-01-13T22:52:30.526886387Z" level=info msg="StartContainer for \"5417fa3b37ad6642228df0b837f76cb302d5b19ec5ef2a80f1620287dd117c9c\" returns successfully" Jan 13 22:52:31.981948 containerd[1800]: time="2025-01-13T22:52:31.981892387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:31.982160 containerd[1800]: time="2025-01-13T22:52:31.982046650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 22:52:31.982564 containerd[1800]: time="2025-01-13T22:52:31.982523149Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:31.983758 containerd[1800]: time="2025-01-13T22:52:31.983716869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:31.984186 containerd[1800]: time="2025-01-13T22:52:31.984141096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.519741824s" Jan 13 22:52:31.984186 containerd[1800]: time="2025-01-13T22:52:31.984158153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 22:52:31.984783 containerd[1800]: time="2025-01-13T22:52:31.984772098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 22:52:31.985477 containerd[1800]: time="2025-01-13T22:52:31.985433729Z" level=info msg="CreateContainer within sandbox \"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 22:52:31.990838 containerd[1800]: 
time="2025-01-13T22:52:31.990783695Z" level=info msg="CreateContainer within sandbox \"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"33b47f62616285a786fe5a30723ac9239bc3b7d5837746360940cda19d59d40e\"" Jan 13 22:52:31.991049 containerd[1800]: time="2025-01-13T22:52:31.991033025Z" level=info msg="StartContainer for \"33b47f62616285a786fe5a30723ac9239bc3b7d5837746360940cda19d59d40e\"" Jan 13 22:52:32.004159 systemd[1]: Started cri-containerd-33b47f62616285a786fe5a30723ac9239bc3b7d5837746360940cda19d59d40e.scope - libcontainer container 33b47f62616285a786fe5a30723ac9239bc3b7d5837746360940cda19d59d40e. Jan 13 22:52:32.018305 containerd[1800]: time="2025-01-13T22:52:32.018281528Z" level=info msg="StartContainer for \"33b47f62616285a786fe5a30723ac9239bc3b7d5837746360940cda19d59d40e\" returns successfully" Jan 13 22:52:32.021741 kubelet[3231]: I0113 22:52:32.021669 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fb548d8cc-jp5jw" podStartSLOduration=24.558122743 podStartE2EDuration="29.021657391s" podCreationTimestamp="2025-01-13 22:52:03 +0000 UTC" firstStartedPulling="2025-01-13 22:52:26.000786338 +0000 UTC m=+44.211255502" lastFinishedPulling="2025-01-13 22:52:30.464320996 +0000 UTC m=+48.674790150" observedRunningTime="2025-01-13 22:52:31.007982211 +0000 UTC m=+49.218451501" watchObservedRunningTime="2025-01-13 22:52:32.021657391 +0000 UTC m=+50.232126544" Jan 13 22:52:32.314351 containerd[1800]: time="2025-01-13T22:52:32.314259713Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:32.314417 containerd[1800]: time="2025-01-13T22:52:32.314392037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 22:52:32.315771 containerd[1800]: time="2025-01-13T22:52:32.315728776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 330.943018ms" Jan 13 22:52:32.315771 containerd[1800]: time="2025-01-13T22:52:32.315744405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 22:52:32.316246 containerd[1800]: time="2025-01-13T22:52:32.316216604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 22:52:32.316868 containerd[1800]: time="2025-01-13T22:52:32.316854526Z" level=info msg="CreateContainer within sandbox \"4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 22:52:32.321555 containerd[1800]: time="2025-01-13T22:52:32.321512357Z" level=info msg="CreateContainer within sandbox \"4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ade14812b866d7ef8c5f95e3b3e3eeae51d2a8716bc4de428c196b9f23c01a4f\"" Jan 13 22:52:32.321748 containerd[1800]: time="2025-01-13T22:52:32.321707871Z" level=info msg="StartContainer for 
\"ade14812b866d7ef8c5f95e3b3e3eeae51d2a8716bc4de428c196b9f23c01a4f\"" Jan 13 22:52:32.345422 systemd[1]: Started cri-containerd-ade14812b866d7ef8c5f95e3b3e3eeae51d2a8716bc4de428c196b9f23c01a4f.scope - libcontainer container ade14812b866d7ef8c5f95e3b3e3eeae51d2a8716bc4de428c196b9f23c01a4f. Jan 13 22:52:32.372839 containerd[1800]: time="2025-01-13T22:52:32.372813240Z" level=info msg="StartContainer for \"ade14812b866d7ef8c5f95e3b3e3eeae51d2a8716bc4de428c196b9f23c01a4f\" returns successfully" Jan 13 22:52:33.004159 kubelet[3231]: I0113 22:52:33.004031 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58f7bcd9bd-wwzzd" podStartSLOduration=24.739427962 podStartE2EDuration="30.003994042s" podCreationTimestamp="2025-01-13 22:52:03 +0000 UTC" firstStartedPulling="2025-01-13 22:52:27.051568813 +0000 UTC m=+45.262037965" lastFinishedPulling="2025-01-13 22:52:32.316134892 +0000 UTC m=+50.526604045" observedRunningTime="2025-01-13 22:52:33.003491118 +0000 UTC m=+51.213960338" watchObservedRunningTime="2025-01-13 22:52:33.003994042 +0000 UTC m=+51.214463242" Jan 13 22:52:33.832548 containerd[1800]: time="2025-01-13T22:52:33.832495742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:33.832763 containerd[1800]: time="2025-01-13T22:52:33.832721606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 22:52:33.833096 containerd[1800]: time="2025-01-13T22:52:33.833060299Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:33.834083 containerd[1800]: time="2025-01-13T22:52:33.834041600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 22:52:33.834535 containerd[1800]: time="2025-01-13T22:52:33.834491184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.51825971s" Jan 13 22:52:33.834535 containerd[1800]: time="2025-01-13T22:52:33.834509686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 22:52:33.835497 containerd[1800]: time="2025-01-13T22:52:33.835485457Z" level=info msg="CreateContainer within sandbox \"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 22:52:33.839951 containerd[1800]: time="2025-01-13T22:52:33.839909545Z" level=info msg="CreateContainer within sandbox \"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d4fafb6caa6f5a3132c60d3ca220a5823e155cd4ca45224ed41533aa2197bc6c\"" Jan 13 22:52:33.840212 containerd[1800]: 
time="2025-01-13T22:52:33.840161779Z" level=info msg="StartContainer for \"d4fafb6caa6f5a3132c60d3ca220a5823e155cd4ca45224ed41533aa2197bc6c\"" Jan 13 22:52:33.869362 systemd[1]: Started cri-containerd-d4fafb6caa6f5a3132c60d3ca220a5823e155cd4ca45224ed41533aa2197bc6c.scope - libcontainer container d4fafb6caa6f5a3132c60d3ca220a5823e155cd4ca45224ed41533aa2197bc6c. Jan 13 22:52:33.882955 containerd[1800]: time="2025-01-13T22:52:33.882900616Z" level=info msg="StartContainer for \"d4fafb6caa6f5a3132c60d3ca220a5823e155cd4ca45224ed41533aa2197bc6c\" returns successfully" Jan 13 22:52:33.888552 kubelet[3231]: I0113 22:52:33.888538 3231 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 22:52:33.888552 kubelet[3231]: I0113 22:52:33.888555 3231 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 22:52:33.997527 kubelet[3231]: I0113 22:52:33.997437 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:52:34.019606 kubelet[3231]: I0113 22:52:34.019497 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-llzsr" podStartSLOduration=24.186466793 podStartE2EDuration="31.019459554s" podCreationTimestamp="2025-01-13 22:52:03 +0000 UTC" firstStartedPulling="2025-01-13 22:52:27.001886657 +0000 UTC m=+45.212355810" lastFinishedPulling="2025-01-13 22:52:33.834879416 +0000 UTC m=+52.045348571" observedRunningTime="2025-01-13 22:52:34.018613939 +0000 UTC m=+52.229083203" watchObservedRunningTime="2025-01-13 22:52:34.019459554 +0000 UTC m=+52.229928752" Jan 13 22:52:41.839131 containerd[1800]: time="2025-01-13T22:52:41.839037922Z" level=info msg="StopPodSandbox for \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\"" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.857 [WARNING][6485] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0", GenerateName:"calico-kube-controllers-7fb548d8cc-", Namespace:"calico-system", SelfLink:"", UID:"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb548d8cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981", Pod:"calico-kube-controllers-7fb548d8cc-jp5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ef15327428", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.857 [INFO][6485] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.857 [INFO][6485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" iface="eth0" netns="" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.857 [INFO][6485] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.857 [INFO][6485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.868 [INFO][6498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.868 [INFO][6498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.868 [INFO][6498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.872 [WARNING][6498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.872 [INFO][6498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.873 [INFO][6498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:41.874620 containerd[1800]: 2025-01-13 22:52:41.873 [INFO][6485] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.874620 containerd[1800]: time="2025-01-13T22:52:41.874603460Z" level=info msg="TearDown network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\" successfully" Jan 13 22:52:41.874620 containerd[1800]: time="2025-01-13T22:52:41.874618454Z" level=info msg="StopPodSandbox for \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\" returns successfully" Jan 13 22:52:41.874982 containerd[1800]: time="2025-01-13T22:52:41.874851219Z" level=info msg="RemovePodSandbox for \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\"" Jan 13 22:52:41.874982 containerd[1800]: time="2025-01-13T22:52:41.874867080Z" level=info msg="Forcibly stopping sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\"" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.894 [WARNING][6528] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0", GenerateName:"calico-kube-controllers-7fb548d8cc-", Namespace:"calico-system", SelfLink:"", UID:"b321a1c9-994b-4ad5-b109-33a2a9e1a4fd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fb548d8cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"4142e0a76f07a7c53a4a539078fecb01aaba0bf44b1d14bd8312783232ce3981", Pod:"calico-kube-controllers-7fb548d8cc-jp5jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ef15327428", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.894 [INFO][6528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.894 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" iface="eth0" netns="" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.894 [INFO][6528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.894 [INFO][6528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.904 [INFO][6540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.904 [INFO][6540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.905 [INFO][6540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.908 [WARNING][6540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.908 [INFO][6540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" HandleID="k8s-pod-network.82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--kube--controllers--7fb548d8cc--jp5jw-eth0" Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.909 [INFO][6540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:41.911286 containerd[1800]: 2025-01-13 22:52:41.910 [INFO][6528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d" Jan 13 22:52:41.911598 containerd[1800]: time="2025-01-13T22:52:41.911294310Z" level=info msg="TearDown network for sandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\" successfully" Jan 13 22:52:41.912830 containerd[1800]: time="2025-01-13T22:52:41.912788161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:52:41.912830 containerd[1800]: time="2025-01-13T22:52:41.912816447Z" level=info msg="RemovePodSandbox \"82577ef9592246da79e78fc1f042e7290929c43310dd1bad2a79b19c7245217d\" returns successfully" Jan 13 22:52:41.913115 containerd[1800]: time="2025-01-13T22:52:41.913105618Z" level=info msg="StopPodSandbox for \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\"" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.933 [WARNING][6569] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e4de7a75-1548-4187-ac18-e9507e8aad8c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea", Pod:"coredns-7db6d8ff4d-wkl2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9033289e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.933 [INFO][6569] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.933 [INFO][6569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" iface="eth0" netns="" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.933 [INFO][6569] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.933 [INFO][6569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.943 [INFO][6584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.943 [INFO][6584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.944 [INFO][6584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.947 [WARNING][6584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.947 [INFO][6584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.948 [INFO][6584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:41.949656 containerd[1800]: 2025-01-13 22:52:41.949 [INFO][6569] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.950012 containerd[1800]: time="2025-01-13T22:52:41.949679221Z" level=info msg="TearDown network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\" successfully" Jan 13 22:52:41.950012 containerd[1800]: time="2025-01-13T22:52:41.949697460Z" level=info msg="StopPodSandbox for \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\" returns successfully" Jan 13 22:52:41.950012 containerd[1800]: time="2025-01-13T22:52:41.949966823Z" level=info msg="RemovePodSandbox for \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\"" Jan 13 22:52:41.950012 containerd[1800]: time="2025-01-13T22:52:41.949981166Z" level=info msg="Forcibly stopping sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\"" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.967 [WARNING][6611] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e4de7a75-1548-4187-ac18-e9507e8aad8c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"7b0aa0d878d1ec82657ee2d42e818422f866ad43862d91a0eeb384a83420eaea", Pod:"coredns-7db6d8ff4d-wkl2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9033289e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.967 [INFO][6611] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.967 [INFO][6611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" iface="eth0" netns="" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.967 [INFO][6611] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.967 [INFO][6611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.977 [INFO][6627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.977 [INFO][6627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.977 [INFO][6627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.980 [WARNING][6627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.980 [INFO][6627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" HandleID="k8s-pod-network.015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--wkl2k-eth0" Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.981 [INFO][6627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:41.982975 containerd[1800]: 2025-01-13 22:52:41.982 [INFO][6611] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4" Jan 13 22:52:41.983336 containerd[1800]: time="2025-01-13T22:52:41.982999920Z" level=info msg="TearDown network for sandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\" successfully" Jan 13 22:52:41.984320 containerd[1800]: time="2025-01-13T22:52:41.984305305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:52:41.984352 containerd[1800]: time="2025-01-13T22:52:41.984337873Z" level=info msg="RemovePodSandbox \"015b5c2b0be6d49bf2238ce2b1a35393e05b7f0ca3fbbf84ffaa84bdc472d8c4\" returns successfully" Jan 13 22:52:41.984484 containerd[1800]: time="2025-01-13T22:52:41.984472829Z" level=info msg="StopPodSandbox for \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\"" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.002 [WARNING][6658] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1da9aa39-afba-437d-9d6b-f05365623329", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7", Pod:"calico-apiserver-58f7bcd9bd-q8lwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4a2c1e9b7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.003 [INFO][6658] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.003 [INFO][6658] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" iface="eth0" netns="" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.003 [INFO][6658] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.003 [INFO][6658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.012 [INFO][6673] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.012 [INFO][6673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.012 [INFO][6673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.016 [WARNING][6673] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.016 [INFO][6673] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.017 [INFO][6673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.019236 containerd[1800]: 2025-01-13 22:52:42.018 [INFO][6658] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.019236 containerd[1800]: time="2025-01-13T22:52:42.019218553Z" level=info msg="TearDown network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\" successfully" Jan 13 22:52:42.019236 containerd[1800]: time="2025-01-13T22:52:42.019233147Z" level=info msg="StopPodSandbox for \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\" returns successfully" Jan 13 22:52:42.019583 containerd[1800]: time="2025-01-13T22:52:42.019503645Z" level=info msg="RemovePodSandbox for \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\"" Jan 13 22:52:42.019583 containerd[1800]: time="2025-01-13T22:52:42.019520169Z" level=info msg="Forcibly stopping sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\"" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.038 [WARNING][6701] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1da9aa39-afba-437d-9d6b-f05365623329", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"6974ab1f0196b6a14704ca391db7d8804d74ba037dcc62ab1eb82c67616cb1c7", Pod:"calico-apiserver-58f7bcd9bd-q8lwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4a2c1e9b7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.038 [INFO][6701] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.038 [INFO][6701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" iface="eth0" netns="" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.038 [INFO][6701] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.038 [INFO][6701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.049 [INFO][6714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.049 [INFO][6714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.049 [INFO][6714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.053 [WARNING][6714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.053 [INFO][6714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" HandleID="k8s-pod-network.be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--q8lwv-eth0" Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.054 [INFO][6714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.055707 containerd[1800]: 2025-01-13 22:52:42.055 [INFO][6701] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c" Jan 13 22:52:42.056044 containerd[1800]: time="2025-01-13T22:52:42.055726226Z" level=info msg="TearDown network for sandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\" successfully" Jan 13 22:52:42.057227 containerd[1800]: time="2025-01-13T22:52:42.057211436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:52:42.057267 containerd[1800]: time="2025-01-13T22:52:42.057238058Z" level=info msg="RemovePodSandbox \"be27d8f03de71f3a955ff7252492aa6f819f766350b603caa05c51f48799e76c\" returns successfully" Jan 13 22:52:42.057537 containerd[1800]: time="2025-01-13T22:52:42.057495488Z" level=info msg="StopPodSandbox for \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\"" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.075 [WARNING][6743] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d95e8d7f-4fe7-40df-b458-d012e6c10560", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59", Pod:"calico-apiserver-58f7bcd9bd-wwzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03e6c15049f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.075 [INFO][6743] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.075 [INFO][6743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" iface="eth0" netns="" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.075 [INFO][6743] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.075 [INFO][6743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.085 [INFO][6759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.086 [INFO][6759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.086 [INFO][6759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.089 [WARNING][6759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.089 [INFO][6759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.090 [INFO][6759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.092255 containerd[1800]: 2025-01-13 22:52:42.091 [INFO][6743] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.092255 containerd[1800]: time="2025-01-13T22:52:42.092204498Z" level=info msg="TearDown network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\" successfully" Jan 13 22:52:42.092255 containerd[1800]: time="2025-01-13T22:52:42.092220401Z" level=info msg="StopPodSandbox for \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\" returns successfully" Jan 13 22:52:42.092569 containerd[1800]: time="2025-01-13T22:52:42.092519071Z" level=info msg="RemovePodSandbox for \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\"" Jan 13 22:52:42.092569 containerd[1800]: time="2025-01-13T22:52:42.092537404Z" level=info msg="Forcibly stopping sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\"" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.110 [WARNING][6787] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0", GenerateName:"calico-apiserver-58f7bcd9bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d95e8d7f-4fe7-40df-b458-d012e6c10560", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f7bcd9bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"4f9acaeeb52414b559980a5bad0211be7511c66ecc7db5e93d1e20f7f0730b59", Pod:"calico-apiserver-58f7bcd9bd-wwzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03e6c15049f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.110 [INFO][6787] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.110 [INFO][6787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" iface="eth0" netns="" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.110 [INFO][6787] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.110 [INFO][6787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.121 [INFO][6799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.121 [INFO][6799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.121 [INFO][6799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.124 [WARNING][6799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.124 [INFO][6799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" HandleID="k8s-pod-network.4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Workload="ci--4081.3.0--a--66cd838664-k8s-calico--apiserver--58f7bcd9bd--wwzzd-eth0" Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.125 [INFO][6799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.126976 containerd[1800]: 2025-01-13 22:52:42.126 [INFO][6787] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168" Jan 13 22:52:42.127283 containerd[1800]: time="2025-01-13T22:52:42.127000118Z" level=info msg="TearDown network for sandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\" successfully" Jan 13 22:52:42.128278 containerd[1800]: time="2025-01-13T22:52:42.128206710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:52:42.128278 containerd[1800]: time="2025-01-13T22:52:42.128231426Z" level=info msg="RemovePodSandbox \"4aef2bf11f1b59f63a2745d7f6ded44bd81a78d73ab27e249de0a7dd44028168\" returns successfully" Jan 13 22:52:42.128511 containerd[1800]: time="2025-01-13T22:52:42.128467920Z" level=info msg="StopPodSandbox for \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\"" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.147 [WARNING][6828] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57287e2c-ee93-4725-90f0-5a85ad1e8a1d", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b", Pod:"coredns-7db6d8ff4d-qm4qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie38d77e7b78", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.147 [INFO][6828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.147 [INFO][6828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" iface="eth0" netns="" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.147 [INFO][6828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.147 [INFO][6828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.158 [INFO][6841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.158 [INFO][6841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.158 [INFO][6841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.161 [WARNING][6841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.161 [INFO][6841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.162 [INFO][6841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.164412 containerd[1800]: 2025-01-13 22:52:42.163 [INFO][6828] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.164721 containerd[1800]: time="2025-01-13T22:52:42.164419333Z" level=info msg="TearDown network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\" successfully" Jan 13 22:52:42.164721 containerd[1800]: time="2025-01-13T22:52:42.164441851Z" level=info msg="StopPodSandbox for \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\" returns successfully" Jan 13 22:52:42.164721 containerd[1800]: time="2025-01-13T22:52:42.164687244Z" level=info msg="RemovePodSandbox for \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\"" Jan 13 22:52:42.164721 containerd[1800]: time="2025-01-13T22:52:42.164708601Z" level=info msg="Forcibly stopping sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\"" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.184 [WARNING][6869] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57287e2c-ee93-4725-90f0-5a85ad1e8a1d", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 51, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"04b1cebeffdb6d9203cccc6f228d22ca129310e9d5591c83ee9b45fb6a6db05b", Pod:"coredns-7db6d8ff4d-qm4qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie38d77e7b78", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.184 [INFO][6869] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.184 [INFO][6869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" iface="eth0" netns="" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.184 [INFO][6869] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.184 [INFO][6869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.194 [INFO][6886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.195 [INFO][6886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.195 [INFO][6886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.199 [WARNING][6886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.199 [INFO][6886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" HandleID="k8s-pod-network.d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Workload="ci--4081.3.0--a--66cd838664-k8s-coredns--7db6d8ff4d--qm4qb-eth0" Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.200 [INFO][6886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.202209 containerd[1800]: 2025-01-13 22:52:42.201 [INFO][6869] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9" Jan 13 22:52:42.202584 containerd[1800]: time="2025-01-13T22:52:42.202212088Z" level=info msg="TearDown network for sandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\" successfully" Jan 13 22:52:42.218488 containerd[1800]: time="2025-01-13T22:52:42.218466439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:52:42.218543 containerd[1800]: time="2025-01-13T22:52:42.218506321Z" level=info msg="RemovePodSandbox \"d5c866a1bb62bc31ee5a8be39f4af19281ab87e035d1bcba308afaa0e14073b9\" returns successfully" Jan 13 22:52:42.218804 containerd[1800]: time="2025-01-13T22:52:42.218792657Z" level=info msg="StopPodSandbox for \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\"" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.237 [WARNING][6932] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af6ee085-5d31-49a0-b9fd-f2776d6a372b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716", Pod:"csi-node-driver-llzsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida392bc47f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.237 [INFO][6932] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.237 [INFO][6932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" iface="eth0" netns="" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.237 [INFO][6932] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.237 [INFO][6932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.247 [INFO][6952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.247 [INFO][6952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.247 [INFO][6952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.250 [WARNING][6952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.250 [INFO][6952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.251 [INFO][6952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.252968 containerd[1800]: 2025-01-13 22:52:42.252 [INFO][6932] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.253260 containerd[1800]: time="2025-01-13T22:52:42.252967241Z" level=info msg="TearDown network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\" successfully" Jan 13 22:52:42.253260 containerd[1800]: time="2025-01-13T22:52:42.252983717Z" level=info msg="StopPodSandbox for \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\" returns successfully" Jan 13 22:52:42.253260 containerd[1800]: time="2025-01-13T22:52:42.253236642Z" level=info msg="RemovePodSandbox for \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\"" Jan 13 22:52:42.253260 containerd[1800]: time="2025-01-13T22:52:42.253251847Z" level=info msg="Forcibly stopping sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\"" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.271 [WARNING][6979] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af6ee085-5d31-49a0-b9fd-f2776d6a372b", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 22, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-66cd838664", ContainerID:"e9f5c644951a86414f937c611888373f0dfb0cfd0afcc693cebba6a59b6a5716", Pod:"csi-node-driver-llzsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida392bc47f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.271 [INFO][6979] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.271 [INFO][6979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" iface="eth0" netns="" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.271 [INFO][6979] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.271 [INFO][6979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.282 [INFO][6994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.282 [INFO][6994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.282 [INFO][6994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.286 [WARNING][6994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.286 [INFO][6994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" HandleID="k8s-pod-network.0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Workload="ci--4081.3.0--a--66cd838664-k8s-csi--node--driver--llzsr-eth0" Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.287 [INFO][6994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 22:52:42.288440 containerd[1800]: 2025-01-13 22:52:42.287 [INFO][6979] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5" Jan 13 22:52:42.288440 containerd[1800]: time="2025-01-13T22:52:42.288435109Z" level=info msg="TearDown network for sandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\" successfully" Jan 13 22:52:42.289784 containerd[1800]: time="2025-01-13T22:52:42.289741104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 22:52:42.289784 containerd[1800]: time="2025-01-13T22:52:42.289771413Z" level=info msg="RemovePodSandbox \"0721ce91f1dc4e5cda303281d3baeb18659f777a76bc5069e61a1c92eac580e5\" returns successfully" Jan 13 22:52:50.065082 kubelet[3231]: I0113 22:52:50.064978 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:52:51.156282 kubelet[3231]: I0113 22:52:51.156223 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 22:53:40.706715 systemd[1]: Started sshd@9-147.28.180.253:22-139.19.117.197:48378.service - OpenSSH per-connection server daemon (139.19.117.197:48378). Jan 13 22:53:41.408424 sshd[7124]: Invalid user admin from 139.19.117.197 port 48378 Jan 13 22:53:50.691879 sshd[7124]: Connection closed by invalid user admin 139.19.117.197 port 48378 [preauth] Jan 13 22:53:50.696682 systemd[1]: sshd@9-147.28.180.253:22-139.19.117.197:48378.service: Deactivated successfully. Jan 13 22:57:35.683500 systemd[1]: Started sshd@10-147.28.180.253:22-139.178.89.65:55016.service - OpenSSH per-connection server daemon (139.178.89.65:55016). Jan 13 22:57:35.711521 sshd[7689]: Accepted publickey for core from 139.178.89.65 port 55016 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas Jan 13 22:57:35.712480 sshd[7689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 22:57:35.716234 systemd-logind[1790]: New session 12 of user core. Jan 13 22:57:35.724405 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 22:57:35.861304 sshd[7689]: pam_unix(sshd:session): session closed for user core Jan 13 22:57:35.863723 systemd[1]: sshd@10-147.28.180.253:22-139.178.89.65:55016.service: Deactivated successfully. Jan 13 22:57:35.865233 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 22:57:35.866449 systemd-logind[1790]: Session 12 logged out. Waiting for processes to exit. Jan 13 22:57:35.867408 systemd-logind[1790]: Removed session 12. 
Jan 13 22:57:40.875036 systemd[1]: Started sshd@11-147.28.180.253:22-139.178.89.65:55032.service - OpenSSH per-connection server daemon (139.178.89.65:55032).
Jan 13 22:57:40.907941 sshd[7716]: Accepted publickey for core from 139.178.89.65 port 55032 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:57:40.908782 sshd[7716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:57:40.911960 systemd-logind[1790]: New session 13 of user core.
Jan 13 22:57:40.932727 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 22:57:41.028063 sshd[7716]: pam_unix(sshd:session): session closed for user core
Jan 13 22:57:41.029854 systemd[1]: sshd@11-147.28.180.253:22-139.178.89.65:55032.service: Deactivated successfully.
Jan 13 22:57:41.030912 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 22:57:41.031700 systemd-logind[1790]: Session 13 logged out. Waiting for processes to exit.
Jan 13 22:57:41.032274 systemd-logind[1790]: Removed session 13.
Jan 13 22:57:46.073682 systemd[1]: Started sshd@12-147.28.180.253:22-139.178.89.65:51174.service - OpenSSH per-connection server daemon (139.178.89.65:51174).
Jan 13 22:57:46.111590 sshd[7771]: Accepted publickey for core from 139.178.89.65 port 51174 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:57:46.113161 sshd[7771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:57:46.119421 systemd-logind[1790]: New session 14 of user core.
Jan 13 22:57:46.141762 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 22:57:46.236806 sshd[7771]: pam_unix(sshd:session): session closed for user core
Jan 13 22:57:46.247876 systemd[1]: sshd@12-147.28.180.253:22-139.178.89.65:51174.service: Deactivated successfully.
Jan 13 22:57:46.248664 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 22:57:46.249384 systemd-logind[1790]: Session 14 logged out. Waiting for processes to exit.
Jan 13 22:57:46.250106 systemd[1]: Started sshd@13-147.28.180.253:22-139.178.89.65:51190.service - OpenSSH per-connection server daemon (139.178.89.65:51190).
Jan 13 22:57:46.250670 systemd-logind[1790]: Removed session 14.
Jan 13 22:57:46.281694 sshd[7799]: Accepted publickey for core from 139.178.89.65 port 51190 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:57:46.282730 sshd[7799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:57:46.286762 systemd-logind[1790]: New session 15 of user core.
Jan 13 22:57:46.302483 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 22:57:46.411377 sshd[7799]: pam_unix(sshd:session): session closed for user core
Jan 13 22:57:46.424105 systemd[1]: sshd@13-147.28.180.253:22-139.178.89.65:51190.service: Deactivated successfully.
Jan 13 22:57:46.425011 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 22:57:46.425743 systemd-logind[1790]: Session 15 logged out. Waiting for processes to exit.
Jan 13 22:57:46.426543 systemd[1]: Started sshd@14-147.28.180.253:22-139.178.89.65:51200.service - OpenSSH per-connection server daemon (139.178.89.65:51200).
Jan 13 22:57:46.426985 systemd-logind[1790]: Removed session 15.
Jan 13 22:57:46.457543 sshd[7825]: Accepted publickey for core from 139.178.89.65 port 51200 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:57:46.458333 sshd[7825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:57:46.461528 systemd-logind[1790]: New session 16 of user core.
Jan 13 22:57:46.476398 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 22:57:46.623536 sshd[7825]: pam_unix(sshd:session): session closed for user core
Jan 13 22:57:46.625092 systemd[1]: sshd@14-147.28.180.253:22-139.178.89.65:51200.service: Deactivated successfully.
Jan 13 22:57:46.625986 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 22:57:46.626632 systemd-logind[1790]: Session 16 logged out. Waiting for processes to exit.
Jan 13 22:57:46.627135 systemd-logind[1790]: Removed session 16.
Jan 13 22:57:51.654822 systemd[1]: Started sshd@15-147.28.180.253:22-139.178.89.65:48356.service - OpenSSH per-connection server daemon (139.178.89.65:48356).
Jan 13 22:57:51.738929 sshd[7858]: Accepted publickey for core from 139.178.89.65 port 48356 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:57:51.740732 sshd[7858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:57:51.746812 systemd-logind[1790]: New session 17 of user core.
Jan 13 22:57:51.763725 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 22:57:51.916296 sshd[7858]: pam_unix(sshd:session): session closed for user core
Jan 13 22:57:51.918166 systemd[1]: sshd@15-147.28.180.253:22-139.178.89.65:48356.service: Deactivated successfully.
Jan 13 22:57:51.919227 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 22:57:51.919987 systemd-logind[1790]: Session 17 logged out. Waiting for processes to exit.
Jan 13 22:57:51.920687 systemd-logind[1790]: Removed session 17.
Jan 13 22:57:56.945365 systemd[1]: Started sshd@16-147.28.180.253:22-139.178.89.65:48370.service - OpenSSH per-connection server daemon (139.178.89.65:48370).
Jan 13 22:57:56.973470 sshd[7886]: Accepted publickey for core from 139.178.89.65 port 48370 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:57:56.974257 sshd[7886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:57:56.977458 systemd-logind[1790]: New session 18 of user core.
Jan 13 22:57:56.985365 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 22:57:57.070514 sshd[7886]: pam_unix(sshd:session): session closed for user core
Jan 13 22:57:57.072068 systemd[1]: sshd@16-147.28.180.253:22-139.178.89.65:48370.service: Deactivated successfully.
Jan 13 22:57:57.072973 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 22:57:57.073599 systemd-logind[1790]: Session 18 logged out. Waiting for processes to exit.
Jan 13 22:57:57.074144 systemd-logind[1790]: Removed session 18.
Jan 13 22:58:02.087839 systemd[1]: Started sshd@17-147.28.180.253:22-139.178.89.65:58502.service - OpenSSH per-connection server daemon (139.178.89.65:58502).
Jan 13 22:58:02.117413 sshd[7944]: Accepted publickey for core from 139.178.89.65 port 58502 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:02.118352 sshd[7944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:02.121573 systemd-logind[1790]: New session 19 of user core.
Jan 13 22:58:02.139391 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 22:58:02.231645 sshd[7944]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:02.233521 systemd[1]: sshd@17-147.28.180.253:22-139.178.89.65:58502.service: Deactivated successfully.
Jan 13 22:58:02.234647 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 22:58:02.235452 systemd-logind[1790]: Session 19 logged out. Waiting for processes to exit.
Jan 13 22:58:02.236141 systemd-logind[1790]: Removed session 19.
Jan 13 22:58:07.262455 systemd[1]: Started sshd@18-147.28.180.253:22-139.178.89.65:58504.service - OpenSSH per-connection server daemon (139.178.89.65:58504).
Jan 13 22:58:07.290651 sshd[7971]: Accepted publickey for core from 139.178.89.65 port 58504 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:07.291306 sshd[7971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:07.293782 systemd-logind[1790]: New session 20 of user core.
Jan 13 22:58:07.308589 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 22:58:07.406645 sshd[7971]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:07.414887 systemd[1]: sshd@18-147.28.180.253:22-139.178.89.65:58504.service: Deactivated successfully.
Jan 13 22:58:07.415749 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 22:58:07.416508 systemd-logind[1790]: Session 20 logged out. Waiting for processes to exit.
Jan 13 22:58:07.417216 systemd[1]: Started sshd@19-147.28.180.253:22-139.178.89.65:58506.service - OpenSSH per-connection server daemon (139.178.89.65:58506).
Jan 13 22:58:07.417776 systemd-logind[1790]: Removed session 20.
Jan 13 22:58:07.448990 sshd[7997]: Accepted publickey for core from 139.178.89.65 port 58506 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:07.449812 sshd[7997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:07.452895 systemd-logind[1790]: New session 21 of user core.
Jan 13 22:58:07.466431 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 22:58:07.723421 sshd[7997]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:07.744974 systemd[1]: sshd@19-147.28.180.253:22-139.178.89.65:58506.service: Deactivated successfully.
Jan 13 22:58:07.745756 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 22:58:07.746489 systemd-logind[1790]: Session 21 logged out. Waiting for processes to exit.
Jan 13 22:58:07.747028 systemd[1]: Started sshd@20-147.28.180.253:22-139.178.89.65:58512.service - OpenSSH per-connection server daemon (139.178.89.65:58512).
Jan 13 22:58:07.747576 systemd-logind[1790]: Removed session 21.
Jan 13 22:58:07.777430 sshd[8021]: Accepted publickey for core from 139.178.89.65 port 58512 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:07.778504 sshd[8021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:07.782390 systemd-logind[1790]: New session 22 of user core.
Jan 13 22:58:07.796471 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 22:58:09.024982 sshd[8021]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:09.037066 systemd[1]: sshd@20-147.28.180.253:22-139.178.89.65:58512.service: Deactivated successfully.
Jan 13 22:58:09.038087 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 22:58:09.038797 systemd-logind[1790]: Session 22 logged out. Waiting for processes to exit.
Jan 13 22:58:09.039518 systemd[1]: Started sshd@21-147.28.180.253:22-139.178.89.65:58518.service - OpenSSH per-connection server daemon (139.178.89.65:58518).
Jan 13 22:58:09.039966 systemd-logind[1790]: Removed session 22.
Jan 13 22:58:09.071755 sshd[8051]: Accepted publickey for core from 139.178.89.65 port 58518 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:09.072934 sshd[8051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:09.076899 systemd-logind[1790]: New session 23 of user core.
Jan 13 22:58:09.090497 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 22:58:09.270440 sshd[8051]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:09.288855 systemd[1]: sshd@21-147.28.180.253:22-139.178.89.65:58518.service: Deactivated successfully.
Jan 13 22:58:09.289816 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 22:58:09.290488 systemd-logind[1790]: Session 23 logged out. Waiting for processes to exit.
Jan 13 22:58:09.291181 systemd[1]: Started sshd@22-147.28.180.253:22-139.178.89.65:58534.service - OpenSSH per-connection server daemon (139.178.89.65:58534).
Jan 13 22:58:09.291652 systemd-logind[1790]: Removed session 23.
Jan 13 22:58:09.324883 sshd[8078]: Accepted publickey for core from 139.178.89.65 port 58534 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:09.328376 sshd[8078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:09.340016 systemd-logind[1790]: New session 24 of user core.
Jan 13 22:58:09.353567 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 22:58:09.500352 sshd[8078]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:09.503010 systemd[1]: sshd@22-147.28.180.253:22-139.178.89.65:58534.service: Deactivated successfully.
Jan 13 22:58:09.504288 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 22:58:09.504881 systemd-logind[1790]: Session 24 logged out. Waiting for processes to exit.
Jan 13 22:58:09.505866 systemd-logind[1790]: Removed session 24.
Jan 13 22:58:14.530354 systemd[1]: Started sshd@23-147.28.180.253:22-139.178.89.65:46772.service - OpenSSH per-connection server daemon (139.178.89.65:46772).
Jan 13 22:58:14.558043 sshd[8148]: Accepted publickey for core from 139.178.89.65 port 46772 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:14.558913 sshd[8148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:14.562009 systemd-logind[1790]: New session 25 of user core.
Jan 13 22:58:14.573483 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 22:58:14.659633 sshd[8148]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:14.661478 systemd[1]: sshd@23-147.28.180.253:22-139.178.89.65:46772.service: Deactivated successfully.
Jan 13 22:58:14.662391 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 22:58:14.662836 systemd-logind[1790]: Session 25 logged out. Waiting for processes to exit.
Jan 13 22:58:14.663361 systemd-logind[1790]: Removed session 25.
Jan 13 22:58:19.700514 systemd[1]: Started sshd@24-147.28.180.253:22-139.178.89.65:46780.service - OpenSSH per-connection server daemon (139.178.89.65:46780).
Jan 13 22:58:19.729444 sshd[8174]: Accepted publickey for core from 139.178.89.65 port 46780 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:19.730574 sshd[8174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:19.734479 systemd-logind[1790]: New session 26 of user core.
Jan 13 22:58:19.761835 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 22:58:19.916949 sshd[8174]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:19.918706 systemd[1]: sshd@24-147.28.180.253:22-139.178.89.65:46780.service: Deactivated successfully.
Jan 13 22:58:19.919686 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 22:58:19.920372 systemd-logind[1790]: Session 26 logged out. Waiting for processes to exit.
Jan 13 22:58:19.920992 systemd-logind[1790]: Removed session 26.
Jan 13 22:58:24.935136 systemd[1]: Started sshd@25-147.28.180.253:22-139.178.89.65:37010.service - OpenSSH per-connection server daemon (139.178.89.65:37010).
Jan 13 22:58:24.966251 sshd[8200]: Accepted publickey for core from 139.178.89.65 port 37010 ssh2: RSA SHA256:GDDDW1ndOKMoCUf8oANiMHtvnGKmVDnTN4IHrPUlXas
Jan 13 22:58:24.969668 sshd[8200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:58:24.980886 systemd-logind[1790]: New session 27 of user core.
Jan 13 22:58:25.003693 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 22:58:25.096738 sshd[8200]: pam_unix(sshd:session): session closed for user core
Jan 13 22:58:25.098785 systemd[1]: sshd@25-147.28.180.253:22-139.178.89.65:37010.service: Deactivated successfully.
Jan 13 22:58:25.099736 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 22:58:25.100150 systemd-logind[1790]: Session 27 logged out. Waiting for processes to exit.
Jan 13 22:58:25.100762 systemd-logind[1790]: Removed session 27.