Jan 17 12:23:32.001281 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Jan 17 12:23:32.001295 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:23:32.001302 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:23:32.001308 kernel: BIOS-provided physical RAM map:
Jan 17 12:23:32.001312 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 17 12:23:32.001316 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 17 12:23:32.001321 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 17 12:23:32.001325 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 17 12:23:32.001329 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 17 12:23:32.001333 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819ccfff] usable
Jan 17 12:23:32.001337 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] ACPI NVS
Jan 17 12:23:32.001342 kernel: BIOS-e820: [mem 0x00000000819ce000-0x00000000819cefff] reserved
Jan 17 12:23:32.001346 kernel: BIOS-e820: [mem 0x00000000819cf000-0x000000008afccfff] usable
Jan 17 12:23:32.001350 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 17 12:23:32.001356 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 17 12:23:32.001360 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 17 12:23:32.001366 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 17 12:23:32.001371 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 17 12:23:32.001375 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 17 12:23:32.001380 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 17 12:23:32.001385 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 17 12:23:32.001389 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 17 12:23:32.001394 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 17 12:23:32.001399 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 17 12:23:32.001403 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 17 12:23:32.001408 kernel: NX (Execute Disable) protection: active
Jan 17 12:23:32.001413 kernel: APIC: Static calls initialized
Jan 17 12:23:32.001418 kernel: SMBIOS 3.2.1 present.
Jan 17 12:23:32.001423 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
Jan 17 12:23:32.001428 kernel: tsc: Detected 3400.000 MHz processor
Jan 17 12:23:32.001433 kernel: tsc: Detected 3399.906 MHz TSC
Jan 17 12:23:32.001438 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:23:32.001443 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:23:32.001448 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 17 12:23:32.001453 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 17 12:23:32.001458 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:23:32.001462 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 17 12:23:32.001468 kernel: Using GB pages for direct mapping
Jan 17 12:23:32.001473 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:23:32.001478 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 17 12:23:32.001485 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 17 12:23:32.001490 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 17 12:23:32.001495 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 17 12:23:32.001500 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 17 12:23:32.001506 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 17 12:23:32.001511 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 17 12:23:32.001516 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 17 12:23:32.001521 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 17 12:23:32.001526 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 17 12:23:32.001531 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 17 12:23:32.001536 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 17 12:23:32.001542 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 17 12:23:32.001547 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001552 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 17 12:23:32.001557 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 17 12:23:32.001562 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001567 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001572 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 17 12:23:32.001578 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 17 12:23:32.001583 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001589 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001594 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 17 12:23:32.001599 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 17 12:23:32.001604 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 17 12:23:32.001609 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 17 12:23:32.001614 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 17 12:23:32.001619 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 17 12:23:32.001624 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 17 12:23:32.001630 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 17 12:23:32.001635 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 17 12:23:32.001640 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 17 12:23:32.001645 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 17 12:23:32.001650 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 17 12:23:32.001655 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 17 12:23:32.001660 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 17 12:23:32.001665 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 17 12:23:32.001670 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 17 12:23:32.001676 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 17 12:23:32.001681 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 17 12:23:32.001686 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 17 12:23:32.001691 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 17 12:23:32.001696 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 17 12:23:32.001701 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 17 12:23:32.001706 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 17 12:23:32.001711 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 17 12:23:32.001719 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 17 12:23:32.001726 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 17 12:23:32.001731 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 17 12:23:32.001755 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 17 12:23:32.001760 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 17 12:23:32.001765 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 17 12:23:32.001785 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 17 12:23:32.001790 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 17 12:23:32.001795 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 17 12:23:32.001800 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 17 12:23:32.001805 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 17 12:23:32.001811 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 17 12:23:32.001816 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 17 12:23:32.001821 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 17 12:23:32.001826 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 17 12:23:32.001831 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 17 12:23:32.001836 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 17 12:23:32.001841 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 17 12:23:32.001846 kernel: No NUMA configuration found
Jan 17 12:23:32.001851 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 17 12:23:32.001857 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 17 12:23:32.001862 kernel: Zone ranges:
Jan 17 12:23:32.001868 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:23:32.001873 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 12:23:32.001878 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Jan 17 12:23:32.001883 kernel: Movable zone start for each node
Jan 17 12:23:32.001888 kernel: Early memory node ranges
Jan 17 12:23:32.001893 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 17 12:23:32.001898 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 17 12:23:32.001904 kernel: node 0: [mem 0x0000000040400000-0x00000000819ccfff]
Jan 17 12:23:32.001909 kernel: node 0: [mem 0x00000000819cf000-0x000000008afccfff]
Jan 17 12:23:32.001914 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 17 12:23:32.001919 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 17 12:23:32.001928 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Jan 17 12:23:32.001933 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 17 12:23:32.001938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:23:32.001944 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 17 12:23:32.001950 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 17 12:23:32.001955 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 17 12:23:32.001961 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 17 12:23:32.001966 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 17 12:23:32.001972 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 17 12:23:32.001977 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 17 12:23:32.001982 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 17 12:23:32.001988 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 17 12:23:32.001993 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 17 12:23:32.001999 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 17 12:23:32.002005 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 17 12:23:32.002010 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 17 12:23:32.002015 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 17 12:23:32.002021 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 17 12:23:32.002026 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 17 12:23:32.002031 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 17 12:23:32.002036 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 17 12:23:32.002042 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 17 12:23:32.002048 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 17 12:23:32.002053 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 17 12:23:32.002059 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 17 12:23:32.002064 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 17 12:23:32.002069 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 17 12:23:32.002075 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 17 12:23:32.002080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:23:32.002085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:23:32.002091 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:23:32.002096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:23:32.002102 kernel: TSC deadline timer available
Jan 17 12:23:32.002108 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 17 12:23:32.002113 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 17 12:23:32.002119 kernel: Booting paravirtualized kernel on bare hardware
Jan 17 12:23:32.002124 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:23:32.002130 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 17 12:23:32.002135 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 17 12:23:32.002140 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 17 12:23:32.002146 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 17 12:23:32.002153 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:23:32.002158 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:23:32.002164 kernel: random: crng init done
Jan 17 12:23:32.002169 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 17 12:23:32.002174 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 17 12:23:32.002180 kernel: Fallback order for Node 0: 0
Jan 17 12:23:32.002185 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 17 12:23:32.002192 kernel: Policy zone: Normal
Jan 17 12:23:32.002197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:23:32.002203 kernel: software IO TLB: area num 16.
Jan 17 12:23:32.002208 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 732416K reserved, 0K cma-reserved)
Jan 17 12:23:32.002214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 17 12:23:32.002219 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:23:32.002225 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:23:32.002230 kernel: Dynamic Preempt: voluntary
Jan 17 12:23:32.002235 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:23:32.002244 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:23:32.002249 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 17 12:23:32.002255 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:23:32.002260 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:23:32.002266 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:23:32.002271 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:23:32.002277 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 17 12:23:32.002282 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jan 17 12:23:32.002287 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:23:32.002293 kernel: Console: colour dummy device 80x25
Jan 17 12:23:32.002299 kernel: printk: console [tty0] enabled
Jan 17 12:23:32.002304 kernel: printk: console [ttyS1] enabled
Jan 17 12:23:32.002310 kernel: ACPI: Core revision 20230628
Jan 17 12:23:32.002315 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jan 17 12:23:32.002321 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:23:32.002326 kernel: DMAR: Host address width 39
Jan 17 12:23:32.002332 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jan 17 12:23:32.002337 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jan 17 12:23:32.002342 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jan 17 12:23:32.002349 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jan 17 12:23:32.002354 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jan 17 12:23:32.002359 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jan 17 12:23:32.002365 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jan 17 12:23:32.002370 kernel: x2apic enabled
Jan 17 12:23:32.002375 kernel: APIC: Switched APIC routing to: cluster x2apic
Jan 17 12:23:32.002381 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jan 17 12:23:32.002386 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jan 17 12:23:32.002392 kernel: CPU0: Thermal monitoring enabled (TM1)
Jan 17 12:23:32.002398 kernel: process: using mwait in idle threads
Jan 17 12:23:32.002404 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 12:23:32.002409 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 17 12:23:32.002414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:23:32.002420 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 12:23:32.002425 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 12:23:32.002430 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 17 12:23:32.002436 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:23:32.002441 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 17 12:23:32.002446 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 17 12:23:32.002452 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:23:32.002458 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:23:32.002463 kernel: TAA: Mitigation: TSX disabled
Jan 17 12:23:32.002469 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 17 12:23:32.002474 kernel: SRBDS: Mitigation: Microcode
Jan 17 12:23:32.002480 kernel: GDS: Mitigation: Microcode
Jan 17 12:23:32.002485 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:23:32.002490 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:23:32.002496 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:23:32.002501 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 12:23:32.002506 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 12:23:32.002512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:23:32.002518 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 17 12:23:32.002523 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 17 12:23:32.002529 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jan 17 12:23:32.002534 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:23:32.002540 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:23:32.002545 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:23:32.002550 kernel: landlock: Up and running.
Jan 17 12:23:32.002556 kernel: SELinux: Initializing.
Jan 17 12:23:32.002561 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:23:32.002566 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:23:32.002572 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 17 12:23:32.002578 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 12:23:32.002584 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 12:23:32.002589 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 12:23:32.002595 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jan 17 12:23:32.002600 kernel: ... version: 4
Jan 17 12:23:32.002605 kernel: ... bit width: 48
Jan 17 12:23:32.002611 kernel: ... generic registers: 4
Jan 17 12:23:32.002616 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:23:32.002621 kernel: ... max period: 00007fffffffffff
Jan 17 12:23:32.002628 kernel: ... fixed-purpose events: 3
Jan 17 12:23:32.002633 kernel: ... event mask: 000000070000000f
Jan 17 12:23:32.002638 kernel: signal: max sigframe size: 2032
Jan 17 12:23:32.002644 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jan 17 12:23:32.002649 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:23:32.002655 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:23:32.002660 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 17 12:23:32.002665 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:23:32.002671 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:23:32.002677 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Jan 17 12:23:32.002683 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 12:23:32.002689 kernel: smp: Brought up 1 node, 16 CPUs
Jan 17 12:23:32.002694 kernel: smpboot: Max logical packages: 1
Jan 17 12:23:32.002699 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jan 17 12:23:32.002705 kernel: devtmpfs: initialized
Jan 17 12:23:32.002710 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:23:32.002715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cd000-0x819cdfff] (4096 bytes)
Jan 17 12:23:32.002759 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Jan 17 12:23:32.002766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:23:32.002785 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 17 12:23:32.002791 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:23:32.002796 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:23:32.002801 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:23:32.002807 kernel: audit: type=2000 audit(1737116606.039:1): state=initialized audit_enabled=0 res=1
Jan 17 12:23:32.002812 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:23:32.002817 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:23:32.002823 kernel: cpuidle: using governor menu
Jan 17 12:23:32.002829 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:23:32.002834 kernel: dca service started, version 1.12.1
Jan 17 12:23:32.002840 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 17 12:23:32.002845 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:23:32.002850 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jan 17 12:23:32.002856 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:23:32.002861 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:23:32.002866 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:23:32.002873 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:23:32.002878 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:23:32.002883 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:23:32.002889 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:23:32.002894 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:23:32.002899 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:23:32.002905 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jan 17 12:23:32.002910 kernel: ACPI: Dynamic OEM Table Load:
Jan 17 12:23:32.002916 kernel: ACPI: SSDT 0xFFFF9FDB01601C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jan 17 12:23:32.002921 kernel: ACPI: Dynamic OEM Table Load:
Jan 17 12:23:32.002927 kernel: ACPI: SSDT 0xFFFF9FDB015F8000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jan 17 12:23:32.002932 kernel: ACPI: Dynamic OEM Table Load:
Jan 17 12:23:32.002938 kernel: ACPI: SSDT 0xFFFF9FDB015E5400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jan 17 12:23:32.002943 kernel: ACPI: Dynamic OEM Table Load:
Jan 17 12:23:32.002948 kernel: ACPI: SSDT 0xFFFF9FDB015FD000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jan 17 12:23:32.002954 kernel: ACPI: Dynamic OEM Table Load:
Jan 17 12:23:32.002959 kernel: ACPI: SSDT 0xFFFF9FDB0160E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jan 17 12:23:32.002964 kernel: ACPI: Dynamic OEM Table Load:
Jan 17 12:23:32.002969 kernel: ACPI: SSDT 0xFFFF9FDB00EEB400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jan 17 12:23:32.002976 kernel: ACPI: _OSC evaluated successfully for all CPUs
Jan 17 12:23:32.002981 kernel: ACPI: Interpreter enabled
Jan 17 12:23:32.002986 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:23:32.002992 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:23:32.002997 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jan 17 12:23:32.003002 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jan 17 12:23:32.003008 kernel: HEST: Table parsing has been initialized.
Jan 17 12:23:32.003013 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jan 17 12:23:32.003019 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:23:32.003025 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:23:32.003030 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jan 17 12:23:32.003036 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Jan 17 12:23:32.003041 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Jan 17 12:23:32.003047 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Jan 17 12:23:32.003052 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Jan 17 12:23:32.003057 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Jan 17 12:23:32.003063 kernel: ACPI: \_TZ_.FN00: New power resource
Jan 17 12:23:32.003068 kernel: ACPI: \_TZ_.FN01: New power resource
Jan 17 12:23:32.003074 kernel: ACPI: \_TZ_.FN02: New power resource
Jan 17 12:23:32.003080 kernel: ACPI: \_TZ_.FN03: New power resource
Jan 17 12:23:32.003085 kernel: ACPI: \_TZ_.FN04: New power resource
Jan 17 12:23:32.003091 kernel: ACPI: \PIN_: New power resource
Jan 17 12:23:32.003096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jan 17 12:23:32.003170 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:23:32.003223 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jan 17 12:23:32.003271 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jan 17 12:23:32.003280 kernel: PCI host bridge to bus 0000:00
Jan 17 12:23:32.003332 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:23:32.003376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:23:32.003419 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:23:32.003460 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jan 17 12:23:32.003503 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jan 17 12:23:32.003544 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jan 17 12:23:32.003605 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jan 17 12:23:32.003662 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jan 17 12:23:32.003712 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 17 12:23:32.003800 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jan 17 12:23:32.003848 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jan 17 12:23:32.003899 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jan 17 12:23:32.003951 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jan 17 12:23:32.004003 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jan 17 12:23:32.004051 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jan 17 12:23:32.004098 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jan 17 12:23:32.004150 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jan 17 12:23:32.004197 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jan 17 12:23:32.004247 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jan 17 12:23:32.004300 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jan 17 12:23:32.004349 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 17 12:23:32.004403 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jan 17 12:23:32.004451 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 17 12:23:32.004503 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jan 17 12:23:32.004552 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jan 17 12:23:32.004603 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jan 17 12:23:32.004660 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jan 17 12:23:32.004712 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jan 17 12:23:32.004800 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jan 17 12:23:32.004853 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jan 17 12:23:32.004904 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jan 17 12:23:32.004954 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jan 17 12:23:32.005005 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jan 17 12:23:32.005054 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jan 17 12:23:32.005100 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jan 17 12:23:32.005147 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jan 17 12:23:32.005195 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jan 17 12:23:32.005242 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jan 17 12:23:32.005293 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jan 17 12:23:32.005341 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jan 17 12:23:32.005395 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jan 17 12:23:32.005443 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 17 12:23:32.005502 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jan 17 12:23:32.005550 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jan 17 12:23:32.005603 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jan 17 12:23:32.005651 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jan 17 12:23:32.005704 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jan 17 12:23:32.005791 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 17 12:23:32.005844 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jan 17 12:23:32.005892 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jan 17 12:23:32.005944 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jan 17 12:23:32.005991 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jan 17 12:23:32.006045 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jan 17 12:23:32.006101 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jan 17 12:23:32.006151 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jan 17 12:23:32.006200 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jan 17 12:23:32.006252 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jan 17 12:23:32.006300 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jan 17 12:23:32.006356 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jan 17 12:23:32.006406 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jan 17 12:23:32.006457 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jan 17 12:23:32.006508 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jan 17 12:23:32.006557 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 17 12:23:32.006607 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 17 12:23:32.006662 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jan 17 12:23:32.006711 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jan 17 12:23:32.006798 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jan 17 12:23:32.006848 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jan 17 12:23:32.006898 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jan 17 12:23:32.006946 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jan 17 12:23:32.006996 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 17 12:23:32.007044 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Jan 17 12:23:32.007093 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Jan 17 12:23:32.007142
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 17 12:23:32.007194 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 17 12:23:32.007248 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 17 12:23:32.007297 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 17 12:23:32.007347 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 17 12:23:32.007395 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 17 12:23:32.007444 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.007492 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 17 12:23:32.007540 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 17 12:23:32.007591 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 17 12:23:32.007645 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 17 12:23:32.007695 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 17 12:23:32.007767 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 17 12:23:32.007831 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 17 12:23:32.007880 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 17 12:23:32.007930 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.007981 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 17 12:23:32.008030 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 17 12:23:32.008077 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 17 12:23:32.008126 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 17 12:23:32.008181 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 17 12:23:32.008230 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 17 12:23:32.008279 kernel: pci 0000:06:00.0: supports D1 D2 Jan 17 12:23:32.008327 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:23:32.008380 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Jan 17 12:23:32.008427 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.008475 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.008530 kernel: pci_bus 0000:07: extended config space not accessible Jan 17 12:23:32.008588 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 17 12:23:32.008641 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 17 12:23:32.008691 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 17 12:23:32.008770 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 17 12:23:32.008838 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:23:32.008890 kernel: pci 0000:07:00.0: supports D1 D2 Jan 17 12:23:32.008943 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:23:32.008991 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 17 12:23:32.009041 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.009089 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.009100 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 17 12:23:32.009106 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 17 12:23:32.009112 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 17 12:23:32.009117 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 17 12:23:32.009123 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 17 12:23:32.009129 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 17 12:23:32.009134 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 17 12:23:32.009140 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 17 12:23:32.009146 kernel: iommu: Default domain type: Translated Jan 17 12:23:32.009152 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:23:32.009158 kernel: PCI: Using ACPI for IRQ 
routing Jan 17 12:23:32.009164 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:23:32.009170 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 17 12:23:32.009175 kernel: e820: reserve RAM buffer [mem 0x819cd000-0x83ffffff] Jan 17 12:23:32.009181 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 17 12:23:32.009187 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 17 12:23:32.009192 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 17 12:23:32.009198 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 17 12:23:32.009249 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 17 12:23:32.009299 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 17 12:23:32.009350 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:23:32.009358 kernel: vgaarb: loaded Jan 17 12:23:32.009364 kernel: clocksource: Switched to clocksource tsc-early Jan 17 12:23:32.009370 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:23:32.009376 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:23:32.009382 kernel: pnp: PnP ACPI init Jan 17 12:23:32.009432 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 17 12:23:32.009482 kernel: pnp 00:02: [dma 0 disabled] Jan 17 12:23:32.009530 kernel: pnp 00:03: [dma 0 disabled] Jan 17 12:23:32.009579 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 17 12:23:32.009624 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 17 12:23:32.009672 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 17 12:23:32.009722 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 17 12:23:32.009812 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 17 12:23:32.009856 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 17 12:23:32.009899 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Jan 17 12:23:32.009946 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 17 12:23:32.009989 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 17 12:23:32.010034 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 17 12:23:32.010077 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 17 12:23:32.010128 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 17 12:23:32.010173 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 17 12:23:32.010216 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 17 12:23:32.010260 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 17 12:23:32.010302 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 17 12:23:32.010346 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 17 12:23:32.010391 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 17 12:23:32.010438 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 17 12:23:32.010447 kernel: pnp: PnP ACPI: found 10 devices Jan 17 12:23:32.010453 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:23:32.010459 kernel: NET: Registered PF_INET protocol family Jan 17 12:23:32.010465 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:23:32.010471 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 17 12:23:32.010478 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:23:32.010484 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:23:32.010491 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 12:23:32.010497 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 17 12:23:32.010502 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:23:32.010508 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:23:32.010514 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:23:32.010520 kernel: NET: Registered PF_XDP protocol family Jan 17 12:23:32.010567 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 17 12:23:32.010617 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 17 12:23:32.010667 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 17 12:23:32.010720 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010814 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010865 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010915 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010963 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:23:32.011012 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 17 12:23:32.011059 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 17 12:23:32.011110 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 17 12:23:32.011158 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 17 12:23:32.011207 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 17 12:23:32.011255 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 17 12:23:32.011303 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 17 12:23:32.011353 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 17 12:23:32.011401 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 17 12:23:32.011448 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 17 12:23:32.011498 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Jan 17 12:23:32.011548 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.011597 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.011645 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 17 12:23:32.011692 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.011768 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.011835 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 17 12:23:32.011878 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:23:32.011921 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:23:32.011964 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:23:32.012006 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 17 12:23:32.012049 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 17 12:23:32.012098 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 17 12:23:32.012145 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 17 12:23:32.012195 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 17 12:23:32.012240 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 17 12:23:32.012288 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 17 12:23:32.012333 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 17 12:23:32.012382 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 17 12:23:32.012429 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 17 12:23:32.012476 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 17 12:23:32.012521 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 17 12:23:32.012529 kernel: PCI: CLS 64 bytes, default 64 Jan 17 12:23:32.012535 kernel: DMAR: No ATSR found Jan 17 12:23:32.012541 kernel: DMAR: No SATC 
found Jan 17 12:23:32.012547 kernel: DMAR: dmar0: Using Queued invalidation Jan 17 12:23:32.012595 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 17 12:23:32.012645 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 17 12:23:32.012694 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 17 12:23:32.012770 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 17 12:23:32.012839 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 17 12:23:32.012887 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 17 12:23:32.012935 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 17 12:23:32.012981 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 17 12:23:32.013030 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 17 12:23:32.013076 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 17 12:23:32.013127 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 17 12:23:32.013174 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 17 12:23:32.013223 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 17 12:23:32.013271 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 17 12:23:32.013318 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 17 12:23:32.013367 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 17 12:23:32.013415 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 17 12:23:32.013462 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 17 12:23:32.013512 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 17 12:23:32.013561 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 17 12:23:32.013608 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 17 12:23:32.013658 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 17 12:23:32.013708 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 17 12:23:32.013807 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 17 12:23:32.013856 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 17 12:23:32.013906 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 17 
12:23:32.013961 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 17 12:23:32.013969 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 17 12:23:32.013975 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 12:23:32.013981 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 17 12:23:32.013987 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 17 12:23:32.013993 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 17 12:23:32.013999 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 17 12:23:32.014004 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 17 12:23:32.014053 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 17 12:23:32.014064 kernel: Initialise system trusted keyrings Jan 17 12:23:32.014070 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 17 12:23:32.014075 kernel: Key type asymmetric registered Jan 17 12:23:32.014081 kernel: Asymmetric key parser 'x509' registered Jan 17 12:23:32.014087 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:23:32.014092 kernel: io scheduler mq-deadline registered Jan 17 12:23:32.014098 kernel: io scheduler kyber registered Jan 17 12:23:32.014104 kernel: io scheduler bfq registered Jan 17 12:23:32.014153 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 17 12:23:32.014201 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 17 12:23:32.014251 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 17 12:23:32.014299 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 17 12:23:32.014347 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 17 12:23:32.014395 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 17 12:23:32.014452 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 17 12:23:32.014463 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Jan 17 12:23:32.014469 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 17 12:23:32.014475 kernel: pstore: Using crash dump compression: deflate Jan 17 12:23:32.014481 kernel: pstore: Registered erst as persistent store backend Jan 17 12:23:32.014486 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:23:32.014492 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:23:32.014498 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:23:32.014504 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 12:23:32.014510 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 17 12:23:32.014560 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 17 12:23:32.014569 kernel: i8042: PNP: No PS/2 controller found. Jan 17 12:23:32.014613 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 17 12:23:32.014658 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 17 12:23:32.014702 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-17T12:23:30 UTC (1737116610) Jan 17 12:23:32.014795 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 17 12:23:32.014804 kernel: intel_pstate: Intel P-state driver initializing Jan 17 12:23:32.014812 kernel: intel_pstate: Disabling energy efficiency optimization Jan 17 12:23:32.014817 kernel: intel_pstate: HWP enabled Jan 17 12:23:32.014823 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 17 12:23:32.014829 kernel: vesafb: scrolling: redraw Jan 17 12:23:32.014835 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 17 12:23:32.014840 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000c9cc047d, using 768k, total 768k Jan 17 12:23:32.014846 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:23:32.014852 kernel: fb0: VESA VGA frame buffer device Jan 17 12:23:32.014858 kernel: NET: Registered PF_INET6 
protocol family Jan 17 12:23:32.014863 kernel: Segment Routing with IPv6 Jan 17 12:23:32.014870 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:23:32.014876 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:23:32.014881 kernel: Key type dns_resolver registered Jan 17 12:23:32.014887 kernel: microcode: Microcode Update Driver: v2.2. Jan 17 12:23:32.014893 kernel: IPI shorthand broadcast: enabled Jan 17 12:23:32.014899 kernel: sched_clock: Marking stable (2475082649, 1384705464)->(4403413811, -543625698) Jan 17 12:23:32.014904 kernel: registered taskstats version 1 Jan 17 12:23:32.014910 kernel: Loading compiled-in X.509 certificates Jan 17 12:23:32.014916 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:23:32.014923 kernel: Key type .fscrypt registered Jan 17 12:23:32.014928 kernel: Key type fscrypt-provisioning registered Jan 17 12:23:32.014934 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:23:32.014939 kernel: ima: No architecture policies found Jan 17 12:23:32.014945 kernel: clk: Disabling unused clocks Jan 17 12:23:32.014951 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:23:32.014957 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:23:32.014962 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:23:32.014969 kernel: Run /init as init process Jan 17 12:23:32.014975 kernel: with arguments: Jan 17 12:23:32.014981 kernel: /init Jan 17 12:23:32.014986 kernel: with environment: Jan 17 12:23:32.014992 kernel: HOME=/ Jan 17 12:23:32.014997 kernel: TERM=linux Jan 17 12:23:32.015003 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:23:32.015010 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ 
+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:23:32.015019 systemd[1]: Detected architecture x86-64. Jan 17 12:23:32.015025 systemd[1]: Running in initrd. Jan 17 12:23:32.015031 systemd[1]: No hostname configured, using default hostname. Jan 17 12:23:32.015037 systemd[1]: Hostname set to <localhost>. Jan 17 12:23:32.015043 systemd[1]: Initializing machine ID from random generator. Jan 17 12:23:32.015049 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:23:32.015055 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:32.015061 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:32.015068 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:23:32.015075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:23:32.015081 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:23:32.015087 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:23:32.015093 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:23:32.015100 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:23:32.015106 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 17 12:23:32.015112 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 17 12:23:32.015118 kernel: clocksource: Switched to clocksource tsc Jan 17 12:23:32.015124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 17 12:23:32.015130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:23:32.015137 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:23:32.015143 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:23:32.015149 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:23:32.015155 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:23:32.015161 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:23:32.015168 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:23:32.015174 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:23:32.015180 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:23:32.015186 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:32.015192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:32.015198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:32.015204 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:23:32.015210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:23:32.015217 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:23:32.015223 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:23:32.015229 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:23:32.015235 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:23:32.015252 systemd-journald[268]: Collecting audit messages is disabled. Jan 17 12:23:32.015267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 17 12:23:32.015274 systemd-journald[268]: Journal started Jan 17 12:23:32.015287 systemd-journald[268]: Runtime Journal (/run/log/journal/d5e271653ac248368cca908aa695e3d2) is 8.0M, max 639.9M, 631.9M free. Jan 17 12:23:32.049285 systemd-modules-load[270]: Inserted module 'overlay' Jan 17 12:23:32.058918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:32.079741 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:23:32.079713 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:23:32.151958 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:23:32.151971 kernel: Bridge firewalling registered Jan 17 12:23:32.136893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:32.140975 systemd-modules-load[270]: Inserted module 'br_netfilter' Jan 17 12:23:32.164027 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:23:32.185109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:32.193119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:32.221962 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:32.227073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:23:32.250138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:23:32.279399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:23:32.284936 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:23:32.285574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 12:23:32.300135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:32.333161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:32.344291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:23:32.365856 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:23:32.404986 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:23:32.415533 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:23:32.437570 systemd-resolved[324]: Positive Trust Anchors: Jan 17 12:23:32.460796 dracut-cmdline[307]: dracut-dracut-053 Jan 17 12:23:32.460796 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:23:32.549253 kernel: SCSI subsystem initialized Jan 17 12:23:32.549271 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:23:32.549282 kernel: iscsi: registered transport (tcp) Jan 17 12:23:32.549290 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:23:32.437577 systemd-resolved[324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:23:32.586985 kernel: QLogic iSCSI HBA Driver Jan 17 12:23:32.437610 systemd-resolved[324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:23:32.439782 systemd-resolved[324]: Defaulting to hostname 'linux'. Jan 17 12:23:32.440501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:23:32.453897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:23:32.585633 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:23:32.610822 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:23:32.767735 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:23:32.767781 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:23:32.787692 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:23:32.845783 kernel: raid6: avx2x4 gen() 52949 MB/s Jan 17 12:23:32.877751 kernel: raid6: avx2x2 gen() 53610 MB/s Jan 17 12:23:32.914210 kernel: raid6: avx2x1 gen() 45029 MB/s Jan 17 12:23:32.914226 kernel: raid6: using algorithm avx2x2 gen() 53610 MB/s Jan 17 12:23:32.961251 kernel: raid6: .... 
xor() 31423 MB/s, rmw enabled Jan 17 12:23:32.961269 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:23:33.002749 kernel: xor: automatically using best checksumming function avx Jan 17 12:23:33.114757 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:23:33.120174 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:23:33.148073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:33.154677 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jan 17 12:23:33.158912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:23:33.195889 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:23:33.239685 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Jan 17 12:23:33.256526 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:23:33.268979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:23:33.335586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:33.368315 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 12:23:33.368394 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Jan 17 12:23:33.379797 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:23:33.411724 kernel: ACPI: bus type USB registered Jan 17 12:23:33.411744 kernel: usbcore: registered new interface driver usbfs Jan 17 12:23:33.432112 kernel: usbcore: registered new interface driver hub Jan 17 12:23:33.434004 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:23:33.472830 kernel: usbcore: registered new device driver usb Jan 17 12:23:33.472845 kernel: PTP clock support registered Jan 17 12:23:33.472853 kernel: libata version 3.00 loaded. 
Jan 17 12:23:33.470035 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:23:33.524827 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:23:33.524864 kernel: AES CTR mode by8 optimization enabled Jan 17 12:23:33.524894 kernel: ahci 0000:00:17.0: version 3.0 Jan 17 12:23:34.161209 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 17 12:23:34.161223 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 17 12:23:34.161298 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 17 12:23:34.161306 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 17 12:23:34.161370 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 17 12:23:34.161431 kernel: pps pps0: new PPS source ptp0 Jan 17 12:23:34.161497 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 17 12:23:34.161560 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 17 12:23:34.161625 kernel: scsi host0: ahci Jan 17 12:23:34.161686 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 17 12:23:34.161754 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 17 12:23:34.161815 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 17 12:23:34.161874 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 17 12:23:34.161933 kernel: hub 1-0:1.0: USB hub found Jan 17 12:23:34.162005 kernel: hub 1-0:1.0: 16 ports detected Jan 17 12:23:34.162071 kernel: hub 2-0:1.0: USB hub found Jan 17 12:23:34.162139 kernel: hub 2-0:1.0: 10 ports detected Jan 17 12:23:34.162203 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 17 12:23:34.162265 kernel: scsi host1: ahci Jan 17 12:23:34.162327 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:54 Jan 17 12:23:34.162391 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 17 12:23:34.162453 
kernel: scsi host2: ahci Jan 17 12:23:34.162510 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 17 12:23:34.162571 kernel: scsi host3: ahci Jan 17 12:23:34.162630 kernel: pps pps1: new PPS source ptp1 Jan 17 12:23:34.162686 kernel: scsi host4: ahci Jan 17 12:23:34.162750 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 17 12:23:34.162818 kernel: scsi host5: ahci Jan 17 12:23:34.162879 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 17 12:23:34.162940 kernel: scsi host6: ahci Jan 17 12:23:34.162999 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:55 Jan 17 12:23:34.163061 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 17 12:23:34.163070 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 17 12:23:34.163130 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 17 12:23:34.163140 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 17 12:23:34.163201 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 17 12:23:34.163209 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 17 12:23:34.163303 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 17 12:23:34.163312 kernel: hub 1-14:1.0: USB hub found Jan 17 12:23:34.163386 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 17 12:23:34.163394 kernel: hub 1-14:1.0: 4 ports detected Jan 17 12:23:34.163461 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 17 12:23:34.163469 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 17 12:23:33.509870 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:23:33.654487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 17 12:23:34.228893 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jan 17 12:23:34.762351 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 17 12:23:34.762433 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 17 12:23:34.762543 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:23:34.762553 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762560 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762568 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 17 12:23:34.762638 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 17 12:23:34.762648 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 17 12:23:34.762712 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762726 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762734 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 17 12:23:34.762741 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762749 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 17 12:23:34.762756 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 17 12:23:34.762763 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 17 12:23:34.762773 kernel: ata2.00: Features: NCQ-prio Jan 17 12:23:34.762780 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 17 12:23:34.762787 kernel: ata1.00: Features: NCQ-prio Jan 17 12:23:34.762795 kernel: ata2.00: configured for UDMA/133 Jan 17 12:23:34.762802 kernel: ata1.00: configured for UDMA/133 Jan 17 12:23:34.762809 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 17 12:23:34.762874 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 17 
12:23:34.213852 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:23:34.895818 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jan 17 12:23:35.421861 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 17 12:23:35.421963 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 17 12:23:35.422081 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 17 12:23:35.422180 kernel: usbcore: registered new interface driver usbhid Jan 17 12:23:35.422193 kernel: usbhid: USB HID core driver Jan 17 12:23:35.422200 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 17 12:23:35.422265 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 17 12:23:35.422273 kernel: ata2.00: Enabling discard_zeroes_data Jan 17 12:23:35.422280 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.422287 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 17 12:23:35.422346 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 17 12:23:35.422409 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 17 12:23:35.422469 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 12:23:35.422529 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 17 12:23:35.422588 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:23:35.422682 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 17 12:23:35.422770 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:23:35.422829 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 17 12:23:35.422887 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:23:35.422947 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 17 12:23:35.423022 
kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 17 12:23:35.423030 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 17 12:23:35.423096 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 17 12:23:35.423155 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 17 12:23:35.423214 kernel: ata2.00: Enabling discard_zeroes_data Jan 17 12:23:35.423222 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 17 12:23:35.423352 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.423360 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 17 12:23:35.423422 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 17 12:23:35.423481 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:23:35.423489 kernel: GPT:9289727 != 937703087 Jan 17 12:23:35.423496 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:23:35.423503 kernel: GPT:9289727 != 937703087 Jan 17 12:23:35.423510 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:23:35.423517 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:35.423526 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:23:35.423585 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 17 12:23:35.423647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (547) Jan 17 12:23:34.228818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 12:23:35.551829 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Jan 17 12:23:35.551918 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (559) Jan 17 12:23:35.551930 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Jan 17 12:23:34.228856 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:34.247879 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:34.265858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:23:34.275800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:23:34.275829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:34.292530 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:34.319818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:34.329892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:23:35.721814 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.721829 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:34.340103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:35.739853 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:34.364835 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:35.761849 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:34.373939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:35.780829 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.780841 disk-uuid[731]: Primary Header is updated. Jan 17 12:23:35.780841 disk-uuid[731]: Secondary Entries is updated. 
Jan 17 12:23:35.780841 disk-uuid[731]: Secondary Header is updated. Jan 17 12:23:35.817824 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:35.425484 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Jan 17 12:23:35.578624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jan 17 12:23:35.603534 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 17 12:23:35.625886 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 17 12:23:35.642816 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 17 12:23:35.677890 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:23:36.781092 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:36.799747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:36.799795 disk-uuid[732]: The operation has completed successfully. Jan 17 12:23:36.839051 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:23:36.839114 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:23:36.867839 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:23:36.902604 sh[749]: Success Jan 17 12:23:36.931770 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:23:36.975366 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:23:37.000983 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:23:37.002450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 12:23:37.070173 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:23:37.070194 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:37.090658 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:23:37.109056 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:23:37.126538 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:23:37.162764 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:23:37.163430 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:23:37.163776 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:23:37.177983 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:23:37.179359 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:23:37.253845 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:37.253864 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:37.271647 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:23:37.294351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:23:37.323373 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:23:37.323389 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:23:37.346724 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:37.353285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:23:37.367847 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 17 12:23:37.370883 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:23:37.401463 systemd-networkd[933]: lo: Link UP Jan 17 12:23:37.401465 systemd-networkd[933]: lo: Gained carrier Jan 17 12:23:37.403863 systemd-networkd[933]: Enumeration completed Jan 17 12:23:37.403938 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:23:37.404680 systemd-networkd[933]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:23:37.449534 ignition[931]: Ignition 2.19.0 Jan 17 12:23:37.413907 systemd[1]: Reached target network.target - Network. Jan 17 12:23:37.449539 ignition[931]: Stage: fetch-offline Jan 17 12:23:37.431274 systemd-networkd[933]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:23:37.449559 ignition[931]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:37.451668 unknown[931]: fetched base config from "system" Jan 17 12:23:37.449564 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 17 12:23:37.451672 unknown[931]: fetched user config from "system" Jan 17 12:23:37.449619 ignition[931]: parsed url from cmdline: "" Jan 17 12:23:37.452541 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:23:37.449621 ignition[931]: no config URL provided Jan 17 12:23:37.458056 systemd-networkd[933]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:23:37.449623 ignition[931]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:23:37.466160 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 17 12:23:37.449646 ignition[931]: parsing config with SHA512: a189ccd0faf595830c434bbf87d0352769c649c898bd7c1236afc5bf91acdccb4aee2f79620228f4e8b99ed099ca8aa1ae210b498015bfba5bc397b5de30f532 Jan 17 12:23:37.478854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:23:37.451894 ignition[931]: fetch-offline: fetch-offline passed Jan 17 12:23:37.451897 ignition[931]: POST message to Packet Timeline Jan 17 12:23:37.451899 ignition[931]: POST Status error: resource requires networking Jan 17 12:23:37.451934 ignition[931]: Ignition finished successfully Jan 17 12:23:37.488251 ignition[944]: Ignition 2.19.0 Jan 17 12:23:37.488268 ignition[944]: Stage: kargs Jan 17 12:23:37.488454 ignition[944]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:37.488463 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 17 12:23:37.690828 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 17 12:23:37.682328 systemd-networkd[933]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 12:23:37.489218 ignition[944]: kargs: kargs passed Jan 17 12:23:37.489222 ignition[944]: POST message to Packet Timeline Jan 17 12:23:37.489233 ignition[944]: GET https://metadata.packet.net/metadata: attempt #1 Jan 17 12:23:37.489780 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40024->[::1]:53: read: connection refused Jan 17 12:23:37.690157 ignition[944]: GET https://metadata.packet.net/metadata: attempt #2 Jan 17 12:23:37.690864 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55442->[::1]:53: read: connection refused Jan 17 12:23:37.964756 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 17 12:23:37.966165 systemd-networkd[933]: eno1: Link UP Jan 17 12:23:37.966316 systemd-networkd[933]: eno2: Link UP Jan 17 12:23:37.966440 systemd-networkd[933]: enp1s0f0np0: Link UP Jan 17 12:23:37.966604 systemd-networkd[933]: enp1s0f0np0: Gained carrier Jan 17 12:23:37.976881 systemd-networkd[933]: enp1s0f1np1: Link UP Jan 17 12:23:38.009881 systemd-networkd[933]: enp1s0f0np0: DHCPv4 address 147.75.90.1/31, gateway 147.75.90.0 acquired from 145.40.83.140 Jan 17 12:23:38.091812 ignition[944]: GET https://metadata.packet.net/metadata: attempt #3 Jan 17 12:23:38.093033 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36433->[::1]:53: read: connection refused Jan 17 12:23:38.690517 systemd-networkd[933]: enp1s0f1np1: Gained carrier Jan 17 12:23:38.893374 ignition[944]: GET https://metadata.packet.net/metadata: attempt #4 Jan 17 12:23:38.894602 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59199->[::1]:53: read: connection refused Jan 17 12:23:39.074325 systemd-networkd[933]: enp1s0f0np0: Gained IPv6LL Jan 17 
12:23:40.098326 systemd-networkd[933]: enp1s0f1np1: Gained IPv6LL Jan 17 12:23:40.496076 ignition[944]: GET https://metadata.packet.net/metadata: attempt #5 Jan 17 12:23:40.497248 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52231->[::1]:53: read: connection refused Jan 17 12:23:43.699866 ignition[944]: GET https://metadata.packet.net/metadata: attempt #6 Jan 17 12:23:45.125080 ignition[944]: GET result: OK Jan 17 12:23:45.496557 ignition[944]: Ignition finished successfully Jan 17 12:23:45.501286 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:23:45.527018 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:23:45.533012 ignition[964]: Ignition 2.19.0 Jan 17 12:23:45.533016 ignition[964]: Stage: disks Jan 17 12:23:45.533117 ignition[964]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:45.533123 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 17 12:23:45.533605 ignition[964]: disks: disks passed Jan 17 12:23:45.533607 ignition[964]: POST message to Packet Timeline Jan 17 12:23:45.533615 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1 Jan 17 12:23:46.591930 ignition[964]: GET result: OK Jan 17 12:23:46.920777 ignition[964]: Ignition finished successfully Jan 17 12:23:46.923069 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:23:46.940043 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:23:46.959003 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:23:46.981038 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:23:47.002133 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:23:47.022027 systemd[1]: Reached target basic.target - Basic System. 
Jan 17 12:23:47.052000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:23:47.088013 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:23:47.099157 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:23:47.121016 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:23:47.217778 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:23:47.217755 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:23:47.227231 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:23:47.259946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:23:47.380990 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (993) Jan 17 12:23:47.381003 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:47.381015 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:47.381022 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:23:47.381104 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:23:47.381111 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:23:47.288509 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:23:47.399245 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:23:47.411328 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 17 12:23:47.422001 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:23:47.422030 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 17 12:23:47.440156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:23:47.503986 coreos-metadata[1010]: Jan 17 12:23:47.494 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 17 12:23:47.477856 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:23:47.546849 coreos-metadata[1011]: Jan 17 12:23:47.494 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 17 12:23:47.517955 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:23:47.571913 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:23:47.582842 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:23:47.592834 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:23:47.604004 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:23:47.602112 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:23:47.630982 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:23:47.635602 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:23:47.676850 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:47.669581 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 17 12:23:47.685884 ignition[1113]: INFO : Ignition 2.19.0 Jan 17 12:23:47.685884 ignition[1113]: INFO : Stage: mount Jan 17 12:23:47.685884 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:47.685884 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 17 12:23:47.685884 ignition[1113]: INFO : mount: mount passed Jan 17 12:23:47.685884 ignition[1113]: INFO : POST message to Packet Timeline Jan 17 12:23:47.685884 ignition[1113]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 17 12:23:47.687624 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:23:48.438075 ignition[1113]: INFO : GET result: OK Jan 17 12:23:48.514841 coreos-metadata[1010]: Jan 17 12:23:48.514 INFO Fetch successful Jan 17 12:23:48.552693 coreos-metadata[1010]: Jan 17 12:23:48.552 INFO wrote hostname ci-4081.3.0-a-4c6521d577 to /sysroot/etc/hostname Jan 17 12:23:48.554198 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:23:48.887056 coreos-metadata[1011]: Jan 17 12:23:48.886 INFO Fetch successful Jan 17 12:23:48.965062 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 17 12:23:48.965124 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 17 12:23:49.485592 ignition[1113]: INFO : Ignition finished successfully Jan 17 12:23:49.488680 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:23:49.518949 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:23:49.529878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 17 12:23:49.581781 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1139)
Jan 17 12:23:49.581812 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:49.609771 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:23:49.626596 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:23:49.663120 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:23:49.663142 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:23:49.675742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:23:49.703293 ignition[1156]: INFO : Ignition 2.19.0
Jan 17 12:23:49.703293 ignition[1156]: INFO : Stage: files
Jan 17 12:23:49.716968 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:49.716968 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:49.716968 ignition[1156]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:23:49.716968 ignition[1156]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:23:49.707596 unknown[1156]: wrote ssh authorized keys file for user: core
Jan 17 12:23:49.850798 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 12:23:49.873623 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 17 12:23:50.352385 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 12:23:50.562965 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:50.562965 ignition[1156]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: files passed
Jan 17 12:23:50.591962 ignition[1156]: INFO : POST message to Packet Timeline
Jan 17 12:23:50.591962 ignition[1156]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 17 12:23:51.733102 ignition[1156]: INFO : GET result: OK
Jan 17 12:23:52.328618 ignition[1156]: INFO : Ignition finished successfully
Jan 17 12:23:52.331667 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:23:52.361289 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:23:52.372795 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:23:52.398680 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:23:52.398939 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:23:52.447335 initrd-setup-root-after-ignition[1195]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:23:52.447335 initrd-setup-root-after-ignition[1195]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:23:52.486001 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:23:52.451962 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:23:52.463016 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:23:52.511227 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:23:52.599871 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:23:52.600158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:23:52.621402 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:23:52.641975 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:23:52.663220 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:23:52.677152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:23:52.748371 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:23:52.775155 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:23:52.803878 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:23:52.815255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:23:52.836453 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:23:52.855311 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:23:52.855712 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:23:52.894149 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:23:52.904342 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:23:52.923349 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:23:52.941345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:23:52.962345 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:23:52.983350 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:23:53.003331 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:23:53.024366 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:23:53.045354 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:23:53.065276 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:23:53.083229 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:23:53.083631 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:23:53.120200 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:23:53.130351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:23:53.151206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:23:53.151655 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:23:53.175221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:23:53.175617 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:23:53.207303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:23:53.207780 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:23:53.228539 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:23:53.246206 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:23:53.246626 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:23:53.268443 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:23:53.286333 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:23:53.304302 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:23:53.304609 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:23:53.324361 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:23:53.324661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:23:53.347405 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:23:53.347830 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:23:53.367421 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:23:53.483953 ignition[1219]: INFO : Ignition 2.19.0
Jan 17 12:23:53.483953 ignition[1219]: INFO : Stage: umount
Jan 17 12:23:53.483953 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:53.483953 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:53.483953 ignition[1219]: INFO : umount: umount passed
Jan 17 12:23:53.483953 ignition[1219]: INFO : POST message to Packet Timeline
Jan 17 12:23:53.483953 ignition[1219]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 17 12:23:53.367820 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:23:53.385397 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 12:23:53.385814 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:23:53.417999 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:23:53.435973 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:23:53.436418 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:23:53.463956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:23:53.475801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:23:53.475874 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:23:53.496010 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:23:53.496090 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:23:53.546470 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:23:53.548334 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:23:53.548597 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:23:53.559749 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:23:53.560006 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:23:54.817408 ignition[1219]: INFO : GET result: OK
Jan 17 12:23:55.356264 ignition[1219]: INFO : Ignition finished successfully
Jan 17 12:23:55.359456 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:23:55.359769 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:23:55.375949 systemd[1]: Stopped target network.target - Network.
Jan 17 12:23:55.390951 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:23:55.391123 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:23:55.409094 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:23:55.409257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:23:55.427128 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:23:55.427281 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:23:55.446219 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:23:55.446378 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:23:55.465102 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:23:55.465267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:23:55.484496 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:23:55.493877 systemd-networkd[933]: enp1s0f0np0: DHCPv6 lease lost
Jan 17 12:23:55.502195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:23:55.505905 systemd-networkd[933]: enp1s0f1np1: DHCPv6 lease lost
Jan 17 12:23:55.520766 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:23:55.521046 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:23:55.540022 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:23:55.540369 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:23:55.561374 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:23:55.561492 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:23:55.597004 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:23:55.617897 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:23:55.617942 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:23:55.626074 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:23:55.626119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:23:55.657129 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:23:55.657270 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:23:55.675181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:23:55.675348 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:23:55.683587 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:23:55.719091 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:23:55.719167 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:23:55.742488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:23:55.742533 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:23:55.751100 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:23:55.751136 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:23:55.778979 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:23:55.779208 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:23:55.819935 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:23:55.820195 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:23:55.858888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:23:55.859139 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:23:55.903211 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:23:55.914133 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:23:56.133929 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:23:55.914288 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:23:55.953105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:23:55.953246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:55.976163 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:23:55.976482 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:23:56.015609 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:23:56.015963 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:23:56.035111 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:23:56.069942 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:23:56.074439 systemd[1]: Switching root.
Jan 17 12:23:56.229803 systemd-journald[268]: Journal stopped
Jan 17 12:23:32.001281 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Jan 17 12:23:32.001295 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:23:32.001302 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:23:32.001308 kernel: BIOS-provided physical RAM map:
Jan 17 12:23:32.001312 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jan 17 12:23:32.001316 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jan 17 12:23:32.001321 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jan 17 12:23:32.001325 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jan 17 12:23:32.001329 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jan 17 12:23:32.001333 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819ccfff] usable
Jan 17 12:23:32.001337 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] ACPI NVS
Jan 17 12:23:32.001342 kernel: BIOS-e820: [mem 0x00000000819ce000-0x00000000819cefff] reserved
Jan 17 12:23:32.001346 kernel: BIOS-e820: [mem 0x00000000819cf000-0x000000008afccfff] usable
Jan 17 12:23:32.001350 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Jan 17 12:23:32.001356 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Jan 17 12:23:32.001360 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Jan 17 12:23:32.001366 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Jan 17 12:23:32.001371 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jan 17 12:23:32.001375 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jan 17 12:23:32.001380 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 17 12:23:32.001385 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jan 17 12:23:32.001389 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jan 17 12:23:32.001394 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 17 12:23:32.001399 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jan 17 12:23:32.001403 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jan 17 12:23:32.001408 kernel: NX (Execute Disable) protection: active
Jan 17 12:23:32.001413 kernel: APIC: Static calls initialized
Jan 17 12:23:32.001418 kernel: SMBIOS 3.2.1 present.
Jan 17 12:23:32.001423 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022
Jan 17 12:23:32.001428 kernel: tsc: Detected 3400.000 MHz processor
Jan 17 12:23:32.001433 kernel: tsc: Detected 3399.906 MHz TSC
Jan 17 12:23:32.001438 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:23:32.001443 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:23:32.001448 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jan 17 12:23:32.001453 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Jan 17 12:23:32.001458 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:23:32.001462 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jan 17 12:23:32.001468 kernel: Using GB pages for direct mapping
Jan 17 12:23:32.001473 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:23:32.001478 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jan 17 12:23:32.001485 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jan 17 12:23:32.001490 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Jan 17 12:23:32.001495 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jan 17 12:23:32.001500 kernel: ACPI: FACS 0x000000008C66CF80 000040
Jan 17 12:23:32.001506 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Jan 17 12:23:32.001511 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Jan 17 12:23:32.001516 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jan 17 12:23:32.001521 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jan 17 12:23:32.001526 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jan 17 12:23:32.001531 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jan 17 12:23:32.001536 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jan 17 12:23:32.001542 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jan 17 12:23:32.001547 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001552 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jan 17 12:23:32.001557 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jan 17 12:23:32.001562 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001567 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001572 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jan 17 12:23:32.001578 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jan 17 12:23:32.001583 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001589 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jan 17 12:23:32.001594 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jan 17 12:23:32.001599 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jan 17 12:23:32.001604 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jan 17 12:23:32.001609 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jan 17 12:23:32.001614 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jan 17 12:23:32.001619 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jan 17 12:23:32.001624 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jan 17 12:23:32.001630 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jan 17 12:23:32.001635 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jan 17 12:23:32.001640 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jan 17 12:23:32.001645 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jan 17 12:23:32.001650 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Jan 17 12:23:32.001655 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Jan 17 12:23:32.001660 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Jan 17 12:23:32.001665 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Jan 17 12:23:32.001670 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Jan 17 12:23:32.001676 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Jan 17 12:23:32.001681 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Jan 17 12:23:32.001686 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Jan 17 12:23:32.001691 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Jan 17 12:23:32.001696 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Jan 17 12:23:32.001701 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Jan 17 12:23:32.001706 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Jan 17 12:23:32.001711 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Jan 17 12:23:32.001719 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Jan 17 12:23:32.001726 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Jan 17 12:23:32.001731 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Jan 17 12:23:32.001755 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Jan 17 12:23:32.001760 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Jan 17 12:23:32.001765 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Jan 17 12:23:32.001785 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Jan 17 12:23:32.001790 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Jan 17 12:23:32.001795 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Jan 17 12:23:32.001800 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Jan 17 12:23:32.001805 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Jan 17 12:23:32.001811 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Jan 17 12:23:32.001816 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Jan 17 12:23:32.001821 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Jan 17 12:23:32.001826 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Jan 17 12:23:32.001831 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Jan 17 12:23:32.001836 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Jan 17 12:23:32.001841 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Jan 17 12:23:32.001846 kernel: No NUMA configuration found
Jan 17 12:23:32.001851 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jan 17 12:23:32.001857 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jan 17 12:23:32.001862 kernel: Zone ranges:
Jan 17 12:23:32.001868 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:23:32.001873 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 12:23:32.001878 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Jan 17 12:23:32.001883 kernel: Movable zone start for each node
Jan 17 12:23:32.001888 kernel: Early memory node ranges
Jan 17 12:23:32.001893 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Jan 17 12:23:32.001898 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Jan 17 12:23:32.001904 kernel:   node   0: [mem 0x0000000040400000-0x00000000819ccfff]
Jan 17 12:23:32.001909 kernel:   node   0: [mem 0x00000000819cf000-0x000000008afccfff]
Jan 17 12:23:32.001914 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Jan 17 12:23:32.001919 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Jan 17 12:23:32.001928 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Jan 17 12:23:32.001933 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jan 17 12:23:32.001938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:23:32.001944 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jan 17 12:23:32.001950 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 17 12:23:32.001955 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jan 17 12:23:32.001961 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jan 17 12:23:32.001966 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Jan 17 12:23:32.001972 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jan 17 12:23:32.001977 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jan 17 12:23:32.001982 kernel: ACPI: PM-Timer IO Port: 0x1808
Jan 17 12:23:32.001988 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 17 12:23:32.001993 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 17 12:23:32.001999 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 17 12:23:32.002005 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 17 12:23:32.002010 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 17 12:23:32.002015 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 17 12:23:32.002021 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 17 12:23:32.002026 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 17 12:23:32.002031 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 17 12:23:32.002036 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 17 12:23:32.002042 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 17 12:23:32.002048 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 17 12:23:32.002053 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 17 12:23:32.002059 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 17 12:23:32.002064 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 17 12:23:32.002069 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 17 12:23:32.002075 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jan 17 12:23:32.002080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:23:32.002085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:23:32.002091 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:23:32.002096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:23:32.002102 kernel: TSC deadline timer available
Jan 17 12:23:32.002108 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jan 17 12:23:32.002113 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jan 17 12:23:32.002119 kernel: Booting paravirtualized kernel on bare hardware
Jan 17 12:23:32.002124 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:23:32.002130 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 17 12:23:32.002135 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 17 12:23:32.002140 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 17 12:23:32.002146 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 17 12:23:32.002153 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:23:32.002158 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:23:32.002164 kernel: random: crng init done
Jan 17 12:23:32.002169 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jan 17 12:23:32.002174 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 17 12:23:32.002180 kernel: Fallback order for Node 0: 0
Jan 17 12:23:32.002185 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Jan 17 12:23:32.002192 kernel: Policy zone: Normal
Jan 17 12:23:32.002197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:23:32.002203 kernel: software IO TLB: area num 16.
Jan 17 12:23:32.002208 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 732416K reserved, 0K cma-reserved)
Jan 17 12:23:32.002214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 17 12:23:32.002219 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:23:32.002225 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:23:32.002230 kernel: Dynamic Preempt: voluntary
Jan 17 12:23:32.002235 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:23:32.002244 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:23:32.002249 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 17 12:23:32.002255 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:23:32.002260 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:23:32.002266 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:23:32.002271 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:23:32.002277 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 17 12:23:32.002282 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 17 12:23:32.002287 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:23:32.002293 kernel: Console: colour dummy device 80x25 Jan 17 12:23:32.002299 kernel: printk: console [tty0] enabled Jan 17 12:23:32.002304 kernel: printk: console [ttyS1] enabled Jan 17 12:23:32.002310 kernel: ACPI: Core revision 20230628 Jan 17 12:23:32.002315 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Jan 17 12:23:32.002321 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:23:32.002326 kernel: DMAR: Host address width 39 Jan 17 12:23:32.002332 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 17 12:23:32.002337 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 17 12:23:32.002342 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 17 12:23:32.002349 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 17 12:23:32.002354 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 17 12:23:32.002359 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
Jan 17 12:23:32.002365 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 17 12:23:32.002370 kernel: x2apic enabled Jan 17 12:23:32.002375 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 17 12:23:32.002381 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 17 12:23:32.002386 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 17 12:23:32.002392 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 17 12:23:32.002398 kernel: process: using mwait in idle threads Jan 17 12:23:32.002404 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 12:23:32.002409 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 17 12:23:32.002414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:23:32.002420 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 12:23:32.002425 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 12:23:32.002430 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 17 12:23:32.002436 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:23:32.002441 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 17 12:23:32.002446 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 17 12:23:32.002452 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:23:32.002458 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:23:32.002463 kernel: TAA: Mitigation: TSX disabled Jan 17 12:23:32.002469 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 17 12:23:32.002474 kernel: SRBDS: Mitigation: Microcode Jan 17 12:23:32.002480 kernel: GDS: Mitigation: Microcode Jan 17 12:23:32.002485 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' Jan 17 12:23:32.002490 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:23:32.002496 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:23:32.002501 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 17 12:23:32.002506 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 17 12:23:32.002512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:23:32.002518 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 17 12:23:32.002523 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 17 12:23:32.002529 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 17 12:23:32.002534 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:23:32.002540 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:23:32.002545 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:23:32.002550 kernel: landlock: Up and running. Jan 17 12:23:32.002556 kernel: SELinux: Initializing. Jan 17 12:23:32.002561 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:23:32.002566 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:23:32.002572 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 17 12:23:32.002578 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 12:23:32.002584 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 12:23:32.002589 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 12:23:32.002595 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 17 12:23:32.002600 kernel: ... 
version: 4 Jan 17 12:23:32.002605 kernel: ... bit width: 48 Jan 17 12:23:32.002611 kernel: ... generic registers: 4 Jan 17 12:23:32.002616 kernel: ... value mask: 0000ffffffffffff Jan 17 12:23:32.002621 kernel: ... max period: 00007fffffffffff Jan 17 12:23:32.002628 kernel: ... fixed-purpose events: 3 Jan 17 12:23:32.002633 kernel: ... event mask: 000000070000000f Jan 17 12:23:32.002638 kernel: signal: max sigframe size: 2032 Jan 17 12:23:32.002644 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 17 12:23:32.002649 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:23:32.002655 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:23:32.002660 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 17 12:23:32.002665 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:23:32.002671 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:23:32.002677 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 17 12:23:32.002683 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 17 12:23:32.002689 kernel: smp: Brought up 1 node, 16 CPUs Jan 17 12:23:32.002694 kernel: smpboot: Max logical packages: 1 Jan 17 12:23:32.002699 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 17 12:23:32.002705 kernel: devtmpfs: initialized Jan 17 12:23:32.002710 kernel: x86/mm: Memory block size: 128MB Jan 17 12:23:32.002715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cd000-0x819cdfff] (4096 bytes) Jan 17 12:23:32.002759 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 17 12:23:32.002766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:23:32.002785 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 17 12:23:32.002791 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:23:32.002796 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:23:32.002801 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:23:32.002807 kernel: audit: type=2000 audit(1737116606.039:1): state=initialized audit_enabled=0 res=1 Jan 17 12:23:32.002812 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:23:32.002817 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:23:32.002823 kernel: cpuidle: using governor menu Jan 17 12:23:32.002829 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:23:32.002834 kernel: dca service started, version 1.12.1 Jan 17 12:23:32.002840 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 17 12:23:32.002845 kernel: PCI: Using configuration type 1 for base access Jan 17 12:23:32.002850 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 17 12:23:32.002856 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:23:32.002861 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:23:32.002866 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:23:32.002873 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:23:32.002878 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:23:32.002883 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:23:32.002889 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:23:32.002894 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:23:32.002899 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:23:32.002905 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 17 12:23:32.002910 kernel: ACPI: Dynamic OEM Table Load: Jan 17 12:23:32.002916 kernel: ACPI: SSDT 0xFFFF9FDB01601C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 17 12:23:32.002921 kernel: ACPI: Dynamic OEM Table Load: Jan 17 12:23:32.002927 kernel: ACPI: SSDT 0xFFFF9FDB015F8000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 17 12:23:32.002932 kernel: ACPI: Dynamic OEM Table Load: Jan 17 12:23:32.002938 kernel: ACPI: SSDT 0xFFFF9FDB015E5400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 17 12:23:32.002943 kernel: ACPI: Dynamic OEM Table Load: Jan 17 12:23:32.002948 kernel: ACPI: SSDT 0xFFFF9FDB015FD000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 17 12:23:32.002954 kernel: ACPI: Dynamic OEM Table Load: Jan 17 12:23:32.002959 kernel: ACPI: SSDT 0xFFFF9FDB0160E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 17 12:23:32.002964 kernel: ACPI: Dynamic OEM Table Load: Jan 17 12:23:32.002969 kernel: ACPI: SSDT 0xFFFF9FDB00EEB400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 17 12:23:32.002976 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 17 12:23:32.002981 kernel: ACPI: Interpreter enabled Jan 17 12:23:32.002986 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:23:32.002992 kernel: ACPI: Using IOAPIC 
for interrupt routing Jan 17 12:23:32.002997 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 17 12:23:32.003002 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 17 12:23:32.003008 kernel: HEST: Table parsing has been initialized. Jan 17 12:23:32.003013 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jan 17 12:23:32.003019 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:23:32.003025 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:23:32.003030 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 17 12:23:32.003036 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 17 12:23:32.003041 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 17 12:23:32.003047 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 17 12:23:32.003052 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 17 12:23:32.003057 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 17 12:23:32.003063 kernel: ACPI: \_TZ_.FN00: New power resource Jan 17 12:23:32.003068 kernel: ACPI: \_TZ_.FN01: New power resource Jan 17 12:23:32.003074 kernel: ACPI: \_TZ_.FN02: New power resource Jan 17 12:23:32.003080 kernel: ACPI: \_TZ_.FN03: New power resource Jan 17 12:23:32.003085 kernel: ACPI: \_TZ_.FN04: New power resource Jan 17 12:23:32.003091 kernel: ACPI: \PIN_: New power resource Jan 17 12:23:32.003096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 17 12:23:32.003170 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:23:32.003223 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 17 12:23:32.003271 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 17 12:23:32.003280 kernel: PCI host bridge to bus 0000:00 Jan 17 12:23:32.003332 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 17 12:23:32.003376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:23:32.003419 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:23:32.003460 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 17 12:23:32.003503 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 17 12:23:32.003544 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 17 12:23:32.003605 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 17 12:23:32.003662 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 17 12:23:32.003712 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.003800 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 17 12:23:32.003848 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 17 12:23:32.003899 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 17 12:23:32.003951 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 17 12:23:32.004003 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 17 12:23:32.004051 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 17 12:23:32.004098 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 17 12:23:32.004150 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 17 12:23:32.004197 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 17 12:23:32.004247 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 17 12:23:32.004300 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 17 12:23:32.004349 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 17 12:23:32.004403 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 17 12:23:32.004451 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 17 
12:23:32.004503 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 17 12:23:32.004552 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 17 12:23:32.004603 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 17 12:23:32.004660 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 17 12:23:32.004712 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 17 12:23:32.004800 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 17 12:23:32.004853 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 17 12:23:32.004904 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 17 12:23:32.004954 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 17 12:23:32.005005 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 17 12:23:32.005054 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 17 12:23:32.005100 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 17 12:23:32.005147 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 17 12:23:32.005195 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 17 12:23:32.005242 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 17 12:23:32.005293 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 17 12:23:32.005341 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 17 12:23:32.005395 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 17 12:23:32.005443 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.005502 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 17 12:23:32.005550 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.005603 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 17 12:23:32.005651 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.005704 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 
17 12:23:32.005791 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.005844 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 17 12:23:32.005892 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.005944 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 17 12:23:32.005991 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 17 12:23:32.006045 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 17 12:23:32.006101 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 17 12:23:32.006151 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 17 12:23:32.006200 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 17 12:23:32.006252 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 17 12:23:32.006300 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 17 12:23:32.006356 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 17 12:23:32.006406 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 17 12:23:32.006457 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 17 12:23:32.006508 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 17 12:23:32.006557 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 17 12:23:32.006607 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 17 12:23:32.006662 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 17 12:23:32.006711 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 17 12:23:32.006798 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 17 12:23:32.006848 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 17 12:23:32.006898 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 17 12:23:32.006946 kernel: 
pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 17 12:23:32.006996 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:23:32.007044 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 17 12:23:32.007093 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 17 12:23:32.007142 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 17 12:23:32.007194 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 17 12:23:32.007248 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 17 12:23:32.007297 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 17 12:23:32.007347 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 17 12:23:32.007395 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 17 12:23:32.007444 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.007492 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 17 12:23:32.007540 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 17 12:23:32.007591 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 17 12:23:32.007645 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 17 12:23:32.007695 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 17 12:23:32.007767 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 17 12:23:32.007831 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 17 12:23:32.007880 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 17 12:23:32.007930 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 17 12:23:32.007981 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 17 12:23:32.008030 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 17 12:23:32.008077 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 17 12:23:32.008126 kernel: pci 0000:00:1c.0: PCI 
bridge to [bus 05] Jan 17 12:23:32.008181 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 17 12:23:32.008230 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 17 12:23:32.008279 kernel: pci 0000:06:00.0: supports D1 D2 Jan 17 12:23:32.008327 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:23:32.008380 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 17 12:23:32.008427 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.008475 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.008530 kernel: pci_bus 0000:07: extended config space not accessible Jan 17 12:23:32.008588 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 17 12:23:32.008641 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 17 12:23:32.008691 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 17 12:23:32.008770 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 17 12:23:32.008838 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:23:32.008890 kernel: pci 0000:07:00.0: supports D1 D2 Jan 17 12:23:32.008943 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 17 12:23:32.008991 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 17 12:23:32.009041 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.009089 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.009100 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 17 12:23:32.009106 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 17 12:23:32.009112 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 17 12:23:32.009117 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 17 12:23:32.009123 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 17 12:23:32.009129 kernel: ACPI: PCI: Interrupt link LNKF configured 
for IRQ 0 Jan 17 12:23:32.009134 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 17 12:23:32.009140 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 17 12:23:32.009146 kernel: iommu: Default domain type: Translated Jan 17 12:23:32.009152 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:23:32.009158 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:23:32.009164 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:23:32.009170 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 17 12:23:32.009175 kernel: e820: reserve RAM buffer [mem 0x819cd000-0x83ffffff] Jan 17 12:23:32.009181 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 17 12:23:32.009187 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 17 12:23:32.009192 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 17 12:23:32.009198 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 17 12:23:32.009249 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 17 12:23:32.009299 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 17 12:23:32.009350 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:23:32.009358 kernel: vgaarb: loaded Jan 17 12:23:32.009364 kernel: clocksource: Switched to clocksource tsc-early Jan 17 12:23:32.009370 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:23:32.009376 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:23:32.009382 kernel: pnp: PnP ACPI init Jan 17 12:23:32.009432 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 17 12:23:32.009482 kernel: pnp 00:02: [dma 0 disabled] Jan 17 12:23:32.009530 kernel: pnp 00:03: [dma 0 disabled] Jan 17 12:23:32.009579 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 17 12:23:32.009624 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 17 12:23:32.009672 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Jan 17 12:23:32.009722 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 17 12:23:32.009812 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 17 12:23:32.009856 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 17 12:23:32.009899 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 17 12:23:32.009946 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 17 12:23:32.009989 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 17 12:23:32.010034 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 17 12:23:32.010077 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 17 12:23:32.010128 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 17 12:23:32.010173 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 17 12:23:32.010216 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 17 12:23:32.010260 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 17 12:23:32.010302 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 17 12:23:32.010346 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 17 12:23:32.010391 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 17 12:23:32.010438 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 17 12:23:32.010447 kernel: pnp: PnP ACPI: found 10 devices Jan 17 12:23:32.010453 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:23:32.010459 kernel: NET: Registered PF_INET protocol family Jan 17 12:23:32.010465 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:23:32.010471 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 17 12:23:32.010478 kernel: Table-perturb 
hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:23:32.010484 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:23:32.010491 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 12:23:32.010497 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 17 12:23:32.010502 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:23:32.010508 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:23:32.010514 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:23:32.010520 kernel: NET: Registered PF_XDP protocol family Jan 17 12:23:32.010567 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 17 12:23:32.010617 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 17 12:23:32.010667 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 17 12:23:32.010720 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010814 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010865 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010915 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 17 12:23:32.010963 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 17 12:23:32.011012 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 17 12:23:32.011059 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 17 12:23:32.011110 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 17 12:23:32.011158 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 17 12:23:32.011207 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 17 12:23:32.011255 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 
17 12:23:32.011303 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 17 12:23:32.011353 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 17 12:23:32.011401 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 17 12:23:32.011448 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 17 12:23:32.011498 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 17 12:23:32.011548 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.011597 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.011645 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 17 12:23:32.011692 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 17 12:23:32.011768 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 17 12:23:32.011835 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 17 12:23:32.011878 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:23:32.011921 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:23:32.011964 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:23:32.012006 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 17 12:23:32.012049 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 17 12:23:32.012098 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 17 12:23:32.012145 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 17 12:23:32.012195 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 17 12:23:32.012240 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 17 12:23:32.012288 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 17 12:23:32.012333 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 17 12:23:32.012382 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 17 12:23:32.012429 
kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 17 12:23:32.012476 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 17 12:23:32.012521 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 17 12:23:32.012529 kernel: PCI: CLS 64 bytes, default 64 Jan 17 12:23:32.012535 kernel: DMAR: No ATSR found Jan 17 12:23:32.012541 kernel: DMAR: No SATC found Jan 17 12:23:32.012547 kernel: DMAR: dmar0: Using Queued invalidation Jan 17 12:23:32.012595 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 17 12:23:32.012645 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 17 12:23:32.012694 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 17 12:23:32.012770 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 17 12:23:32.012839 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 17 12:23:32.012887 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 17 12:23:32.012935 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 17 12:23:32.012981 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 17 12:23:32.013030 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 17 12:23:32.013076 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 17 12:23:32.013127 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 17 12:23:32.013174 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 17 12:23:32.013223 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 17 12:23:32.013271 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 17 12:23:32.013318 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 17 12:23:32.013367 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 17 12:23:32.013415 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 17 12:23:32.013462 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 17 12:23:32.013512 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 17 12:23:32.013561 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 17 12:23:32.013608 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Jan 17 12:23:32.013658 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 17 12:23:32.013708 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 17 12:23:32.013807 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 17 12:23:32.013856 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 17 12:23:32.013906 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 17 12:23:32.013961 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 17 12:23:32.013969 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 17 12:23:32.013975 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 12:23:32.013981 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 17 12:23:32.013987 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 17 12:23:32.013993 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 17 12:23:32.013999 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 17 12:23:32.014004 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 17 12:23:32.014053 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 17 12:23:32.014064 kernel: Initialise system trusted keyrings Jan 17 12:23:32.014070 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 17 12:23:32.014075 kernel: Key type asymmetric registered Jan 17 12:23:32.014081 kernel: Asymmetric key parser 'x509' registered Jan 17 12:23:32.014087 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:23:32.014092 kernel: io scheduler mq-deadline registered Jan 17 12:23:32.014098 kernel: io scheduler kyber registered Jan 17 12:23:32.014104 kernel: io scheduler bfq registered Jan 17 12:23:32.014153 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 17 12:23:32.014201 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 17 12:23:32.014251 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Jan 17 12:23:32.014299 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 17 12:23:32.014347 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 17 12:23:32.014395 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 17 12:23:32.014452 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 17 12:23:32.014463 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 17 12:23:32.014469 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 17 12:23:32.014475 kernel: pstore: Using crash dump compression: deflate Jan 17 12:23:32.014481 kernel: pstore: Registered erst as persistent store backend Jan 17 12:23:32.014486 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:23:32.014492 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:23:32.014498 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:23:32.014504 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 12:23:32.014510 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 17 12:23:32.014560 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 17 12:23:32.014569 kernel: i8042: PNP: No PS/2 controller found. 
Jan 17 12:23:32.014613 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 17 12:23:32.014658 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 17 12:23:32.014702 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-17T12:23:30 UTC (1737116610) Jan 17 12:23:32.014795 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 17 12:23:32.014804 kernel: intel_pstate: Intel P-state driver initializing Jan 17 12:23:32.014812 kernel: intel_pstate: Disabling energy efficiency optimization Jan 17 12:23:32.014817 kernel: intel_pstate: HWP enabled Jan 17 12:23:32.014823 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jan 17 12:23:32.014829 kernel: vesafb: scrolling: redraw Jan 17 12:23:32.014835 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jan 17 12:23:32.014840 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000c9cc047d, using 768k, total 768k Jan 17 12:23:32.014846 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:23:32.014852 kernel: fb0: VESA VGA frame buffer device Jan 17 12:23:32.014858 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:23:32.014863 kernel: Segment Routing with IPv6 Jan 17 12:23:32.014870 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:23:32.014876 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:23:32.014881 kernel: Key type dns_resolver registered Jan 17 12:23:32.014887 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 17 12:23:32.014893 kernel: IPI shorthand broadcast: enabled Jan 17 12:23:32.014899 kernel: sched_clock: Marking stable (2475082649, 1384705464)->(4403413811, -543625698) Jan 17 12:23:32.014904 kernel: registered taskstats version 1 Jan 17 12:23:32.014910 kernel: Loading compiled-in X.509 certificates Jan 17 12:23:32.014916 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:23:32.014923 kernel: Key type .fscrypt registered Jan 17 12:23:32.014928 kernel: Key type fscrypt-provisioning registered Jan 17 12:23:32.014934 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:23:32.014939 kernel: ima: No architecture policies found Jan 17 12:23:32.014945 kernel: clk: Disabling unused clocks Jan 17 12:23:32.014951 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:23:32.014957 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:23:32.014962 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:23:32.014969 kernel: Run /init as init process Jan 17 12:23:32.014975 kernel: with arguments: Jan 17 12:23:32.014981 kernel: /init Jan 17 12:23:32.014986 kernel: with environment: Jan 17 12:23:32.014992 kernel: HOME=/ Jan 17 12:23:32.014997 kernel: TERM=linux Jan 17 12:23:32.015003 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:23:32.015010 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:23:32.015019 systemd[1]: Detected architecture x86-64. Jan 17 12:23:32.015025 systemd[1]: Running in initrd. Jan 17 12:23:32.015031 systemd[1]: No hostname configured, using default hostname. 
Jan 17 12:23:32.015037 systemd[1]: Hostname set to <localhost>. Jan 17 12:23:32.015043 systemd[1]: Initializing machine ID from random generator. Jan 17 12:23:32.015049 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:23:32.015055 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:32.015061 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:32.015068 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:23:32.015075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:23:32.015081 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:23:32.015087 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:23:32.015093 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:23:32.015100 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:23:32.015106 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jan 17 12:23:32.015112 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jan 17 12:23:32.015118 kernel: clocksource: Switched to clocksource tsc Jan 17 12:23:32.015124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:23:32.015130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:23:32.015137 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:23:32.015143 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:23:32.015149 systemd[1]: Reached target swap.target - Swaps. 
Jan 17 12:23:32.015155 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:23:32.015161 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:23:32.015168 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:23:32.015174 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:23:32.015180 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:23:32.015186 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:32.015192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:32.015198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:32.015204 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:23:32.015210 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:23:32.015217 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:23:32.015223 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:23:32.015229 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:23:32.015235 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:23:32.015252 systemd-journald[268]: Collecting audit messages is disabled. Jan 17 12:23:32.015267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:23:32.015274 systemd-journald[268]: Journal started Jan 17 12:23:32.015287 systemd-journald[268]: Runtime Journal (/run/log/journal/d5e271653ac248368cca908aa695e3d2) is 8.0M, max 639.9M, 631.9M free. Jan 17 12:23:32.049285 systemd-modules-load[270]: Inserted module 'overlay' Jan 17 12:23:32.058918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:32.079741 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 17 12:23:32.079713 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:23:32.151958 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:23:32.151971 kernel: Bridge firewalling registered Jan 17 12:23:32.136893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:32.140975 systemd-modules-load[270]: Inserted module 'br_netfilter' Jan 17 12:23:32.164027 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:23:32.185109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:32.193119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:32.221962 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:32.227073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:23:32.250138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:23:32.279399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:23:32.284936 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:23:32.285574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:23:32.300135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:32.333161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:32.344291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:23:32.365856 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 17 12:23:32.404986 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:23:32.415533 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:23:32.437570 systemd-resolved[324]: Positive Trust Anchors: Jan 17 12:23:32.460796 dracut-cmdline[307]: dracut-dracut-053 Jan 17 12:23:32.460796 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:23:32.549253 kernel: SCSI subsystem initialized Jan 17 12:23:32.549271 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:23:32.549282 kernel: iscsi: registered transport (tcp) Jan 17 12:23:32.549290 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:23:32.437577 systemd-resolved[324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:23:32.586985 kernel: QLogic iSCSI HBA Driver Jan 17 12:23:32.437610 systemd-resolved[324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:23:32.439782 systemd-resolved[324]: Defaulting to hostname 'linux'. 
Jan 17 12:23:32.440501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:23:32.453897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:23:32.585633 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:23:32.610822 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:23:32.767735 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:23:32.767781 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:23:32.787692 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:23:32.845783 kernel: raid6: avx2x4 gen() 52949 MB/s Jan 17 12:23:32.877751 kernel: raid6: avx2x2 gen() 53610 MB/s Jan 17 12:23:32.914210 kernel: raid6: avx2x1 gen() 45029 MB/s Jan 17 12:23:32.914226 kernel: raid6: using algorithm avx2x2 gen() 53610 MB/s Jan 17 12:23:32.961251 kernel: raid6: .... xor() 31423 MB/s, rmw enabled Jan 17 12:23:32.961269 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:23:33.002749 kernel: xor: automatically using best checksumming function avx Jan 17 12:23:33.114757 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:23:33.120174 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:23:33.148073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:33.154677 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jan 17 12:23:33.158912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:23:33.195889 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:23:33.239685 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Jan 17 12:23:33.256526 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 12:23:33.268979 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:23:33.335586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:33.368315 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 12:23:33.368394 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Jan 17 12:23:33.379797 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:23:33.411724 kernel: ACPI: bus type USB registered Jan 17 12:23:33.411744 kernel: usbcore: registered new interface driver usbfs Jan 17 12:23:33.432112 kernel: usbcore: registered new interface driver hub Jan 17 12:23:33.434004 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:23:33.472830 kernel: usbcore: registered new device driver usb Jan 17 12:23:33.472845 kernel: PTP clock support registered Jan 17 12:23:33.472853 kernel: libata version 3.00 loaded. Jan 17 12:23:33.470035 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:23:33.524827 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:23:33.524864 kernel: AES CTR mode by8 optimization enabled Jan 17 12:23:33.524894 kernel: ahci 0000:00:17.0: version 3.0 Jan 17 12:23:34.161209 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 17 12:23:34.161223 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 17 12:23:34.161298 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jan 17 12:23:34.161306 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 17 12:23:34.161370 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 17 12:23:34.161431 kernel: pps pps0: new PPS source ptp0 Jan 17 12:23:34.161497 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 17 12:23:34.161560 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 17 12:23:34.161625 kernel: scsi host0: ahci Jan 17 12:23:34.161686 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 17 12:23:34.161754 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 17 12:23:34.161815 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 17 12:23:34.161874 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 17 12:23:34.161933 kernel: hub 1-0:1.0: USB hub found Jan 17 12:23:34.162005 kernel: hub 1-0:1.0: 16 ports detected Jan 17 12:23:34.162071 kernel: hub 2-0:1.0: USB hub found Jan 17 12:23:34.162139 kernel: hub 2-0:1.0: 10 ports detected Jan 17 12:23:34.162203 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 17 12:23:34.162265 kernel: scsi host1: ahci Jan 17 12:23:34.162327 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:54 Jan 17 12:23:34.162391 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 17 12:23:34.162453 kernel: scsi host2: ahci Jan 17 12:23:34.162510 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jan 17 12:23:34.162571 kernel: scsi host3: ahci Jan 17 12:23:34.162630 kernel: pps pps1: new PPS source ptp1 Jan 17 12:23:34.162686 kernel: scsi host4: ahci Jan 17 12:23:34.162750 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 17 12:23:34.162818 kernel: scsi host5: ahci Jan 17 12:23:34.162879 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 17 12:23:34.162940 kernel: scsi host6: ahci Jan 17 12:23:34.162999 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:55 Jan 17 12:23:34.163061 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 17 12:23:34.163070 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 17 12:23:34.163130 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 17 12:23:34.163140 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 17 12:23:34.163201 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 17 12:23:34.163209 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 17 12:23:34.163303 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 17 12:23:34.163312 kernel: hub 1-14:1.0: USB hub found Jan 17 12:23:34.163386 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 17 12:23:34.163394 kernel: hub 1-14:1.0: 4 ports detected Jan 17 12:23:34.163461 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 17 12:23:34.163469 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 17 12:23:33.509870 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:23:33.654487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:23:34.228893 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jan 17 12:23:34.762351 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 17 12:23:34.762433 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 17 12:23:34.762543 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:23:34.762553 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762560 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762568 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 17 12:23:34.762638 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 17 12:23:34.762648 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 17 12:23:34.762712 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762726 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762734 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 17 12:23:34.762741 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:34.762749 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 17 12:23:34.762756 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 17 12:23:34.762763 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 17 12:23:34.762773 kernel: ata2.00: Features: NCQ-prio Jan 17 12:23:34.762780 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 17 12:23:34.762787 kernel: ata1.00: Features: NCQ-prio Jan 17 12:23:34.762795 kernel: ata2.00: configured for UDMA/133 Jan 17 12:23:34.762802 kernel: ata1.00: configured for UDMA/133 Jan 17 12:23:34.762809 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 17 12:23:34.762874 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 17 12:23:34.213852 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:23:34.895818 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jan 17 12:23:35.421861 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 17 12:23:35.421963 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 17 12:23:35.422081 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 17 12:23:35.422180 kernel: usbcore: registered new interface driver usbhid Jan 17 12:23:35.422193 kernel: usbhid: USB HID core driver Jan 17 12:23:35.422200 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 17 12:23:35.422265 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 17 12:23:35.422273 kernel: ata2.00: Enabling discard_zeroes_data Jan 17 12:23:35.422280 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.422287 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 17 12:23:35.422346 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 17 12:23:35.422409 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 17 12:23:35.422469 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 12:23:35.422529 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 17 12:23:35.422588 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:23:35.422682 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 17 12:23:35.422770 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:23:35.422829 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 17 12:23:35.422887 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:23:35.422947 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 17 12:23:35.423022 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jan 17 12:23:35.423030 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 17 12:23:35.423096 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 17 12:23:35.423155 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 17 12:23:35.423214 kernel: ata2.00: Enabling discard_zeroes_data Jan 17 12:23:35.423222 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 17 12:23:35.423352 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.423360 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 17 12:23:35.423422 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 17 12:23:35.423481 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:23:35.423489 kernel: GPT:9289727 != 937703087 Jan 17 12:23:35.423496 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:23:35.423503 kernel: GPT:9289727 != 937703087 Jan 17 12:23:35.423510 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:23:35.423517 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:35.423526 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:23:35.423585 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 17 12:23:35.423647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (547) Jan 17 12:23:34.228818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:23:35.551829 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Jan 17 12:23:35.551918 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (559) Jan 17 12:23:35.551930 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Jan 17 12:23:34.228856 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:34.247879 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:34.265858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:23:34.275800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:23:34.275829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:34.292530 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:34.319818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:34.329892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:23:35.721814 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.721829 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:34.340103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:35.739853 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:34.364835 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:35.761849 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:34.373939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:35.780829 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:35.780841 disk-uuid[731]: Primary Header is updated. Jan 17 12:23:35.780841 disk-uuid[731]: Secondary Entries is updated. 
Jan 17 12:23:35.780841 disk-uuid[731]: Secondary Header is updated. Jan 17 12:23:35.817824 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:35.425484 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Jan 17 12:23:35.578624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jan 17 12:23:35.603534 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 17 12:23:35.625886 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 17 12:23:35.642816 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 17 12:23:35.677890 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:23:36.781092 kernel: ata1.00: Enabling discard_zeroes_data Jan 17 12:23:36.799747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:23:36.799795 disk-uuid[732]: The operation has completed successfully. Jan 17 12:23:36.839051 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:23:36.839114 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:23:36.867839 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:23:36.902604 sh[749]: Success Jan 17 12:23:36.931770 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:23:36.975366 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:23:37.000983 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:23:37.002450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 12:23:37.070173 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:23:37.070194 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:37.090658 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:23:37.109056 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:23:37.126538 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:23:37.162764 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:23:37.163430 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:23:37.163776 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:23:37.177983 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:23:37.179359 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:23:37.253845 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:37.253864 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:37.271647 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:23:37.294351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:23:37.323373 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:23:37.323389 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:23:37.346724 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:37.353285 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:23:37.367847 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 17 12:23:37.370883 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:23:37.401463 systemd-networkd[933]: lo: Link UP
Jan 17 12:23:37.401465 systemd-networkd[933]: lo: Gained carrier
Jan 17 12:23:37.403863 systemd-networkd[933]: Enumeration completed
Jan 17 12:23:37.403938 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:23:37.404680 systemd-networkd[933]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:23:37.449534 ignition[931]: Ignition 2.19.0
Jan 17 12:23:37.413907 systemd[1]: Reached target network.target - Network.
Jan 17 12:23:37.449539 ignition[931]: Stage: fetch-offline
Jan 17 12:23:37.431274 systemd-networkd[933]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:23:37.449559 ignition[931]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:37.451668 unknown[931]: fetched base config from "system"
Jan 17 12:23:37.449564 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:37.451672 unknown[931]: fetched user config from "system"
Jan 17 12:23:37.449619 ignition[931]: parsed url from cmdline: ""
Jan 17 12:23:37.452541 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:23:37.449621 ignition[931]: no config URL provided
Jan 17 12:23:37.458056 systemd-networkd[933]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:23:37.449623 ignition[931]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:23:37.466160 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 12:23:37.449646 ignition[931]: parsing config with SHA512: a189ccd0faf595830c434bbf87d0352769c649c898bd7c1236afc5bf91acdccb4aee2f79620228f4e8b99ed099ca8aa1ae210b498015bfba5bc397b5de30f532
Jan 17 12:23:37.478854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:23:37.451894 ignition[931]: fetch-offline: fetch-offline passed
Jan 17 12:23:37.451897 ignition[931]: POST message to Packet Timeline
Jan 17 12:23:37.451899 ignition[931]: POST Status error: resource requires networking
Jan 17 12:23:37.451934 ignition[931]: Ignition finished successfully
Jan 17 12:23:37.488251 ignition[944]: Ignition 2.19.0
Jan 17 12:23:37.488268 ignition[944]: Stage: kargs
Jan 17 12:23:37.488454 ignition[944]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:37.488463 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:37.690828 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Jan 17 12:23:37.682328 systemd-networkd[933]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:23:37.489218 ignition[944]: kargs: kargs passed
Jan 17 12:23:37.489222 ignition[944]: POST message to Packet Timeline
Jan 17 12:23:37.489233 ignition[944]: GET https://metadata.packet.net/metadata: attempt #1
Jan 17 12:23:37.489780 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40024->[::1]:53: read: connection refused
Jan 17 12:23:37.690157 ignition[944]: GET https://metadata.packet.net/metadata: attempt #2
Jan 17 12:23:37.690864 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55442->[::1]:53: read: connection refused
Jan 17 12:23:37.964756 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Jan 17 12:23:37.966165 systemd-networkd[933]: eno1: Link UP
Jan 17 12:23:37.966316 systemd-networkd[933]: eno2: Link UP
Jan 17 12:23:37.966440 systemd-networkd[933]: enp1s0f0np0: Link UP
Jan 17 12:23:37.966604 systemd-networkd[933]: enp1s0f0np0: Gained carrier
Jan 17 12:23:37.976881 systemd-networkd[933]: enp1s0f1np1: Link UP
Jan 17 12:23:38.009881 systemd-networkd[933]: enp1s0f0np0: DHCPv4 address 147.75.90.1/31, gateway 147.75.90.0 acquired from 145.40.83.140
Jan 17 12:23:38.091812 ignition[944]: GET https://metadata.packet.net/metadata: attempt #3
Jan 17 12:23:38.093033 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36433->[::1]:53: read: connection refused
Jan 17 12:23:38.690517 systemd-networkd[933]: enp1s0f1np1: Gained carrier
Jan 17 12:23:38.893374 ignition[944]: GET https://metadata.packet.net/metadata: attempt #4
Jan 17 12:23:38.894602 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59199->[::1]:53: read: connection refused
Jan 17 12:23:39.074325 systemd-networkd[933]: enp1s0f0np0: Gained IPv6LL
Jan 17 12:23:40.098326 systemd-networkd[933]: enp1s0f1np1: Gained IPv6LL
Jan 17 12:23:40.496076 ignition[944]: GET https://metadata.packet.net/metadata: attempt #5
Jan 17 12:23:40.497248 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52231->[::1]:53: read: connection refused
Jan 17 12:23:43.699866 ignition[944]: GET https://metadata.packet.net/metadata: attempt #6
Jan 17 12:23:45.125080 ignition[944]: GET result: OK
Jan 17 12:23:45.496557 ignition[944]: Ignition finished successfully
Jan 17 12:23:45.501286 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:23:45.527018 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:23:45.533012 ignition[964]: Ignition 2.19.0
Jan 17 12:23:45.533016 ignition[964]: Stage: disks
Jan 17 12:23:45.533117 ignition[964]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:45.533123 ignition[964]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:45.533605 ignition[964]: disks: disks passed
Jan 17 12:23:45.533607 ignition[964]: POST message to Packet Timeline
Jan 17 12:23:45.533615 ignition[964]: GET https://metadata.packet.net/metadata: attempt #1
Jan 17 12:23:46.591930 ignition[964]: GET result: OK
Jan 17 12:23:46.920777 ignition[964]: Ignition finished successfully
Jan 17 12:23:46.923069 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:23:46.940043 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:23:46.959003 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:23:46.981038 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:23:47.002133 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:23:47.022027 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:23:47.052000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:23:47.088013 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:23:47.099157 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:23:47.121016 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:23:47.217778 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:23:47.217755 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:23:47.227231 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:23:47.259946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:23:47.380990 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (993)
Jan 17 12:23:47.381003 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:47.381015 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:23:47.381022 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:23:47.381104 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:23:47.381111 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:23:47.288509 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:23:47.399245 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 12:23:47.411328 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jan 17 12:23:47.422001 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:23:47.422030 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:23:47.440156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:23:47.503986 coreos-metadata[1010]: Jan 17 12:23:47.494 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 17 12:23:47.477856 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:23:47.546849 coreos-metadata[1011]: Jan 17 12:23:47.494 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 17 12:23:47.517955 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:23:47.571913 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:23:47.582842 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:23:47.592834 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:23:47.604004 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:23:47.602112 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:23:47.630982 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:23:47.635602 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:23:47.676850 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:47.669581 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:23:47.685884 ignition[1113]: INFO : Ignition 2.19.0
Jan 17 12:23:47.685884 ignition[1113]: INFO : Stage: mount
Jan 17 12:23:47.685884 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:47.685884 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:47.685884 ignition[1113]: INFO : mount: mount passed
Jan 17 12:23:47.685884 ignition[1113]: INFO : POST message to Packet Timeline
Jan 17 12:23:47.685884 ignition[1113]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 17 12:23:47.687624 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:23:48.438075 ignition[1113]: INFO : GET result: OK
Jan 17 12:23:48.514841 coreos-metadata[1010]: Jan 17 12:23:48.514 INFO Fetch successful
Jan 17 12:23:48.552693 coreos-metadata[1010]: Jan 17 12:23:48.552 INFO wrote hostname ci-4081.3.0-a-4c6521d577 to /sysroot/etc/hostname
Jan 17 12:23:48.554198 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:23:48.887056 coreos-metadata[1011]: Jan 17 12:23:48.886 INFO Fetch successful
Jan 17 12:23:48.965062 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jan 17 12:23:48.965124 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jan 17 12:23:49.485592 ignition[1113]: INFO : Ignition finished successfully
Jan 17 12:23:49.488680 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:23:49.518949 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:23:49.529878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:23:49.581781 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1139)
Jan 17 12:23:49.581812 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:49.609771 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:23:49.626596 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:23:49.663120 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:23:49.663142 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:23:49.675742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:23:49.703293 ignition[1156]: INFO : Ignition 2.19.0
Jan 17 12:23:49.703293 ignition[1156]: INFO : Stage: files
Jan 17 12:23:49.716968 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:49.716968 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:49.716968 ignition[1156]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:23:49.716968 ignition[1156]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:23:49.716968 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 17 12:23:49.707596 unknown[1156]: wrote ssh authorized keys file for user: core
Jan 17 12:23:49.850798 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 12:23:49.873623 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:49.890943 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 17 12:23:50.352385 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 12:23:50.562965 ignition[1156]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:23:50.562965 ignition[1156]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:23:50.591962 ignition[1156]: INFO : files: files passed
Jan 17 12:23:50.591962 ignition[1156]: INFO : POST message to Packet Timeline
Jan 17 12:23:50.591962 ignition[1156]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 17 12:23:51.733102 ignition[1156]: INFO : GET result: OK
Jan 17 12:23:52.328618 ignition[1156]: INFO : Ignition finished successfully
Jan 17 12:23:52.331667 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:23:52.361289 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:23:52.372795 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:23:52.398680 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:23:52.398939 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:23:52.447335 initrd-setup-root-after-ignition[1195]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:23:52.447335 initrd-setup-root-after-ignition[1195]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:23:52.486001 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:23:52.451962 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:23:52.463016 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:23:52.511227 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:23:52.599871 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:23:52.600158 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:23:52.621402 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:23:52.641975 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:23:52.663220 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:23:52.677152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:23:52.748371 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:23:52.775155 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:23:52.803878 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:23:52.815255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:23:52.836453 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:23:52.855311 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:23:52.855712 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:23:52.894149 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:23:52.904342 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:23:52.923349 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:23:52.941345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:23:52.962345 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:23:52.983350 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:23:53.003331 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:23:53.024366 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:23:53.045354 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:23:53.065276 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:23:53.083229 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:23:53.083631 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:23:53.120200 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:23:53.130351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:23:53.151206 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:23:53.151655 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:23:53.175221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:23:53.175617 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:23:53.207303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:23:53.207780 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:23:53.228539 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:23:53.246206 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:23:53.246626 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:23:53.268443 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:23:53.286333 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:23:53.304302 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:23:53.304609 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:23:53.324361 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:23:53.324661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:23:53.347405 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:23:53.347830 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:23:53.367421 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:23:53.483953 ignition[1219]: INFO : Ignition 2.19.0
Jan 17 12:23:53.483953 ignition[1219]: INFO : Stage: umount
Jan 17 12:23:53.483953 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:53.483953 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jan 17 12:23:53.483953 ignition[1219]: INFO : umount: umount passed
Jan 17 12:23:53.483953 ignition[1219]: INFO : POST message to Packet Timeline
Jan 17 12:23:53.483953 ignition[1219]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jan 17 12:23:53.367820 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:23:53.385397 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 12:23:53.385814 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:23:53.417999 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:23:53.435973 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:23:53.436418 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:23:53.463956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:23:53.475801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:23:53.475874 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:23:53.496010 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:23:53.496090 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:23:53.546470 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:23:53.548334 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:23:53.548597 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:23:53.559749 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:23:53.560006 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:23:54.817408 ignition[1219]: INFO : GET result: OK
Jan 17 12:23:55.356264 ignition[1219]: INFO : Ignition finished successfully
Jan 17 12:23:55.359456 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:23:55.359769 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:23:55.375949 systemd[1]: Stopped target network.target - Network.
Jan 17 12:23:55.390951 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:23:55.391123 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:23:55.409094 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:23:55.409257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:23:55.427128 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:23:55.427281 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:23:55.446219 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:23:55.446378 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:23:55.465102 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:23:55.465267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:23:55.484496 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:23:55.493877 systemd-networkd[933]: enp1s0f0np0: DHCPv6 lease lost
Jan 17 12:23:55.502195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:23:55.505905 systemd-networkd[933]: enp1s0f1np1: DHCPv6 lease lost
Jan 17 12:23:55.520766 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:23:55.521046 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:23:55.540022 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:23:55.540369 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:23:55.561374 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:23:55.561492 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:23:55.597004 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:23:55.617897 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:23:55.617942 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:23:55.626074 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:23:55.626119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:23:55.657129 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:23:55.657270 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:23:55.675181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:23:55.675348 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:23:55.683587 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:23:55.719091 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:23:55.719167 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:23:55.742488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:23:55.742533 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:23:55.751100 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:23:55.751136 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:23:55.778979 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:23:55.779208 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:23:55.819935 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:23:55.820195 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:23:55.858888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:23:55.859139 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:23:55.903211 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:23:55.914133 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:23:56.133929 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:23:55.914288 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:23:55.953105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:23:55.953246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:55.976163 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:23:55.976482 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:23:56.015609 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:23:56.015963 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:23:56.035111 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:23:56.069942 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:23:56.074439 systemd[1]: Switching root.
Jan 17 12:23:56.229803 systemd-journald[268]: Journal stopped
Jan 17 12:23:58.815176 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:23:58.815190 kernel: SELinux: policy capability open_perms=1
Jan 17 12:23:58.815197 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:23:58.815204 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:23:58.815209 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:23:58.815214 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:23:58.815220 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:23:58.815226 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:23:58.815231 kernel: audit: type=1403 audit(1737116636.428:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:23:58.815238 systemd[1]: Successfully loaded SELinux policy in 155.481ms.
Jan 17 12:23:58.815246 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.933ms.
Jan 17 12:23:58.815253 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:23:58.815259 systemd[1]: Detected architecture x86-64.
Jan 17 12:23:58.815265 systemd[1]: Detected first boot.
Jan 17 12:23:58.815273 systemd[1]: Hostname set to <ci-4081.3.0-a-4c6521d577>.
Jan 17 12:23:58.815281 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:23:58.815287 zram_generator::config[1273]: No configuration found.
Jan 17 12:23:58.815294 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:23:58.815300 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 12:23:58.815307 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 12:23:58.815313 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:23:58.815320 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:23:58.815327 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:23:58.815333 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:23:58.815340 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:23:58.815347 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:23:58.815353 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:23:58.815360 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:23:58.815366 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:23:58.815374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:58.815381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:58.815388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:23:58.815394 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:23:58.815400 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:23:58.815407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:23:58.815414 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jan 17 12:23:58.815420 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:23:58.815428 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 17 12:23:58.815434 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:23:58.815441 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:23:58.815449 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:23:58.815456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:23:58.815462 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:23:58.815469 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:23:58.815477 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:23:58.815483 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:23:58.815490 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:23:58.815497 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:58.815503 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:58.815510 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:58.815518 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:23:58.815525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:23:58.815532 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:23:58.815539 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:23:58.815545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:58.815552 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:23:58.815560 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:23:58.815568 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 17 12:23:58.815575 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:23:58.815582 systemd[1]: Reached target machines.target - Containers. Jan 17 12:23:58.815589 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:23:58.815596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:58.815602 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:23:58.815609 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:23:58.815616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:58.815623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:23:58.815631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:58.815638 kernel: ACPI: bus type drm_connector registered Jan 17 12:23:58.815644 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:23:58.815650 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:58.815657 kernel: loop: module loaded Jan 17 12:23:58.815663 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:23:58.815670 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:23:58.815677 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:23:58.815684 kernel: fuse: init (API version 7.39) Jan 17 12:23:58.815691 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:23:58.815698 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 17 12:23:58.815704 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:23:58.815723 systemd-journald[1377]: Collecting audit messages is disabled. Jan 17 12:23:58.815739 systemd-journald[1377]: Journal started Jan 17 12:23:58.815753 systemd-journald[1377]: Runtime Journal (/run/log/journal/d59702600c28447ab382b3af16d37bc2) is 8.0M, max 639.9M, 631.9M free. Jan 17 12:23:56.941879 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:23:56.954887 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:23:56.955197 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:23:58.843721 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:23:58.888881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:23:58.928774 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:23:58.961761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:23:58.995258 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:23:58.995288 systemd[1]: Stopped verity-setup.service. Jan 17 12:23:59.064696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:59.064745 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:23:59.088401 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:23:59.099019 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:23:59.108857 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:23:59.118983 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:23:59.128951 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 17 12:23:59.138959 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:23:59.149076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:23:59.160234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:59.171360 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:23:59.171606 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:23:59.183653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:59.184076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:59.195635 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:23:59.196055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:23:59.206637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:59.207046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:59.218667 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:23:59.219081 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:23:59.229631 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:59.230099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:59.240645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:59.251743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:23:59.263588 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:23:59.275778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:59.311314 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 17 12:23:59.341039 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:23:59.354769 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:23:59.365980 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:23:59.365998 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:23:59.366553 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:23:59.389558 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:23:59.408395 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:23:59.419026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:59.420834 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:23:59.432659 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:23:59.443829 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:23:59.444471 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:23:59.448714 systemd-journald[1377]: Time spent on flushing to /var/log/journal/d59702600c28447ab382b3af16d37bc2 is 13.188ms for 1369 entries. Jan 17 12:23:59.448714 systemd-journald[1377]: System Journal (/var/log/journal/d59702600c28447ab382b3af16d37bc2) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:23:59.494503 systemd-journald[1377]: Received client request to flush runtime journal. Jan 17 12:23:59.462852 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:23:59.463502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:23:59.472510 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:23:59.497354 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:23:59.514490 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:23:59.522722 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 12:23:59.523274 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:23:59.543960 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:23:59.562033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:23:59.568721 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:23:59.581024 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:23:59.591996 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:23:59.607067 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:59.620722 kernel: loop1: detected capacity change from 0 to 205544 Jan 17 12:23:59.629902 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:23:59.643640 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:23:59.660937 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:23:59.672482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:23:59.689183 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:23:59.690314 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 17 12:23:59.700723 kernel: loop2: detected capacity change from 0 to 8 Jan 17 12:23:59.701237 udevadm[1413]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:23:59.743526 systemd-tmpfiles[1426]: ACLs are not supported, ignoring. Jan 17 12:23:59.743539 systemd-tmpfiles[1426]: ACLs are not supported, ignoring. Jan 17 12:23:59.745844 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:23:59.773726 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:23:59.774003 ldconfig[1403]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:23:59.775518 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:23:59.844810 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 12:23:59.872071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:23:59.880775 kernel: loop5: detected capacity change from 0 to 205544 Jan 17 12:23:59.913775 kernel: loop6: detected capacity change from 0 to 8 Jan 17 12:23:59.931776 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 12:23:59.932893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:59.943524 (sd-merge)[1433]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 17 12:23:59.943814 (sd-merge)[1433]: Merged extensions into '/usr'. Jan 17 12:23:59.945386 systemd-udevd[1435]: Using default interface naming scheme 'v255'. Jan 17 12:23:59.945966 systemd[1]: Reloading requested from client PID 1408 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:23:59.945975 systemd[1]: Reloading... Jan 17 12:23:59.984728 zram_generator::config[1472]: No configuration found. 
Jan 17 12:23:59.984788 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1445) Jan 17 12:24:00.015727 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 17 12:24:00.015789 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:24:00.015813 kernel: ACPI: button: Sleep Button [SLPB] Jan 17 12:24:00.062735 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 12:24:00.110729 kernel: IPMI message handler: version 39.2 Jan 17 12:24:00.110785 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:24:00.136727 kernel: ipmi device interface Jan 17 12:24:00.136767 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 17 12:24:00.185571 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 17 12:24:00.185664 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 17 12:24:00.189873 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 17 12:24:00.234678 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jan 17 12:24:00.145062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:24:00.199237 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jan 17 12:24:00.199335 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 17 12:24:00.251146 systemd[1]: Reloading finished in 304 ms. 
Jan 17 12:24:00.253730 kernel: iTCO_vendor_support: vendor-support=0 Jan 17 12:24:00.280692 kernel: ipmi_si: IPMI System Interface driver Jan 17 12:24:00.280746 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 17 12:24:00.325800 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 17 12:24:00.325817 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 17 12:24:00.325830 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 17 12:24:00.395275 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 17 12:24:00.395360 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 17 12:24:00.395436 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 17 12:24:00.395446 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 17 12:24:00.445759 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jan 17 12:24:00.485104 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 17 12:24:00.485218 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jan 17 12:24:00.485290 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jan 17 12:24:00.457437 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:24:00.517723 kernel: intel_rapl_common: Found RAPL domain package Jan 17 12:24:00.533944 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 17 12:24:00.534726 kernel: intel_rapl_common: Found RAPL domain core Jan 17 12:24:00.534834 kernel: intel_rapl_common: Found RAPL domain dram Jan 17 12:24:00.581745 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 17 12:24:00.598726 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 17 12:24:00.612842 systemd[1]: Starting ensure-sysext.service... Jan 17 12:24:00.621277 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:24:00.632565 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:24:00.642271 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:24:00.642922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:24:00.643152 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:24:00.644842 systemd[1]: Reloading requested from client PID 1612 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:24:00.644849 systemd[1]: Reloading... Jan 17 12:24:00.681760 zram_generator::config[1643]: No configuration found. Jan 17 12:24:00.703831 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:24:00.704057 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:24:00.704590 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:24:00.704779 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Jan 17 12:24:00.704817 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Jan 17 12:24:00.706408 systemd-tmpfiles[1616]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 17 12:24:00.706412 systemd-tmpfiles[1616]: Skipping /boot Jan 17 12:24:00.710601 systemd-tmpfiles[1616]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:24:00.710605 systemd-tmpfiles[1616]: Skipping /boot Jan 17 12:24:00.734681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:24:00.787764 systemd[1]: Reloading finished in 142 ms. Jan 17 12:24:00.811984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:24:00.823946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:24:00.835897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:24:00.859912 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:24:00.870673 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:24:00.877170 augenrules[1725]: No rules Jan 17 12:24:00.882536 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:24:00.895421 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:24:00.901669 lvm[1730]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:24:00.917359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:24:00.927486 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:24:00.939693 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:24:00.949412 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 12:24:00.959042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:24:00.973827 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:24:00.985032 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:24:00.996045 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:24:01.006492 systemd-networkd[1614]: lo: Link UP Jan 17 12:24:01.006495 systemd-networkd[1614]: lo: Gained carrier Jan 17 12:24:01.008988 systemd-networkd[1614]: bond0: netdev ready Jan 17 12:24:01.009934 systemd-networkd[1614]: Enumeration completed Jan 17 12:24:01.010432 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:24:01.020145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:24:01.021103 systemd-networkd[1614]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:50.network. Jan 17 12:24:01.029909 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:24:01.030054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:24:01.030882 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:24:01.042417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:24:01.044317 lvm[1749]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:24:01.052556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:24:01.053310 systemd-resolved[1732]: Positive Trust Anchors: Jan 17 12:24:01.053318 systemd-resolved[1732]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:24:01.053342 systemd-resolved[1732]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:24:01.056933 systemd-resolved[1732]: Using system hostname 'ci-4081.3.0-a-4c6521d577'. Jan 17 12:24:01.064504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:24:01.073869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:24:01.074706 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:24:01.086534 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:24:01.095821 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:24:01.095946 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:24:01.097463 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:24:01.109162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:24:01.109236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 12:24:01.120128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:24:01.120197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:24:01.131111 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:24:01.131182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:24:01.141082 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:24:01.151849 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:24:01.165206 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:24:01.165327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:24:01.177894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:24:01.188355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:24:01.199327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:24:01.208843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:24:01.208948 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:24:01.208997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:24:01.209646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:24:01.209724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 12:24:01.221021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:24:01.221089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:24:01.231988 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:24:01.232056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:24:01.244033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:24:01.244157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:24:01.253926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:24:01.264423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:24:01.282950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:24:01.294377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:24:01.303873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:24:01.303986 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:24:01.304086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:24:01.304709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:24:01.304823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:24:01.317049 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:24:01.317118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 17 12:24:01.326995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:24:01.327063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:24:01.338001 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:24:01.338068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:24:01.348685 systemd[1]: Finished ensure-sysext.service. Jan 17 12:24:01.358234 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:24:01.358267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:24:01.366887 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:24:01.408448 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:24:01.419827 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:24:01.551781 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 17 12:24:01.574159 systemd-networkd[1614]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:51.network. Jan 17 12:24:01.574797 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jan 17 12:24:01.792786 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 17 12:24:01.813948 systemd-networkd[1614]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 17 12:24:01.814734 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jan 17 12:24:01.815617 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 17 12:24:01.815942 systemd-networkd[1614]: enp1s0f0np0: Link UP
Jan 17 12:24:01.816329 systemd-networkd[1614]: enp1s0f0np0: Gained carrier
Jan 17 12:24:01.835734 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Jan 17 12:24:01.844494 systemd-networkd[1614]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f6:50.network.
Jan 17 12:24:01.844830 systemd-networkd[1614]: enp1s0f1np1: Link UP
Jan 17 12:24:01.844951 systemd[1]: Reached target network.target - Network.
Jan 17 12:24:01.845182 systemd-networkd[1614]: enp1s0f1np1: Gained carrier
Jan 17 12:24:01.852863 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:24:01.855023 systemd-networkd[1614]: bond0: Link UP
Jan 17 12:24:01.855382 systemd-networkd[1614]: bond0: Gained carrier
Jan 17 12:24:01.855665 systemd-timesyncd[1777]: Network configuration changed, trying to establish connection.
Jan 17 12:24:01.863925 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:24:01.874135 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:24:01.885036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:24:01.896291 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:24:01.906183 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 12:24:01.916962 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:24:01.935919 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:24:01.935999 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:24:01.952804 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Jan 17 12:24:01.952895 kernel: bond0: active interface up!
Jan 17 12:24:01.980871 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:24:01.989926 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:24:02.002844 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:24:02.017325 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:24:02.028507 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:24:02.038191 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:24:02.047943 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:24:02.056039 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:24:02.056115 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:24:02.089780 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Jan 17 12:24:02.089942 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:24:02.103169 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 12:24:02.115454 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:24:02.126270 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:24:02.137525 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:24:02.140221 jq[1787]: false
Jan 17 12:24:02.145189 dbus-daemon[1784]: [system] SELinux support is enabled
Jan 17 12:24:02.145852 coreos-metadata[1783]: Jan 17 12:24:02.145 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 17 12:24:02.146865 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:24:02.147498 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:24:02.154661 extend-filesystems[1788]: Found loop4
Jan 17 12:24:02.154661 extend-filesystems[1788]: Found loop5
Jan 17 12:24:02.212909 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Jan 17 12:24:02.212928 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1445)
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found loop6
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found loop7
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda1
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda2
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda3
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found usr
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda4
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda6
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda7
Jan 17 12:24:02.212938 extend-filesystems[1788]: Found sda9
Jan 17 12:24:02.212938 extend-filesystems[1788]: Checking size of /dev/sda9
Jan 17 12:24:02.212938 extend-filesystems[1788]: Resized partition /dev/sda9
Jan 17 12:24:02.157515 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 12:24:02.355912 extend-filesystems[1799]: resize2fs 1.47.1 (20-May-2024)
Jan 17 12:24:02.203322 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:24:02.228535 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:24:02.236170 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:24:02.283834 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Jan 17 12:24:02.366292 sshd_keygen[1812]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 12:24:02.305284 systemd-logind[1809]: Watching system buttons on /dev/input/event3 (Power Button)
Jan 17 12:24:02.366479 update_engine[1814]: I20250117 12:24:02.338335 1814 main.cc:92] Flatcar Update Engine starting
Jan 17 12:24:02.366479 update_engine[1814]: I20250117 12:24:02.339124 1814 update_check_scheduler.cc:74] Next update check in 3m49s
Jan 17 12:24:02.305294 systemd-logind[1809]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jan 17 12:24:02.305307 systemd-logind[1809]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Jan 17 12:24:02.305426 systemd-logind[1809]: New seat seat0.
Jan 17 12:24:02.316126 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:24:02.326840 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:24:02.365861 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:24:02.367405 jq[1820]: true
Jan 17 12:24:02.383060 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:24:02.393992 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:24:02.411975 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:24:02.412094 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:24:02.412280 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:24:02.412392 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:24:02.422261 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:24:02.422369 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:24:02.432959 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 12:24:02.445989 (ntainerd)[1827]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:24:02.447102 jq[1826]: true
Jan 17 12:24:02.449154 dbus-daemon[1784]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 17 12:24:02.450633 tar[1824]: linux-amd64/helm
Jan 17 12:24:02.455421 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Jan 17 12:24:02.455540 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Jan 17 12:24:02.458812 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 12:24:02.469170 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 12:24:02.476790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:24:02.476891 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:24:02.487829 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:24:02.487908 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:24:02.499606 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:24:02.504052 bash[1855]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:24:02.510702 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:24:02.522092 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 12:24:02.522184 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 12:24:02.533452 locksmithd[1862]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:24:02.551006 systemd[1]: Starting sshkeys.service...
Jan 17 12:24:02.558558 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 12:24:02.570627 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 12:24:02.582617 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 12:24:02.594112 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 12:24:02.607492 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 12:24:02.616923 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Jan 17 12:24:02.617084 containerd[1827]: time="2025-01-17T12:24:02.617002070Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:24:02.617911 coreos-metadata[1882]: Jan 17 12:24:02.617 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jan 17 12:24:02.627049 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 12:24:02.630419 containerd[1827]: time="2025-01-17T12:24:02.630398753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631231 containerd[1827]: time="2025-01-17T12:24:02.631211658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631231 containerd[1827]: time="2025-01-17T12:24:02.631228753Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 12:24:02.631271 containerd[1827]: time="2025-01-17T12:24:02.631238398Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:24:02.631332 containerd[1827]: time="2025-01-17T12:24:02.631323210Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:24:02.631358 containerd[1827]: time="2025-01-17T12:24:02.631333621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631375 containerd[1827]: time="2025-01-17T12:24:02.631365950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631394 containerd[1827]: time="2025-01-17T12:24:02.631374043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631474 containerd[1827]: time="2025-01-17T12:24:02.631463804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631491 containerd[1827]: time="2025-01-17T12:24:02.631473389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631491 containerd[1827]: time="2025-01-17T12:24:02.631481846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631491 containerd[1827]: time="2025-01-17T12:24:02.631487691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631534 containerd[1827]: time="2025-01-17T12:24:02.631527569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631650 containerd[1827]: time="2025-01-17T12:24:02.631642374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631704 containerd[1827]: time="2025-01-17T12:24:02.631695967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:24:02.631726 containerd[1827]: time="2025-01-17T12:24:02.631704709Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:24:02.631761 containerd[1827]: time="2025-01-17T12:24:02.631753705Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..."
type=io.containerd.metadata.v1
Jan 17 12:24:02.631787 containerd[1827]: time="2025-01-17T12:24:02.631780682Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:24:02.642566 containerd[1827]: time="2025-01-17T12:24:02.642525750Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:24:02.642566 containerd[1827]: time="2025-01-17T12:24:02.642547663Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:24:02.642566 containerd[1827]: time="2025-01-17T12:24:02.642556906Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:24:02.642566 containerd[1827]: time="2025-01-17T12:24:02.642565816Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 12:24:02.642634 containerd[1827]: time="2025-01-17T12:24:02.642581306Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 12:24:02.642658 containerd[1827]: time="2025-01-17T12:24:02.642651714Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 12:24:02.642826 containerd[1827]: time="2025-01-17T12:24:02.642786667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 12:24:02.642854 containerd[1827]: time="2025-01-17T12:24:02.642845693Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 12:24:02.642870 containerd[1827]: time="2025-01-17T12:24:02.642854808Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 12:24:02.642870 containerd[1827]: time="2025-01-17T12:24:02.642861891Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..."
type=io.containerd.sandbox.controller.v1
Jan 17 12:24:02.642903 containerd[1827]: time="2025-01-17T12:24:02.642869128Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642903 containerd[1827]: time="2025-01-17T12:24:02.642876655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642903 containerd[1827]: time="2025-01-17T12:24:02.642883446Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642903 containerd[1827]: time="2025-01-17T12:24:02.642890562Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642903 containerd[1827]: time="2025-01-17T12:24:02.642897965Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642904990Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642912777Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642919065Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642930144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642937875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..."
type=io.containerd.grpc.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642944766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642951730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642958466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.642969 containerd[1827]: time="2025-01-17T12:24:02.642965615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.642972008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.642978668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.642985516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.642994599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643001347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643008196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643014546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..."
type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643022477Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643035474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643042527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643053283Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 12:24:02.643088 containerd[1827]: time="2025-01-17T12:24:02.643080240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 12:24:02.643240 containerd[1827]: time="2025-01-17T12:24:02.643093762Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 12:24:02.643240 containerd[1827]: time="2025-01-17T12:24:02.643100887Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 12:24:02.643240 containerd[1827]: time="2025-01-17T12:24:02.643107770Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 12:24:02.643240 containerd[1827]: time="2025-01-17T12:24:02.643113527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643240 containerd[1827]: time="2025-01-17T12:24:02.643119986Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..."
type=io.containerd.nri.v1
Jan 17 12:24:02.643240 containerd[1827]: time="2025-01-17T12:24:02.643128374Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 12:24:02.643240 containerd[1827]: time="2025-01-17T12:24:02.643134228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 12:24:02.643335 containerd[1827]: time="2025-01-17T12:24:02.643299247Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 12:24:02.643335 containerd[1827]: time="2025-01-17T12:24:02.643332320Z" level=info msg="Connect containerd service"
Jan 17 12:24:02.643427 containerd[1827]: time="2025-01-17T12:24:02.643350110Z" level=info msg="using legacy CRI server"
Jan 17 12:24:02.643427 containerd[1827]: time="2025-01-17T12:24:02.643354116Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 12:24:02.643427 containerd[1827]: time="2025-01-17T12:24:02.643404629Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 12:24:02.644059 containerd[1827]: time="2025-01-17T12:24:02.644044146Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:24:02.644158 containerd[1827]: time="2025-01-17T12:24:02.644138526Z" level=info msg="Start subscribing containerd event"
Jan 17 12:24:02.644194 containerd[1827]: time="2025-01-17T12:24:02.644169789Z" level=info msg="Start recovering state"
Jan 17 12:24:02.644219 containerd[1827]: time="2025-01-17T12:24:02.644203425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 12:24:02.644219 containerd[1827]: time="2025-01-17T12:24:02.644207234Z" level=info msg="Start event monitor"
Jan 17 12:24:02.644219 containerd[1827]: time="2025-01-17T12:24:02.644217630Z" level=info msg="Start snapshots syncer"
Jan 17 12:24:02.644284 containerd[1827]: time="2025-01-17T12:24:02.644222894Z" level=info msg="Start cni network conf syncer for default"
Jan 17 12:24:02.644284 containerd[1827]: time="2025-01-17T12:24:02.644226793Z" level=info msg="Start streaming server"
Jan 17 12:24:02.644284 containerd[1827]: time="2025-01-17T12:24:02.644231435Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 12:24:02.644284 containerd[1827]: time="2025-01-17T12:24:02.644272059Z" level=info msg="containerd successfully booted in 0.027842s"
Jan 17 12:24:02.644301 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 12:24:02.691726 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Jan 17 12:24:02.713984 extend-filesystems[1799]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 17 12:24:02.713984 extend-filesystems[1799]: old_desc_blocks = 1, new_desc_blocks = 56
Jan 17 12:24:02.713984 extend-filesystems[1799]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Jan 17 12:24:02.755755 extend-filesystems[1788]: Resized filesystem in /dev/sda9
Jan 17 12:24:02.755755 extend-filesystems[1788]: Found sdb
Jan 17 12:24:02.764800 tar[1824]: linux-amd64/LICENSE
Jan 17 12:24:02.764800 tar[1824]: linux-amd64/README.md
Jan 17 12:24:02.714864 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:24:02.714967 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 12:24:02.784778 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 12:24:03.329804 systemd-networkd[1614]: bond0: Gained IPv6LL
Jan 17 12:24:03.331254 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 12:24:03.343482 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 12:24:03.369975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:24:03.380478 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 12:24:03.398490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 12:24:04.067498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:24:04.080276 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:24:04.538257 kubelet[1925]: E0117 12:24:04.538171 1925 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:24:04.539319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:24:04.539397 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:24:06.508443 systemd-resolved[1732]: Clock change detected. Flushing caches.
Jan 17 12:24:06.508650 systemd-timesyncd[1777]: Contacted time server 208.113.130.146:123 (0.flatcar.pool.ntp.org).
Jan 17 12:24:06.508790 systemd-timesyncd[1777]: Initial clock synchronization to Fri 2025-01-17 12:24:06.508325 UTC.
Jan 17 12:24:06.673333 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 12:24:06.689284 systemd[1]: Started sshd@0-147.75.90.1:22-147.75.109.163:56472.service - OpenSSH per-connection server daemon (147.75.109.163:56472).
Jan 17 12:24:06.747994 sshd[1945]: Accepted publickey for core from 147.75.109.163 port 56472 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:06.748950 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:06.754489 systemd-logind[1809]: New session 1 of user core.
Jan 17 12:24:06.755329 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 12:24:06.782250 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 12:24:06.796714 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 12:24:06.825318 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 12:24:06.835954 (systemd)[1949]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 12:24:06.908438 systemd[1949]: Queued start job for default target default.target.
Jan 17 12:24:06.917576 systemd[1949]: Created slice app.slice - User Application Slice.
Jan 17 12:24:06.917590 systemd[1949]: Reached target paths.target - Paths.
Jan 17 12:24:06.917598 systemd[1949]: Reached target timers.target - Timers.
Jan 17 12:24:06.918235 systemd[1949]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 12:24:06.923766 systemd[1949]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 12:24:06.923794 systemd[1949]: Reached target sockets.target - Sockets.
Jan 17 12:24:06.923804 systemd[1949]: Reached target basic.target - Basic System.
Jan 17 12:24:06.923825 systemd[1949]: Reached target default.target - Main User Target.
Jan 17 12:24:06.923840 systemd[1949]: Startup finished in 83ms.
Jan 17 12:24:06.923950 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 12:24:06.933893 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 12:24:07.005512 systemd[1]: Started sshd@1-147.75.90.1:22-147.75.109.163:56478.service - OpenSSH per-connection server daemon (147.75.109.163:56478).
Jan 17 12:24:07.044253 sshd[1960]: Accepted publickey for core from 147.75.109.163 port 56478 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:07.044884 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:07.047255 systemd-logind[1809]: New session 2 of user core.
Jan 17 12:24:07.069166 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 12:24:07.077008 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Jan 17 12:24:07.077171 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Jan 17 12:24:07.151970 sshd[1960]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:07.168612 systemd[1]: sshd@1-147.75.90.1:22-147.75.109.163:56478.service: Deactivated successfully.
Jan 17 12:24:07.169356 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 12:24:07.169966 systemd-logind[1809]: Session 2 logged out. Waiting for processes to exit.
Jan 17 12:24:07.170627 systemd[1]: Started sshd@2-147.75.90.1:22-147.75.109.163:56484.service - OpenSSH per-connection server daemon (147.75.109.163:56484).
Jan 17 12:24:07.183311 systemd-logind[1809]: Removed session 2.
Jan 17 12:24:07.211393 sshd[1969]: Accepted publickey for core from 147.75.109.163 port 56484 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:07.212469 sshd[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:07.216080 systemd-logind[1809]: New session 3 of user core.
Jan 17 12:24:07.239579 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 12:24:07.302772 sshd[1969]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:07.304053 systemd[1]: sshd@2-147.75.90.1:22-147.75.109.163:56484.service: Deactivated successfully.
Jan 17 12:24:07.304861 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 12:24:07.305547 systemd-logind[1809]: Session 3 logged out. Waiting for processes to exit.
Jan 17 12:24:07.305974 systemd-logind[1809]: Removed session 3.
Jan 17 12:24:07.667519 coreos-metadata[1783]: Jan 17 12:24:07.667 INFO Fetch successful
Jan 17 12:24:07.690402 coreos-metadata[1882]: Jan 17 12:24:07.690 INFO Fetch successful
Jan 17 12:24:07.705454 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:24:07.716297 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Jan 17 12:24:07.720291 unknown[1882]: wrote ssh authorized keys file for user: core
Jan 17 12:24:07.738827 update-ssh-keys[1981]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:24:07.739219 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 12:24:07.750888 systemd[1]: Finished sshkeys.service.
Jan 17 12:24:08.280350 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Jan 17 12:24:08.295286 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 12:24:08.305679 systemd[1]: Startup finished in 2.671s (kernel) + 25.420s (initrd) + 10.671s (userspace) = 38.763s.
Jan 17 12:24:08.323274 login[1893]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 12:24:08.325869 systemd-logind[1809]: New session 4 of user core.
Jan 17 12:24:08.326606 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 12:24:08.341513 login[1884]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 12:24:08.344549 systemd-logind[1809]: New session 5 of user core.
Jan 17 12:24:08.345074 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 12:24:13.692246 systemd[1]: Started sshd@3-147.75.90.1:22-218.92.0.158:42531.service - OpenSSH per-connection server daemon (218.92.0.158:42531).
Jan 17 12:24:14.950126 sshd[2013]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Jan 17 12:24:16.116450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:24:16.134257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:24:16.340527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:24:16.344179 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:24:16.379443 kubelet[2022]: E0117 12:24:16.379311    2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:24:16.382915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:24:16.383072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:24:17.089325 sshd[2011]: PAM: Permission denied for root from 218.92.0.158
Jan 17 12:24:17.330319 systemd[1]: Started sshd@4-147.75.90.1:22-147.75.109.163:49240.service - OpenSSH per-connection server daemon (147.75.109.163:49240).
Jan 17 12:24:17.359092 sshd[2043]: Accepted publickey for core from 147.75.109.163 port 49240 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:17.360137 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:17.363879 systemd-logind[1809]: New session 6 of user core.
Jan 17 12:24:17.374337 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 12:24:17.423869 sshd[2041]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Jan 17 12:24:17.429196 sshd[2043]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:17.446875 systemd[1]: sshd@4-147.75.90.1:22-147.75.109.163:49240.service: Deactivated successfully.
Jan 17 12:24:17.447860 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 12:24:17.448539 systemd-logind[1809]: Session 6 logged out. Waiting for processes to exit.
Jan 17 12:24:17.449057 systemd[1]: Started sshd@5-147.75.90.1:22-147.75.109.163:49248.service - OpenSSH per-connection server daemon (147.75.109.163:49248).
Jan 17 12:24:17.449500 systemd-logind[1809]: Removed session 6.
Jan 17 12:24:17.479626 sshd[2050]: Accepted publickey for core from 147.75.109.163 port 49248 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:17.480453 sshd[2050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:17.483729 systemd-logind[1809]: New session 7 of user core.
Jan 17 12:24:17.493250 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 12:24:17.544811 sshd[2050]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:17.553652 systemd[1]: sshd@5-147.75.90.1:22-147.75.109.163:49248.service: Deactivated successfully.
Jan 17 12:24:17.554394 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 12:24:17.555081 systemd-logind[1809]: Session 7 logged out. Waiting for processes to exit.
Jan 17 12:24:17.555701 systemd[1]: Started sshd@6-147.75.90.1:22-147.75.109.163:49262.service - OpenSSH per-connection server daemon (147.75.109.163:49262).
Jan 17 12:24:17.556171 systemd-logind[1809]: Removed session 7.
Jan 17 12:24:17.586602 sshd[2058]: Accepted publickey for core from 147.75.109.163 port 49262 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:17.587467 sshd[2058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:17.590757 systemd-logind[1809]: New session 8 of user core.
Jan 17 12:24:17.599300 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:24:17.662789 sshd[2058]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:17.679767 systemd[1]: sshd@6-147.75.90.1:22-147.75.109.163:49262.service: Deactivated successfully.
Jan 17 12:24:17.683310 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:24:17.686811 systemd-logind[1809]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:24:17.707821 systemd[1]: Started sshd@7-147.75.90.1:22-147.75.109.163:49270.service - OpenSSH per-connection server daemon (147.75.109.163:49270).
Jan 17 12:24:17.710229 systemd-logind[1809]: Removed session 8.
Jan 17 12:24:17.765746 sshd[2065]: Accepted publickey for core from 147.75.109.163 port 49270 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:17.767488 sshd[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:17.773685 systemd-logind[1809]: New session 9 of user core.
Jan 17 12:24:17.786363 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:24:17.855593 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 12:24:17.855744 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:24:17.867605 sudo[2068]: pam_unix(sudo:session): session closed for user root
Jan 17 12:24:17.868642 sshd[2065]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:17.880938 systemd[1]: sshd@7-147.75.90.1:22-147.75.109.163:49270.service: Deactivated successfully.
Jan 17 12:24:17.881889 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:24:17.882855 systemd-logind[1809]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:24:17.883780 systemd[1]: Started sshd@8-147.75.90.1:22-147.75.109.163:49278.service - OpenSSH per-connection server daemon (147.75.109.163:49278).
Jan 17 12:24:17.884465 systemd-logind[1809]: Removed session 9.
Jan 17 12:24:17.921180 sshd[2073]: Accepted publickey for core from 147.75.109.163 port 49278 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:17.922411 sshd[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:17.926687 systemd-logind[1809]: New session 10 of user core.
Jan 17 12:24:17.947584 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:24:18.008346 sudo[2077]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 12:24:18.008498 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:24:18.010590 sudo[2077]: pam_unix(sudo:session): session closed for user root
Jan 17 12:24:18.013237 sudo[2076]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 12:24:18.013388 sudo[2076]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:24:18.030347 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 12:24:18.031572 auditctl[2080]: No rules
Jan 17 12:24:18.031802 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 12:24:18.031929 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 12:24:18.033589 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:24:18.049946 augenrules[2098]: No rules
Jan 17 12:24:18.050303 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:24:18.050870 sudo[2076]: pam_unix(sudo:session): session closed for user root
Jan 17 12:24:18.051808 sshd[2073]: pam_unix(sshd:session): session closed for user core
Jan 17 12:24:18.053800 systemd[1]: sshd@8-147.75.90.1:22-147.75.109.163:49278.service: Deactivated successfully.
Jan 17 12:24:18.054607 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:24:18.054971 systemd-logind[1809]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:24:18.055983 systemd[1]: Started sshd@9-147.75.90.1:22-147.75.109.163:49284.service - OpenSSH per-connection server daemon (147.75.109.163:49284).
Jan 17 12:24:18.056593 systemd-logind[1809]: Removed session 10.
Jan 17 12:24:18.090452 sshd[2106]: Accepted publickey for core from 147.75.109.163 port 49284 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:24:18.091898 sshd[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:24:18.097751 systemd-logind[1809]: New session 11 of user core.
Jan 17 12:24:18.118793 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:24:18.188420 sudo[2109]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 12:24:18.189363 sudo[2109]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:24:18.551328 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 12:24:18.551400 (dockerd)[2134]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 12:24:18.798463 dockerd[2134]: time="2025-01-17T12:24:18.798395392Z" level=info msg="Starting up"
Jan 17 12:24:18.964778 dockerd[2134]: time="2025-01-17T12:24:18.964724986Z" level=info msg="Loading containers: start."
Jan 17 12:24:19.048087 kernel: Initializing XFRM netlink socket
Jan 17 12:24:19.116922 systemd-networkd[1614]: docker0: Link UP
Jan 17 12:24:19.148441 dockerd[2134]: time="2025-01-17T12:24:19.148383492Z" level=info msg="Loading containers: done."
Jan 17 12:24:19.157800 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck874471612-merged.mount: Deactivated successfully.
Jan 17 12:24:19.157921 dockerd[2134]: time="2025-01-17T12:24:19.157849752Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 12:24:19.157921 dockerd[2134]: time="2025-01-17T12:24:19.157895466Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 12:24:19.157970 dockerd[2134]: time="2025-01-17T12:24:19.157944929Z" level=info msg="Daemon has completed initialization"
Jan 17 12:24:19.173648 dockerd[2134]: time="2025-01-17T12:24:19.173585860Z" level=info msg="API listen on /run/docker.sock"
Jan 17 12:24:19.173730 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 12:24:19.306217 sshd[2011]: PAM: Permission denied for root from 218.92.0.158
Jan 17 12:24:19.638392 sshd[2294]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Jan 17 12:24:20.436649 containerd[1827]: time="2025-01-17T12:24:20.436625573Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 17 12:24:21.042812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345478443.mount: Deactivated successfully.
Jan 17 12:24:21.753634 containerd[1827]: time="2025-01-17T12:24:21.753601152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:21.753906 containerd[1827]: time="2025-01-17T12:24:21.753770109Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721"
Jan 17 12:24:21.754326 containerd[1827]: time="2025-01-17T12:24:21.754311435Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:21.755800 containerd[1827]: time="2025-01-17T12:24:21.755785964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:21.756412 containerd[1827]: time="2025-01-17T12:24:21.756396383Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.319750014s"
Jan 17 12:24:21.756456 containerd[1827]: time="2025-01-17T12:24:21.756415644Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\""
Jan 17 12:24:21.757529 containerd[1827]: time="2025-01-17T12:24:21.757517138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 17 12:24:22.464083 sshd[2011]: PAM: Permission denied for root from 218.92.0.158
Jan 17 12:24:22.629667 sshd[2011]: Received disconnect from 218.92.0.158 port 42531:11:  [preauth]
Jan 17 12:24:22.629667 sshd[2011]: Disconnected from authenticating user root 218.92.0.158 port 42531 [preauth]
Jan 17 12:24:22.631148 systemd[1]: sshd@3-147.75.90.1:22-218.92.0.158:42531.service: Deactivated successfully.
Jan 17 12:24:22.756540 containerd[1827]: time="2025-01-17T12:24:22.756449667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:22.756750 containerd[1827]: time="2025-01-17T12:24:22.756623382Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143"
Jan 17 12:24:22.757040 containerd[1827]: time="2025-01-17T12:24:22.757005677Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:22.758609 containerd[1827]: time="2025-01-17T12:24:22.758568970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:22.759207 containerd[1827]: time="2025-01-17T12:24:22.759166518Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.001634235s"
Jan 17 12:24:22.759207 containerd[1827]: time="2025-01-17T12:24:22.759182004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\""
Jan 17 12:24:22.759412 containerd[1827]: time="2025-01-17T12:24:22.759400370Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 17 12:24:23.645454 containerd[1827]: time="2025-01-17T12:24:23.645395476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:23.645574 containerd[1827]: time="2025-01-17T12:24:23.645551624Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053"
Jan 17 12:24:23.646119 containerd[1827]: time="2025-01-17T12:24:23.646102241Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:23.648901 containerd[1827]: time="2025-01-17T12:24:23.648878109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:23.649407 containerd[1827]: time="2025-01-17T12:24:23.649361718Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 889.945709ms"
Jan 17 12:24:23.649407 containerd[1827]: time="2025-01-17T12:24:23.649380626Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\""
Jan 17 12:24:23.649716 containerd[1827]: time="2025-01-17T12:24:23.649658370Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 17 12:24:24.461465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582091650.mount: Deactivated successfully.
Jan 17 12:24:24.683135 containerd[1827]: time="2025-01-17T12:24:24.683068480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:24.683344 containerd[1827]: time="2025-01-17T12:24:24.683224787Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128"
Jan 17 12:24:24.683542 containerd[1827]: time="2025-01-17T12:24:24.683501556Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:24.684538 containerd[1827]: time="2025-01-17T12:24:24.684497089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:24.684952 containerd[1827]: time="2025-01-17T12:24:24.684912220Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.0352387s"
Jan 17 12:24:24.684952 containerd[1827]: time="2025-01-17T12:24:24.684926775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\""
Jan 17 12:24:24.685163 containerd[1827]: time="2025-01-17T12:24:24.685149071Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 17 12:24:25.151834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194527781.mount: Deactivated successfully.
Jan 17 12:24:25.636117 containerd[1827]: time="2025-01-17T12:24:25.636093067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:25.636295 containerd[1827]: time="2025-01-17T12:24:25.636274114Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 17 12:24:25.636696 containerd[1827]: time="2025-01-17T12:24:25.636661857Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:25.638584 containerd[1827]: time="2025-01-17T12:24:25.638539012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:25.639132 containerd[1827]: time="2025-01-17T12:24:25.639099237Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 953.934962ms"
Jan 17 12:24:25.639132 containerd[1827]: time="2025-01-17T12:24:25.639116107Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 17 12:24:25.639367 containerd[1827]: time="2025-01-17T12:24:25.639351475Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 17 12:24:26.108421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814015149.mount: Deactivated successfully.
Jan 17 12:24:26.109767 containerd[1827]: time="2025-01-17T12:24:26.109750437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:26.109914 containerd[1827]: time="2025-01-17T12:24:26.109891175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 17 12:24:26.110399 containerd[1827]: time="2025-01-17T12:24:26.110358754Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:26.111655 containerd[1827]: time="2025-01-17T12:24:26.111613522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:26.112166 containerd[1827]: time="2025-01-17T12:24:26.112090820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.724623ms"
Jan 17 12:24:26.112166 containerd[1827]: time="2025-01-17T12:24:26.112133674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 17 12:24:26.112443 containerd[1827]: time="2025-01-17T12:24:26.112404876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 17 12:24:26.584771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 12:24:26.596226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:24:26.596931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632401506.mount: Deactivated successfully.
Jan 17 12:24:26.814249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:24:26.816395 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:24:26.839368 kubelet[2445]: E0117 12:24:26.839270    2445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:24:26.840359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:24:26.840435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:24:27.831527 containerd[1827]: time="2025-01-17T12:24:27.831473731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:27.831739 containerd[1827]: time="2025-01-17T12:24:27.831688694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Jan 17 12:24:27.832205 containerd[1827]: time="2025-01-17T12:24:27.832164148Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:27.834865 containerd[1827]: time="2025-01-17T12:24:27.834820925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:27.835424 containerd[1827]: time="2025-01-17T12:24:27.835406182Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.722963972s"
Jan 17 12:24:27.835477 containerd[1827]: time="2025-01-17T12:24:27.835426816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 17 12:24:29.651645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:24:29.668327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:24:29.681940 systemd[1]: Reloading requested from client PID 2554 ('systemctl') (unit session-11.scope)...
Jan 17 12:24:29.681949 systemd[1]: Reloading...
Jan 17 12:24:29.723087 zram_generator::config[2593]: No configuration found.
Jan 17 12:24:29.787408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:24:29.846759 systemd[1]: Reloading finished in 164 ms.
Jan 17 12:24:29.886448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:24:29.887692 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:24:29.888647 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:24:29.888746 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:24:29.889581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:24:30.094138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:24:30.098817 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:24:30.118747 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:24:30.118747 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:24:30.118747 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:24:30.118974 kubelet[2663]: I0117 12:24:30.118769    2663 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:24:30.528140 kubelet[2663]: I0117 12:24:30.528098    2663 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 17 12:24:30.528140 kubelet[2663]: I0117 12:24:30.528111    2663 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:24:30.528272 kubelet[2663]: I0117 12:24:30.528232    2663 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 17 12:24:30.541940 kubelet[2663]: I0117 12:24:30.541925    2663 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:24:30.542714 kubelet[2663]: E0117 12:24:30.542701    2663 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.90.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.90.1:6443: connect: connection refused" logger="UnhandledError"
Jan 17 12:24:30.547681 kubelet[2663]: E0117 12:24:30.547666    2663 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 12:24:30.547681 kubelet[2663]: I0117 12:24:30.547681    2663 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 12:24:30.557297 kubelet[2663]: I0117 12:24:30.557287    2663 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:24:30.557385 kubelet[2663]: I0117 12:24:30.557363    2663 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 17 12:24:30.557492 kubelet[2663]: I0117 12:24:30.557477    2663 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:24:30.557596 kubelet[2663]: I0117 12:24:30.557491    2663 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-4c6521d577","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 12:24:30.557680 kubelet[2663]: I0117 12:24:30.557603    2663 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:24:30.557680 kubelet[2663]: I0117 12:24:30.557612    2663 container_manager_linux.go:300] "Creating device plugin manager"
Jan 17 12:24:30.557680 kubelet[2663]: I0117 12:24:30.557673    2663 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:24:30.559251 kubelet[2663]: I0117 12:24:30.559242    2663 
kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:24:30.559287 kubelet[2663]: I0117 12:24:30.559253 2663 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:24:30.559287 kubelet[2663]: I0117 12:24:30.559274 2663 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:24:30.559287 kubelet[2663]: I0117 12:24:30.559285 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:24:30.563794 kubelet[2663]: I0117 12:24:30.563737 2663 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:24:30.565019 kubelet[2663]: W0117 12:24:30.564963 2663 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.90.1:6443: connect: connection refused Jan 17 12:24:30.565083 kubelet[2663]: E0117 12:24:30.565057 2663 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.90.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.90.1:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:24:30.566256 kubelet[2663]: I0117 12:24:30.566227 2663 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:24:30.566304 kubelet[2663]: W0117 12:24:30.566280 2663 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:24:30.566339 kubelet[2663]: W0117 12:24:30.566294 2663 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4c6521d577&limit=500&resourceVersion=0": dial tcp 147.75.90.1:6443: connect: connection refused Jan 17 12:24:30.566339 kubelet[2663]: E0117 12:24:30.566331 2663 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.90.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4c6521d577&limit=500&resourceVersion=0\": dial tcp 147.75.90.1:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:24:30.566600 kubelet[2663]: I0117 12:24:30.566591 2663 server.go:1269] "Started kubelet" Jan 17 12:24:30.566696 kubelet[2663]: I0117 12:24:30.566673 2663 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:24:30.566729 kubelet[2663]: I0117 12:24:30.566672 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:24:30.566865 kubelet[2663]: I0117 12:24:30.566857 2663 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:24:30.567492 kubelet[2663]: I0117 12:24:30.567485 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:24:30.567540 kubelet[2663]: I0117 12:24:30.567529 2663 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:24:30.567572 kubelet[2663]: E0117 12:24:30.567555 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:30.567572 kubelet[2663]: I0117 12:24:30.567559 2663 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:24:30.567634 
kubelet[2663]: I0117 12:24:30.567579 2663 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:24:30.567690 kubelet[2663]: I0117 12:24:30.567675 2663 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:24:30.567730 kubelet[2663]: E0117 12:24:30.567703 2663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-4c6521d577?timeout=10s\": dial tcp 147.75.90.1:6443: connect: connection refused" interval="200ms" Jan 17 12:24:30.567759 kubelet[2663]: I0117 12:24:30.567730 2663 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:24:30.567779 kubelet[2663]: W0117 12:24:30.567747 2663 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.1:6443: connect: connection refused Jan 17 12:24:30.567799 kubelet[2663]: E0117 12:24:30.567782 2663 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.90.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.90.1:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:24:30.567825 kubelet[2663]: I0117 12:24:30.567818 2663 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:24:30.567825 kubelet[2663]: E0117 12:24:30.567821 2663 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:24:30.567877 kubelet[2663]: I0117 12:24:30.567867 2663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:24:30.568334 kubelet[2663]: I0117 12:24:30.568326 2663 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:24:30.589194 kubelet[2663]: E0117 12:24:30.569658 2663 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.1:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.1:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-4c6521d577.181b7a68828787ce default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-4c6521d577,UID:ci-4081.3.0-a-4c6521d577,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-4c6521d577,},FirstTimestamp:2025-01-17 12:24:30.566565838 +0000 UTC m=+0.465645556,LastTimestamp:2025-01-17 12:24:30.566565838 +0000 UTC m=+0.465645556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-4c6521d577,}" Jan 17 12:24:30.592837 kubelet[2663]: I0117 12:24:30.592785 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:24:30.593511 kubelet[2663]: I0117 12:24:30.593469 2663 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:24:30.593511 kubelet[2663]: I0117 12:24:30.593489 2663 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:24:30.593511 kubelet[2663]: I0117 12:24:30.593502 2663 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:24:30.593633 kubelet[2663]: E0117 12:24:30.593526 2663 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:24:30.595171 kubelet[2663]: W0117 12:24:30.595114 2663 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.1:6443: connect: connection refused Jan 17 12:24:30.595171 kubelet[2663]: E0117 12:24:30.595150 2663 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.90.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.90.1:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:24:30.649105 kubelet[2663]: I0117 12:24:30.649048 2663 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:24:30.649105 kubelet[2663]: I0117 12:24:30.649078 2663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:24:30.649105 kubelet[2663]: I0117 12:24:30.649094 2663 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:24:30.650311 kubelet[2663]: I0117 12:24:30.650274 2663 policy_none.go:49] "None policy: Start" Jan 17 12:24:30.650686 kubelet[2663]: I0117 12:24:30.650647 2663 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:24:30.650686 kubelet[2663]: I0117 12:24:30.650662 2663 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:24:30.653281 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Jan 17 12:24:30.665547 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:24:30.667297 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:24:30.667654 kubelet[2663]: E0117 12:24:30.667619 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:30.679596 kubelet[2663]: I0117 12:24:30.679560 2663 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:24:30.679698 kubelet[2663]: I0117 12:24:30.679651 2663 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:24:30.679698 kubelet[2663]: I0117 12:24:30.679658 2663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:24:30.679752 kubelet[2663]: I0117 12:24:30.679745 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:24:30.680192 kubelet[2663]: E0117 12:24:30.680182 2663 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:30.699641 systemd[1]: Created slice kubepods-burstable-pod47e0e243c647639e414cde91b2c5c937.slice - libcontainer container kubepods-burstable-pod47e0e243c647639e414cde91b2c5c937.slice. Jan 17 12:24:30.727280 systemd[1]: Created slice kubepods-burstable-pod17d07ed6284cbe5a6df2aa4dc2ee4537.slice - libcontainer container kubepods-burstable-pod17d07ed6284cbe5a6df2aa4dc2ee4537.slice. Jan 17 12:24:30.749087 systemd[1]: Created slice kubepods-burstable-podfc181a5fedaaebb4352103111a1a0068.slice - libcontainer container kubepods-burstable-podfc181a5fedaaebb4352103111a1a0068.slice. 
Jan 17 12:24:30.769039 kubelet[2663]: E0117 12:24:30.768930 2663 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-4c6521d577?timeout=10s\": dial tcp 147.75.90.1:6443: connect: connection refused" interval="400ms" Jan 17 12:24:30.769039 kubelet[2663]: I0117 12:24:30.768961 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769413 kubelet[2663]: I0117 12:24:30.769121 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769413 kubelet[2663]: I0117 12:24:30.769183 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc181a5fedaaebb4352103111a1a0068-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-4c6521d577\" (UID: \"fc181a5fedaaebb4352103111a1a0068\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769413 kubelet[2663]: I0117 12:24:30.769227 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47e0e243c647639e414cde91b2c5c937-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" (UID: 
\"47e0e243c647639e414cde91b2c5c937\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769413 kubelet[2663]: I0117 12:24:30.769292 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e0e243c647639e414cde91b2c5c937-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" (UID: \"47e0e243c647639e414cde91b2c5c937\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769413 kubelet[2663]: I0117 12:24:30.769361 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769899 kubelet[2663]: I0117 12:24:30.769439 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769899 kubelet[2663]: I0117 12:24:30.769500 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.769899 kubelet[2663]: I0117 12:24:30.769564 2663 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47e0e243c647639e414cde91b2c5c937-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" (UID: \"47e0e243c647639e414cde91b2c5c937\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.783915 kubelet[2663]: I0117 12:24:30.783713 2663 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.784514 kubelet[2663]: E0117 12:24:30.784436 2663 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.90.1:6443/api/v1/nodes\": dial tcp 147.75.90.1:6443: connect: connection refused" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.988835 kubelet[2663]: I0117 12:24:30.988762 2663 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:30.989570 kubelet[2663]: E0117 12:24:30.989460 2663 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.90.1:6443/api/v1/nodes\": dial tcp 147.75.90.1:6443: connect: connection refused" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:31.024551 containerd[1827]: time="2025-01-17T12:24:31.024413528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-4c6521d577,Uid:47e0e243c647639e414cde91b2c5c937,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:31.042766 containerd[1827]: time="2025-01-17T12:24:31.042711536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-4c6521d577,Uid:17d07ed6284cbe5a6df2aa4dc2ee4537,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:31.054422 containerd[1827]: time="2025-01-17T12:24:31.054400149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-4c6521d577,Uid:fc181a5fedaaebb4352103111a1a0068,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:31.170612 kubelet[2663]: E0117 12:24:31.170456 2663 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://147.75.90.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-4c6521d577?timeout=10s\": dial tcp 147.75.90.1:6443: connect: connection refused" interval="800ms" Jan 17 12:24:31.391411 kubelet[2663]: I0117 12:24:31.391369 2663 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:31.391593 kubelet[2663]: E0117 12:24:31.391578 2663 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.90.1:6443/api/v1/nodes\": dial tcp 147.75.90.1:6443: connect: connection refused" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:31.480127 kubelet[2663]: W0117 12:24:31.480095 2663 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.1:6443: connect: connection refused Jan 17 12:24:31.480127 kubelet[2663]: E0117 12:24:31.480126 2663 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.90.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.90.1:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:24:31.496213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920230580.mount: Deactivated successfully. 
Jan 17 12:24:31.497459 containerd[1827]: time="2025-01-17T12:24:31.497412079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:31.497695 containerd[1827]: time="2025-01-17T12:24:31.497643551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:24:31.498640 containerd[1827]: time="2025-01-17T12:24:31.498599347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:31.498769 containerd[1827]: time="2025-01-17T12:24:31.498721843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:24:31.499389 containerd[1827]: time="2025-01-17T12:24:31.499371984Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:31.499783 containerd[1827]: time="2025-01-17T12:24:31.499761628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:24:31.501206 containerd[1827]: time="2025-01-17T12:24:31.501193715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:31.501599 containerd[1827]: time="2025-01-17T12:24:31.501584170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:31.502866 
containerd[1827]: time="2025-01-17T12:24:31.502821137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.225067ms" Jan 17 12:24:31.504486 containerd[1827]: time="2025-01-17T12:24:31.504430979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 461.684243ms" Jan 17 12:24:31.504872 containerd[1827]: time="2025-01-17T12:24:31.504827379Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 450.388892ms" Jan 17 12:24:31.603888 containerd[1827]: time="2025-01-17T12:24:31.603835216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:31.603888 containerd[1827]: time="2025-01-17T12:24:31.603870552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:31.603888 containerd[1827]: time="2025-01-17T12:24:31.603877638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:31.603888 containerd[1827]: time="2025-01-17T12:24:31.603867392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:31.603888 containerd[1827]: time="2025-01-17T12:24:31.603888709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:31.604075 containerd[1827]: time="2025-01-17T12:24:31.603896089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:31.604075 containerd[1827]: time="2025-01-17T12:24:31.603930513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:31.604075 containerd[1827]: time="2025-01-17T12:24:31.603940341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:31.604075 containerd[1827]: time="2025-01-17T12:24:31.603977769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:31.604075 containerd[1827]: time="2025-01-17T12:24:31.603999609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:31.604075 containerd[1827]: time="2025-01-17T12:24:31.604013488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:31.604075 containerd[1827]: time="2025-01-17T12:24:31.604051040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:31.621121 systemd[1]: Started cri-containerd-8ac836668a7b74a915d040cc5df9f7d462453395bd90da7e420a2a4682baf7f4.scope - libcontainer container 8ac836668a7b74a915d040cc5df9f7d462453395bd90da7e420a2a4682baf7f4. 
Jan 17 12:24:31.621778 systemd[1]: Started cri-containerd-96d63f13eb265d4290d9062e6c2501159e87b650d699ae8e19b0f2d114d0f41d.scope - libcontainer container 96d63f13eb265d4290d9062e6c2501159e87b650d699ae8e19b0f2d114d0f41d. Jan 17 12:24:31.622450 systemd[1]: Started cri-containerd-b51a21ef333842c255a1539d923439f77519a432184f2533ae63b2ba72ababf2.scope - libcontainer container b51a21ef333842c255a1539d923439f77519a432184f2533ae63b2ba72ababf2. Jan 17 12:24:31.642915 containerd[1827]: time="2025-01-17T12:24:31.642812330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-4c6521d577,Uid:47e0e243c647639e414cde91b2c5c937,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ac836668a7b74a915d040cc5df9f7d462453395bd90da7e420a2a4682baf7f4\"" Jan 17 12:24:31.643648 containerd[1827]: time="2025-01-17T12:24:31.643630650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-4c6521d577,Uid:fc181a5fedaaebb4352103111a1a0068,Namespace:kube-system,Attempt:0,} returns sandbox id \"b51a21ef333842c255a1539d923439f77519a432184f2533ae63b2ba72ababf2\"" Jan 17 12:24:31.644306 containerd[1827]: time="2025-01-17T12:24:31.644292005Z" level=info msg="CreateContainer within sandbox \"8ac836668a7b74a915d040cc5df9f7d462453395bd90da7e420a2a4682baf7f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:24:31.644336 containerd[1827]: time="2025-01-17T12:24:31.644311895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-4c6521d577,Uid:17d07ed6284cbe5a6df2aa4dc2ee4537,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d63f13eb265d4290d9062e6c2501159e87b650d699ae8e19b0f2d114d0f41d\"" Jan 17 12:24:31.644454 containerd[1827]: time="2025-01-17T12:24:31.644443933Z" level=info msg="CreateContainer within sandbox \"b51a21ef333842c255a1539d923439f77519a432184f2533ae63b2ba72ababf2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 
12:24:31.645121 containerd[1827]: time="2025-01-17T12:24:31.645110622Z" level=info msg="CreateContainer within sandbox \"96d63f13eb265d4290d9062e6c2501159e87b650d699ae8e19b0f2d114d0f41d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:24:31.652147 containerd[1827]: time="2025-01-17T12:24:31.652127383Z" level=info msg="CreateContainer within sandbox \"b51a21ef333842c255a1539d923439f77519a432184f2533ae63b2ba72ababf2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d278e3d8a94bef98fdda75e57499a01b1c8e91e1f47a84424829add4f824eef\"" Jan 17 12:24:31.652476 containerd[1827]: time="2025-01-17T12:24:31.652462628Z" level=info msg="StartContainer for \"3d278e3d8a94bef98fdda75e57499a01b1c8e91e1f47a84424829add4f824eef\"" Jan 17 12:24:31.653303 containerd[1827]: time="2025-01-17T12:24:31.653260277Z" level=info msg="CreateContainer within sandbox \"96d63f13eb265d4290d9062e6c2501159e87b650d699ae8e19b0f2d114d0f41d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ffd6c9dc1be0f29451dd91d75c80849642d001b69d57e9a3c06e8646e9fc098\"" Jan 17 12:24:31.653413 containerd[1827]: time="2025-01-17T12:24:31.653400099Z" level=info msg="StartContainer for \"6ffd6c9dc1be0f29451dd91d75c80849642d001b69d57e9a3c06e8646e9fc098\"" Jan 17 12:24:31.653625 containerd[1827]: time="2025-01-17T12:24:31.653585965Z" level=info msg="CreateContainer within sandbox \"8ac836668a7b74a915d040cc5df9f7d462453395bd90da7e420a2a4682baf7f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3fe04e8b8b2577a70f82f5d5ebf9d59f8637986e072434429d8b5862aacb22cd\"" Jan 17 12:24:31.653744 containerd[1827]: time="2025-01-17T12:24:31.653710567Z" level=info msg="StartContainer for \"3fe04e8b8b2577a70f82f5d5ebf9d59f8637986e072434429d8b5862aacb22cd\"" Jan 17 12:24:31.676317 systemd[1]: Started cri-containerd-3d278e3d8a94bef98fdda75e57499a01b1c8e91e1f47a84424829add4f824eef.scope - libcontainer 
container 3d278e3d8a94bef98fdda75e57499a01b1c8e91e1f47a84424829add4f824eef. Jan 17 12:24:31.676871 systemd[1]: Started cri-containerd-3fe04e8b8b2577a70f82f5d5ebf9d59f8637986e072434429d8b5862aacb22cd.scope - libcontainer container 3fe04e8b8b2577a70f82f5d5ebf9d59f8637986e072434429d8b5862aacb22cd. Jan 17 12:24:31.677387 systemd[1]: Started cri-containerd-6ffd6c9dc1be0f29451dd91d75c80849642d001b69d57e9a3c06e8646e9fc098.scope - libcontainer container 6ffd6c9dc1be0f29451dd91d75c80849642d001b69d57e9a3c06e8646e9fc098. Jan 17 12:24:31.694321 kubelet[2663]: W0117 12:24:31.694248 2663 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4c6521d577&limit=500&resourceVersion=0": dial tcp 147.75.90.1:6443: connect: connection refused Jan 17 12:24:31.694321 kubelet[2663]: E0117 12:24:31.694302 2663 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.90.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-4c6521d577&limit=500&resourceVersion=0\": dial tcp 147.75.90.1:6443: connect: connection refused" logger="UnhandledError" Jan 17 12:24:31.701224 containerd[1827]: time="2025-01-17T12:24:31.701196208Z" level=info msg="StartContainer for \"3d278e3d8a94bef98fdda75e57499a01b1c8e91e1f47a84424829add4f824eef\" returns successfully" Jan 17 12:24:31.701327 containerd[1827]: time="2025-01-17T12:24:31.701260254Z" level=info msg="StartContainer for \"3fe04e8b8b2577a70f82f5d5ebf9d59f8637986e072434429d8b5862aacb22cd\" returns successfully" Jan 17 12:24:31.701327 containerd[1827]: time="2025-01-17T12:24:31.701196447Z" level=info msg="StartContainer for \"6ffd6c9dc1be0f29451dd91d75c80849642d001b69d57e9a3c06e8646e9fc098\" returns successfully" Jan 17 12:24:32.193436 kubelet[2663]: I0117 12:24:32.193420 2663 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:32.267454 kubelet[2663]: E0117 12:24:32.267435 2663 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-4c6521d577\" not found" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:32.370949 kubelet[2663]: I0117 12:24:32.370925 2663 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:32.371029 kubelet[2663]: E0117 12:24:32.370955 2663 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-a-4c6521d577\": node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:32.376006 kubelet[2663]: E0117 12:24:32.375987 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:32.418341 kubelet[2663]: E0117 12:24:32.418285 2663 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-4c6521d577.181b7a68828787ce default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-4c6521d577,UID:ci-4081.3.0-a-4c6521d577,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-4c6521d577,},FirstTimestamp:2025-01-17 12:24:30.566565838 +0000 UTC m=+0.465645556,LastTimestamp:2025-01-17 12:24:30.566565838 +0000 UTC m=+0.465645556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-4c6521d577,}" Jan 17 12:24:32.474819 kubelet[2663]: E0117 12:24:32.474487 2663 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-4c6521d577.181b7a68829a9c8c default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-4c6521d577,UID:ci-4081.3.0-a-4c6521d577,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-4c6521d577,},FirstTimestamp:2025-01-17 12:24:30.567816332 +0000 UTC m=+0.466896051,LastTimestamp:2025-01-17 12:24:30.567816332 +0000 UTC m=+0.466896051,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-4c6521d577,}" Jan 17 12:24:32.476647 kubelet[2663]: E0117 12:24:32.476597 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:32.530581 kubelet[2663]: E0117 12:24:32.530342 2663 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-4c6521d577.181b7a68876c6c2a default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-4c6521d577,UID:ci-4081.3.0-a-4c6521d577,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.0-a-4c6521d577 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-4c6521d577,},FirstTimestamp:2025-01-17 12:24:30.64867537 +0000 UTC m=+0.547755097,LastTimestamp:2025-01-17 12:24:30.64867537 +0000 UTC m=+0.547755097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-4c6521d577,}" Jan 17 12:24:32.577139 kubelet[2663]: E0117 12:24:32.577085 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:32.678241 kubelet[2663]: E0117 12:24:32.678149 2663 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:32.779369 kubelet[2663]: E0117 12:24:32.779141 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:32.880363 kubelet[2663]: E0117 12:24:32.880255 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:32.981602 kubelet[2663]: E0117 12:24:32.981499 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:33.082212 kubelet[2663]: E0117 12:24:33.081965 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:33.183209 kubelet[2663]: E0117 12:24:33.183097 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:33.284144 kubelet[2663]: E0117 12:24:33.284079 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:33.385027 kubelet[2663]: E0117 12:24:33.384931 2663 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:33.560680 kubelet[2663]: I0117 12:24:33.560613 2663 apiserver.go:52] "Watching apiserver" Jan 17 12:24:33.568876 kubelet[2663]: I0117 12:24:33.568782 2663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:24:33.613704 kubelet[2663]: W0117 12:24:33.613613 2663 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:34.066696 kubelet[2663]: W0117 12:24:34.066626 2663 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:34.298586 kubelet[2663]: W0117 12:24:34.298513 2663 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:35.043799 systemd[1]: Reloading requested from client PID 2985 ('systemctl') (unit session-11.scope)... Jan 17 12:24:35.043806 systemd[1]: Reloading... Jan 17 12:24:35.082076 zram_generator::config[3024]: No configuration found. Jan 17 12:24:35.155462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:24:35.222958 systemd[1]: Reloading finished in 178 ms. Jan 17 12:24:35.249236 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:24:35.253738 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:24:35.253838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:24:35.265511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:24:35.502564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:24:35.518572 (kubelet)[3088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:24:35.546493 kubelet[3088]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:24:35.546493 kubelet[3088]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 17 12:24:35.546493 kubelet[3088]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:24:35.546770 kubelet[3088]: I0117 12:24:35.546537 3088 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:24:35.550543 kubelet[3088]: I0117 12:24:35.550503 3088 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:24:35.550543 kubelet[3088]: I0117 12:24:35.550517 3088 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:24:35.550727 kubelet[3088]: I0117 12:24:35.550683 3088 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:24:35.551617 kubelet[3088]: I0117 12:24:35.551583 3088 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:24:35.553100 kubelet[3088]: I0117 12:24:35.553057 3088 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:24:35.555671 kubelet[3088]: E0117 12:24:35.555623 3088 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:24:35.555671 kubelet[3088]: I0117 12:24:35.555642 3088 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:24:35.564758 kubelet[3088]: I0117 12:24:35.564708 3088 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:24:35.564798 kubelet[3088]: I0117 12:24:35.564771 3088 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:24:35.564895 kubelet[3088]: I0117 12:24:35.564848 3088 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:24:35.564979 kubelet[3088]: I0117 12:24:35.564864 3088 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-4c6521d577","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:24:35.565062 kubelet[3088]: I0117 12:24:35.564982 3088 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:24:35.565062 kubelet[3088]: I0117 12:24:35.564989 3088 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:24:35.565062 kubelet[3088]: I0117 12:24:35.565017 3088 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:24:35.565129 kubelet[3088]: I0117 12:24:35.565083 3088 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:24:35.565129 kubelet[3088]: I0117 12:24:35.565092 3088 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:24:35.565129 kubelet[3088]: I0117 12:24:35.565111 3088 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:24:35.565129 kubelet[3088]: I0117 12:24:35.565120 3088 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:24:35.565518 kubelet[3088]: I0117 12:24:35.565482 3088 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:24:35.565819 kubelet[3088]: I0117 12:24:35.565779 3088 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:24:35.566098 kubelet[3088]: I0117 12:24:35.566046 3088 server.go:1269] "Started kubelet" Jan 17 12:24:35.566098 kubelet[3088]: I0117 12:24:35.566082 3088 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:24:35.566190 kubelet[3088]: I0117 12:24:35.566115 3088 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:24:35.566708 kubelet[3088]: I0117 12:24:35.566688 3088 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:24:35.567766 kubelet[3088]: I0117 12:24:35.567751 3088 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 
17 12:24:35.567833 kubelet[3088]: I0117 12:24:35.567783 3088 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:24:35.567959 kubelet[3088]: E0117 12:24:35.567909 3088 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-4c6521d577\" not found" Jan 17 12:24:35.567959 kubelet[3088]: I0117 12:24:35.567949 3088 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:24:35.568075 kubelet[3088]: I0117 12:24:35.567971 3088 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:24:35.568145 kubelet[3088]: I0117 12:24:35.568130 3088 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:24:35.568246 kubelet[3088]: I0117 12:24:35.568210 3088 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:24:35.568310 kubelet[3088]: E0117 12:24:35.568253 3088 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:24:35.568310 kubelet[3088]: I0117 12:24:35.568267 3088 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:24:35.568310 kubelet[3088]: I0117 12:24:35.568277 3088 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:24:35.568981 kubelet[3088]: I0117 12:24:35.568967 3088 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:24:35.575114 kubelet[3088]: I0117 12:24:35.575088 3088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:24:35.575765 kubelet[3088]: I0117 12:24:35.575753 3088 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:24:35.575765 kubelet[3088]: I0117 12:24:35.575767 3088 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:24:35.575838 kubelet[3088]: I0117 12:24:35.575779 3088 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:24:35.575838 kubelet[3088]: E0117 12:24:35.575805 3088 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:24:35.589491 kubelet[3088]: I0117 12:24:35.589452 3088 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:24:35.589491 kubelet[3088]: I0117 12:24:35.589464 3088 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:24:35.589491 kubelet[3088]: I0117 12:24:35.589479 3088 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:24:35.589628 kubelet[3088]: I0117 12:24:35.589584 3088 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:24:35.589628 kubelet[3088]: I0117 12:24:35.589592 3088 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:24:35.589628 kubelet[3088]: I0117 12:24:35.589606 3088 policy_none.go:49] "None policy: Start" Jan 17 12:24:35.589981 kubelet[3088]: I0117 12:24:35.589941 3088 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:24:35.589981 kubelet[3088]: I0117 12:24:35.589957 3088 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:24:35.590125 kubelet[3088]: I0117 12:24:35.590089 3088 state_mem.go:75] "Updated machine memory state" Jan 17 12:24:35.592604 kubelet[3088]: I0117 12:24:35.592562 3088 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:24:35.592724 kubelet[3088]: I0117 12:24:35.592687 3088 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:24:35.592724 kubelet[3088]: I0117 12:24:35.592699 3088 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:24:35.592794 kubelet[3088]: I0117 12:24:35.592783 3088 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:24:35.685061 kubelet[3088]: W0117 12:24:35.684970 3088 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:35.685381 kubelet[3088]: E0117 12:24:35.685138 3088 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-4c6521d577\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.685560 kubelet[3088]: W0117 12:24:35.685520 3088 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:35.685674 kubelet[3088]: W0117 12:24:35.685535 3088 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:35.685674 kubelet[3088]: E0117 12:24:35.685637 3088 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.685861 kubelet[3088]: E0117 12:24:35.685681 3088 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.700329 kubelet[3088]: I0117 12:24:35.700269 3088 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.709559 kubelet[3088]: I0117 12:24:35.709481 3088 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.709849 kubelet[3088]: I0117 12:24:35.709657 
3088 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.769803 kubelet[3088]: I0117 12:24:35.769553 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.769803 kubelet[3088]: I0117 12:24:35.769663 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.769803 kubelet[3088]: I0117 12:24:35.769736 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.770267 kubelet[3088]: I0117 12:24:35.769808 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.770267 kubelet[3088]: I0117 12:24:35.769882 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47e0e243c647639e414cde91b2c5c937-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" (UID: \"47e0e243c647639e414cde91b2c5c937\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.770267 kubelet[3088]: I0117 12:24:35.769949 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47e0e243c647639e414cde91b2c5c937-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" (UID: \"47e0e243c647639e414cde91b2c5c937\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.770267 kubelet[3088]: I0117 12:24:35.770022 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17d07ed6284cbe5a6df2aa4dc2ee4537-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" (UID: \"17d07ed6284cbe5a6df2aa4dc2ee4537\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.770267 kubelet[3088]: I0117 12:24:35.770130 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc181a5fedaaebb4352103111a1a0068-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-4c6521d577\" (UID: \"fc181a5fedaaebb4352103111a1a0068\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:35.770799 kubelet[3088]: I0117 12:24:35.770209 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47e0e243c647639e414cde91b2c5c937-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" (UID: \"47e0e243c647639e414cde91b2c5c937\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:36.565215 
kubelet[3088]: I0117 12:24:36.565201 3088 apiserver.go:52] "Watching apiserver" Jan 17 12:24:36.568587 kubelet[3088]: I0117 12:24:36.568574 3088 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:24:36.583908 kubelet[3088]: W0117 12:24:36.583894 3088 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:36.583981 kubelet[3088]: E0117 12:24:36.583934 3088 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-4c6521d577\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:36.584195 kubelet[3088]: W0117 12:24:36.584171 3088 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:36.584264 kubelet[3088]: W0117 12:24:36.584228 3088 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:24:36.584285 kubelet[3088]: E0117 12:24:36.584246 3088 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-4c6521d577\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:36.584285 kubelet[3088]: E0117 12:24:36.584268 3088 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-4c6521d577\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" Jan 17 12:24:36.591895 kubelet[3088]: I0117 12:24:36.591829 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-4c6521d577" podStartSLOduration=2.591819834 podStartE2EDuration="2.591819834s" podCreationTimestamp="2025-01-17 12:24:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:24:36.591758076 +0000 UTC m=+1.067139252" watchObservedRunningTime="2025-01-17 12:24:36.591819834 +0000 UTC m=+1.067201007" Jan 17 12:24:36.598881 kubelet[3088]: I0117 12:24:36.598855 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-4c6521d577" podStartSLOduration=2.598844489 podStartE2EDuration="2.598844489s" podCreationTimestamp="2025-01-17 12:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:24:36.595180506 +0000 UTC m=+1.070561682" watchObservedRunningTime="2025-01-17 12:24:36.598844489 +0000 UTC m=+1.074225665" Jan 17 12:24:36.598975 kubelet[3088]: I0117 12:24:36.598902 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-4c6521d577" podStartSLOduration=3.598896043 podStartE2EDuration="3.598896043s" podCreationTimestamp="2025-01-17 12:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:24:36.598896143 +0000 UTC m=+1.074277324" watchObservedRunningTime="2025-01-17 12:24:36.598896043 +0000 UTC m=+1.074277216" Jan 17 12:24:39.676397 sudo[2109]: pam_unix(sudo:session): session closed for user root Jan 17 12:24:39.677323 sshd[2106]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:39.679200 systemd[1]: sshd@9-147.75.90.1:22-147.75.109.163:49284.service: Deactivated successfully. Jan 17 12:24:39.679985 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:24:39.680078 systemd[1]: session-11.scope: Consumed 3.153s CPU time, 167.7M memory peak, 0B memory swap peak. Jan 17 12:24:39.680348 systemd-logind[1809]: Session 11 logged out. Waiting for processes to exit. 
Jan 17 12:24:39.680805 systemd-logind[1809]: Removed session 11. Jan 17 12:24:41.185822 kubelet[3088]: I0117 12:24:41.185759 3088 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:24:41.186792 containerd[1827]: time="2025-01-17T12:24:41.186542110Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:24:41.187477 kubelet[3088]: I0117 12:24:41.186985 3088 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:24:41.876152 systemd[1]: Created slice kubepods-besteffort-pod20f7308e_72b9_4a4e_aa44_86318d9645a4.slice - libcontainer container kubepods-besteffort-pod20f7308e_72b9_4a4e_aa44_86318d9645a4.slice. Jan 17 12:24:41.913622 kubelet[3088]: I0117 12:24:41.913547 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20f7308e-72b9-4a4e-aa44-86318d9645a4-kube-proxy\") pod \"kube-proxy-5lg2r\" (UID: \"20f7308e-72b9-4a4e-aa44-86318d9645a4\") " pod="kube-system/kube-proxy-5lg2r" Jan 17 12:24:41.913845 kubelet[3088]: I0117 12:24:41.913645 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hfn\" (UniqueName: \"kubernetes.io/projected/20f7308e-72b9-4a4e-aa44-86318d9645a4-kube-api-access-g8hfn\") pod \"kube-proxy-5lg2r\" (UID: \"20f7308e-72b9-4a4e-aa44-86318d9645a4\") " pod="kube-system/kube-proxy-5lg2r" Jan 17 12:24:41.913845 kubelet[3088]: I0117 12:24:41.913718 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20f7308e-72b9-4a4e-aa44-86318d9645a4-xtables-lock\") pod \"kube-proxy-5lg2r\" (UID: \"20f7308e-72b9-4a4e-aa44-86318d9645a4\") " pod="kube-system/kube-proxy-5lg2r" Jan 17 12:24:41.913845 kubelet[3088]: I0117 12:24:41.913772 3088 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20f7308e-72b9-4a4e-aa44-86318d9645a4-lib-modules\") pod \"kube-proxy-5lg2r\" (UID: \"20f7308e-72b9-4a4e-aa44-86318d9645a4\") " pod="kube-system/kube-proxy-5lg2r" Jan 17 12:24:42.026888 kubelet[3088]: E0117 12:24:42.026784 3088 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:24:42.026888 kubelet[3088]: E0117 12:24:42.026845 3088 projected.go:194] Error preparing data for projected volume kube-api-access-g8hfn for pod kube-system/kube-proxy-5lg2r: configmap "kube-root-ca.crt" not found Jan 17 12:24:42.027354 kubelet[3088]: E0117 12:24:42.026976 3088 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20f7308e-72b9-4a4e-aa44-86318d9645a4-kube-api-access-g8hfn podName:20f7308e-72b9-4a4e-aa44-86318d9645a4 nodeName:}" failed. No retries permitted until 2025-01-17 12:24:42.526928828 +0000 UTC m=+7.002310072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g8hfn" (UniqueName: "kubernetes.io/projected/20f7308e-72b9-4a4e-aa44-86318d9645a4-kube-api-access-g8hfn") pod "kube-proxy-5lg2r" (UID: "20f7308e-72b9-4a4e-aa44-86318d9645a4") : configmap "kube-root-ca.crt" not found Jan 17 12:24:42.246061 systemd[1]: Created slice kubepods-besteffort-pod42a44989_2218_4bdf_8237_726fcb6f5ea8.slice - libcontainer container kubepods-besteffort-pod42a44989_2218_4bdf_8237_726fcb6f5ea8.slice. 
Jan 17 12:24:42.317385 kubelet[3088]: I0117 12:24:42.317194 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42a44989-2218-4bdf-8237-726fcb6f5ea8-var-lib-calico\") pod \"tigera-operator-76c4976dd7-c7cj9\" (UID: \"42a44989-2218-4bdf-8237-726fcb6f5ea8\") " pod="tigera-operator/tigera-operator-76c4976dd7-c7cj9"
Jan 17 12:24:42.317385 kubelet[3088]: I0117 12:24:42.317366 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhlf\" (UniqueName: \"kubernetes.io/projected/42a44989-2218-4bdf-8237-726fcb6f5ea8-kube-api-access-6lhlf\") pod \"tigera-operator-76c4976dd7-c7cj9\" (UID: \"42a44989-2218-4bdf-8237-726fcb6f5ea8\") " pod="tigera-operator/tigera-operator-76c4976dd7-c7cj9"
Jan 17 12:24:42.551805 containerd[1827]: time="2025-01-17T12:24:42.551565007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-c7cj9,Uid:42a44989-2218-4bdf-8237-726fcb6f5ea8,Namespace:tigera-operator,Attempt:0,}"
Jan 17 12:24:42.563497 containerd[1827]: time="2025-01-17T12:24:42.563463771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:24:42.563497 containerd[1827]: time="2025-01-17T12:24:42.563491243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:24:42.563497 containerd[1827]: time="2025-01-17T12:24:42.563498616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:42.563627 containerd[1827]: time="2025-01-17T12:24:42.563554511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:42.586456 systemd[1]: Started cri-containerd-57201f9fbf5e6de3c6d20ea5441e5930add1defbc3787e41129dff7dd26168c7.scope - libcontainer container 57201f9fbf5e6de3c6d20ea5441e5930add1defbc3787e41129dff7dd26168c7.
Jan 17 12:24:42.662186 containerd[1827]: time="2025-01-17T12:24:42.662136139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-c7cj9,Uid:42a44989-2218-4bdf-8237-726fcb6f5ea8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"57201f9fbf5e6de3c6d20ea5441e5930add1defbc3787e41129dff7dd26168c7\""
Jan 17 12:24:42.663230 containerd[1827]: time="2025-01-17T12:24:42.663187968Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 17 12:24:42.800429 containerd[1827]: time="2025-01-17T12:24:42.800342696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lg2r,Uid:20f7308e-72b9-4a4e-aa44-86318d9645a4,Namespace:kube-system,Attempt:0,}"
Jan 17 12:24:42.810996 containerd[1827]: time="2025-01-17T12:24:42.810902700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:24:42.811230 containerd[1827]: time="2025-01-17T12:24:42.811190545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:24:42.811230 containerd[1827]: time="2025-01-17T12:24:42.811223185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:42.811314 containerd[1827]: time="2025-01-17T12:24:42.811274003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:42.831219 systemd[1]: Started cri-containerd-45967ee19c7fe69c2d7010789a756d98218889f7b418c00454a7395f4c846398.scope - libcontainer container 45967ee19c7fe69c2d7010789a756d98218889f7b418c00454a7395f4c846398.
Jan 17 12:24:42.843500 containerd[1827]: time="2025-01-17T12:24:42.843444264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lg2r,Uid:20f7308e-72b9-4a4e-aa44-86318d9645a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"45967ee19c7fe69c2d7010789a756d98218889f7b418c00454a7395f4c846398\""
Jan 17 12:24:42.845036 containerd[1827]: time="2025-01-17T12:24:42.845015093Z" level=info msg="CreateContainer within sandbox \"45967ee19c7fe69c2d7010789a756d98218889f7b418c00454a7395f4c846398\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:24:42.851820 containerd[1827]: time="2025-01-17T12:24:42.851775592Z" level=info msg="CreateContainer within sandbox \"45967ee19c7fe69c2d7010789a756d98218889f7b418c00454a7395f4c846398\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5acf2407e49bce04fc709b4c7320dee8fe7e3d2a45fa284282b31d274a46f33a\""
Jan 17 12:24:42.852087 containerd[1827]: time="2025-01-17T12:24:42.852028008Z" level=info msg="StartContainer for \"5acf2407e49bce04fc709b4c7320dee8fe7e3d2a45fa284282b31d274a46f33a\""
Jan 17 12:24:42.877287 systemd[1]: Started cri-containerd-5acf2407e49bce04fc709b4c7320dee8fe7e3d2a45fa284282b31d274a46f33a.scope - libcontainer container 5acf2407e49bce04fc709b4c7320dee8fe7e3d2a45fa284282b31d274a46f33a.
Jan 17 12:24:42.893106 containerd[1827]: time="2025-01-17T12:24:42.893077604Z" level=info msg="StartContainer for \"5acf2407e49bce04fc709b4c7320dee8fe7e3d2a45fa284282b31d274a46f33a\" returns successfully"
Jan 17 12:24:43.634982 kubelet[3088]: I0117 12:24:43.634872 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5lg2r" podStartSLOduration=2.634835516 podStartE2EDuration="2.634835516s" podCreationTimestamp="2025-01-17 12:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:24:43.634458023 +0000 UTC m=+8.109839268" watchObservedRunningTime="2025-01-17 12:24:43.634835516 +0000 UTC m=+8.110216761"
Jan 17 12:24:47.184512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173538723.mount: Deactivated successfully.
Jan 17 12:24:47.390216 containerd[1827]: time="2025-01-17T12:24:47.390160312Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:47.390413 containerd[1827]: time="2025-01-17T12:24:47.390385051Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764305"
Jan 17 12:24:47.390761 containerd[1827]: time="2025-01-17T12:24:47.390720536Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:47.391753 containerd[1827]: time="2025-01-17T12:24:47.391712628Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:47.392242 containerd[1827]: time="2025-01-17T12:24:47.392198909Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.728991463s"
Jan 17 12:24:47.392242 containerd[1827]: time="2025-01-17T12:24:47.392214949Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 17 12:24:47.393190 containerd[1827]: time="2025-01-17T12:24:47.393149896Z" level=info msg="CreateContainer within sandbox \"57201f9fbf5e6de3c6d20ea5441e5930add1defbc3787e41129dff7dd26168c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 12:24:47.397070 containerd[1827]: time="2025-01-17T12:24:47.397007356Z" level=info msg="CreateContainer within sandbox \"57201f9fbf5e6de3c6d20ea5441e5930add1defbc3787e41129dff7dd26168c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4fff0912be0895d846fbd94c263faafd82139dab930a2a47788b83d11c5a12be\""
Jan 17 12:24:47.397237 containerd[1827]: time="2025-01-17T12:24:47.397192602Z" level=info msg="StartContainer for \"4fff0912be0895d846fbd94c263faafd82139dab930a2a47788b83d11c5a12be\""
Jan 17 12:24:47.425299 systemd[1]: Started cri-containerd-4fff0912be0895d846fbd94c263faafd82139dab930a2a47788b83d11c5a12be.scope - libcontainer container 4fff0912be0895d846fbd94c263faafd82139dab930a2a47788b83d11c5a12be.
Jan 17 12:24:47.436679 containerd[1827]: time="2025-01-17T12:24:47.436617514Z" level=info msg="StartContainer for \"4fff0912be0895d846fbd94c263faafd82139dab930a2a47788b83d11c5a12be\" returns successfully"
Jan 17 12:24:47.938413 kubelet[3088]: I0117 12:24:47.938380 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-c7cj9" podStartSLOduration=1.208618189 podStartE2EDuration="5.938368681s" podCreationTimestamp="2025-01-17 12:24:42 +0000 UTC" firstStartedPulling="2025-01-17 12:24:42.662887608 +0000 UTC m=+7.138268789" lastFinishedPulling="2025-01-17 12:24:47.392638107 +0000 UTC m=+11.868019281" observedRunningTime="2025-01-17 12:24:47.619508013 +0000 UTC m=+12.094889203" watchObservedRunningTime="2025-01-17 12:24:47.938368681 +0000 UTC m=+12.413749856"
Jan 17 12:24:48.543688 update_engine[1814]: I20250117 12:24:48.543581 1814 update_attempter.cc:509] Updating boot flags...
Jan 17 12:24:48.582016 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (3573)
Jan 17 12:24:48.608064 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (3577)
Jan 17 12:24:50.417484 systemd[1]: Created slice kubepods-besteffort-poda449e018_a73b_424c_b3fd_b26eee217f9b.slice - libcontainer container kubepods-besteffort-poda449e018_a73b_424c_b3fd_b26eee217f9b.slice.
Jan 17 12:24:50.426360 systemd[1]: Created slice kubepods-besteffort-pod20b35924_de24_4cec_b1d2_f5717f2be163.slice - libcontainer container kubepods-besteffort-pod20b35924_de24_4cec_b1d2_f5717f2be163.slice.
Jan 17 12:24:50.449413 kubelet[3088]: E0117 12:24:50.449376 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqjfk" podUID="1b704db9-0e4b-4e28-94e0-d73625f21ba2"
Jan 17 12:24:50.474998 kubelet[3088]: I0117 12:24:50.474977 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-var-run-calico\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.474998 kubelet[3088]: I0117 12:24:50.475000 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-cni-net-dir\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475119 kubelet[3088]: I0117 12:24:50.475017 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4lj\" (UniqueName: \"kubernetes.io/projected/20b35924-de24-4cec-b1d2-f5717f2be163-kube-api-access-dh4lj\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475119 kubelet[3088]: I0117 12:24:50.475033 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg8tj\" (UniqueName: \"kubernetes.io/projected/a449e018-a73b-424c-b3fd-b26eee217f9b-kube-api-access-lg8tj\") pod \"calico-typha-bf9fc8c7-rx7nx\" (UID: \"a449e018-a73b-424c-b3fd-b26eee217f9b\") " pod="calico-system/calico-typha-bf9fc8c7-rx7nx"
Jan 17 12:24:50.475119 kubelet[3088]: I0117 12:24:50.475042 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-lib-modules\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475119 kubelet[3088]: I0117 12:24:50.475050 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-var-lib-calico\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475119 kubelet[3088]: I0117 12:24:50.475060 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-cni-bin-dir\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475220 kubelet[3088]: I0117 12:24:50.475069 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a449e018-a73b-424c-b3fd-b26eee217f9b-tigera-ca-bundle\") pod \"calico-typha-bf9fc8c7-rx7nx\" (UID: \"a449e018-a73b-424c-b3fd-b26eee217f9b\") " pod="calico-system/calico-typha-bf9fc8c7-rx7nx"
Jan 17 12:24:50.475220 kubelet[3088]: I0117 12:24:50.475080 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-flexvol-driver-host\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475220 kubelet[3088]: I0117 12:24:50.475097 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b704db9-0e4b-4e28-94e0-d73625f21ba2-kubelet-dir\") pod \"csi-node-driver-rqjfk\" (UID: \"1b704db9-0e4b-4e28-94e0-d73625f21ba2\") " pod="calico-system/csi-node-driver-rqjfk"
Jan 17 12:24:50.475220 kubelet[3088]: I0117 12:24:50.475106 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1b704db9-0e4b-4e28-94e0-d73625f21ba2-socket-dir\") pod \"csi-node-driver-rqjfk\" (UID: \"1b704db9-0e4b-4e28-94e0-d73625f21ba2\") " pod="calico-system/csi-node-driver-rqjfk"
Jan 17 12:24:50.475220 kubelet[3088]: I0117 12:24:50.475116 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1b704db9-0e4b-4e28-94e0-d73625f21ba2-registration-dir\") pod \"csi-node-driver-rqjfk\" (UID: \"1b704db9-0e4b-4e28-94e0-d73625f21ba2\") " pod="calico-system/csi-node-driver-rqjfk"
Jan 17 12:24:50.475306 kubelet[3088]: I0117 12:24:50.475127 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-xtables-lock\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475306 kubelet[3088]: I0117 12:24:50.475135 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20b35924-de24-4cec-b1d2-f5717f2be163-tigera-ca-bundle\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475306 kubelet[3088]: I0117 12:24:50.475144 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1b704db9-0e4b-4e28-94e0-d73625f21ba2-varrun\") pod \"csi-node-driver-rqjfk\" (UID: \"1b704db9-0e4b-4e28-94e0-d73625f21ba2\") " pod="calico-system/csi-node-driver-rqjfk"
Jan 17 12:24:50.475306 kubelet[3088]: I0117 12:24:50.475158 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-policysync\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475306 kubelet[3088]: I0117 12:24:50.475182 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/20b35924-de24-4cec-b1d2-f5717f2be163-node-certs\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475386 kubelet[3088]: I0117 12:24:50.475210 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a449e018-a73b-424c-b3fd-b26eee217f9b-typha-certs\") pod \"calico-typha-bf9fc8c7-rx7nx\" (UID: \"a449e018-a73b-424c-b3fd-b26eee217f9b\") " pod="calico-system/calico-typha-bf9fc8c7-rx7nx"
Jan 17 12:24:50.475386 kubelet[3088]: I0117 12:24:50.475247 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/20b35924-de24-4cec-b1d2-f5717f2be163-cni-log-dir\") pod \"calico-node-8hvpl\" (UID: \"20b35924-de24-4cec-b1d2-f5717f2be163\") " pod="calico-system/calico-node-8hvpl"
Jan 17 12:24:50.475386 kubelet[3088]: I0117 12:24:50.475265 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d75dv\" (UniqueName: \"kubernetes.io/projected/1b704db9-0e4b-4e28-94e0-d73625f21ba2-kube-api-access-d75dv\") pod \"csi-node-driver-rqjfk\" (UID: \"1b704db9-0e4b-4e28-94e0-d73625f21ba2\") " pod="calico-system/csi-node-driver-rqjfk"
Jan 17 12:24:50.580212 kubelet[3088]: E0117 12:24:50.580069 3088 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:24:50.580212 kubelet[3088]: W0117 12:24:50.580170 3088 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:24:50.580627 kubelet[3088]: E0117 12:24:50.580291 3088 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:24:50.580904 kubelet[3088]: E0117 12:24:50.580868 3088 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:24:50.581065 kubelet[3088]: W0117 12:24:50.580901 3088 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:24:50.581065 kubelet[3088]: E0117 12:24:50.580949 3088 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:24:50.584316 kubelet[3088]: E0117 12:24:50.584264 3088 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:24:50.584316 kubelet[3088]: W0117 12:24:50.584307 3088 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:24:50.584775 kubelet[3088]: E0117 12:24:50.584359 3088 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:24:50.584884 kubelet[3088]: E0117 12:24:50.584812 3088 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:24:50.584884 kubelet[3088]: W0117 12:24:50.584849 3088 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:24:50.585091 kubelet[3088]: E0117 12:24:50.584882 3088 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:24:50.596591 kubelet[3088]: E0117 12:24:50.596529 3088 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:24:50.596591 kubelet[3088]: W0117 12:24:50.596575 3088 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:24:50.597091 kubelet[3088]: E0117 12:24:50.596631 3088 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:24:50.597681 kubelet[3088]: E0117 12:24:50.597599 3088 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:24:50.597681 kubelet[3088]: W0117 12:24:50.597645 3088 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:24:50.597681 kubelet[3088]: E0117 12:24:50.597680 3088 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:24:50.599437 kubelet[3088]: E0117 12:24:50.599358 3088 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:24:50.599437 kubelet[3088]: W0117 12:24:50.599395 3088 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:24:50.599437 kubelet[3088]: E0117 12:24:50.599430 3088 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:24:50.724048 containerd[1827]: time="2025-01-17T12:24:50.723767780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf9fc8c7-rx7nx,Uid:a449e018-a73b-424c-b3fd-b26eee217f9b,Namespace:calico-system,Attempt:0,}"
Jan 17 12:24:50.729448 containerd[1827]: time="2025-01-17T12:24:50.729401757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hvpl,Uid:20b35924-de24-4cec-b1d2-f5717f2be163,Namespace:calico-system,Attempt:0,}"
Jan 17 12:24:50.735464 containerd[1827]: time="2025-01-17T12:24:50.735383745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:24:50.735687 containerd[1827]: time="2025-01-17T12:24:50.735437071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:24:50.735687 containerd[1827]: time="2025-01-17T12:24:50.735656448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:50.735740 containerd[1827]: time="2025-01-17T12:24:50.735699889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:50.748130 containerd[1827]: time="2025-01-17T12:24:50.748088399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:24:50.748130 containerd[1827]: time="2025-01-17T12:24:50.748119428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:24:50.748130 containerd[1827]: time="2025-01-17T12:24:50.748126626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:50.748262 containerd[1827]: time="2025-01-17T12:24:50.748168603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:50.750141 systemd[1]: Started cri-containerd-c76d4fad5170dc22fea4fe57f40a0ca94b412ef21ed5cdcbd647b5eabcef5946.scope - libcontainer container c76d4fad5170dc22fea4fe57f40a0ca94b412ef21ed5cdcbd647b5eabcef5946.
Jan 17 12:24:50.754755 systemd[1]: Started cri-containerd-1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737.scope - libcontainer container 1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737.
Jan 17 12:24:50.765713 containerd[1827]: time="2025-01-17T12:24:50.765685441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hvpl,Uid:20b35924-de24-4cec-b1d2-f5717f2be163,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737\""
Jan 17 12:24:50.766569 containerd[1827]: time="2025-01-17T12:24:50.766553832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 17 12:24:50.777081 containerd[1827]: time="2025-01-17T12:24:50.777050138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf9fc8c7-rx7nx,Uid:a449e018-a73b-424c-b3fd-b26eee217f9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c76d4fad5170dc22fea4fe57f40a0ca94b412ef21ed5cdcbd647b5eabcef5946\""
Jan 17 12:24:51.985473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186337858.mount: Deactivated successfully.
Jan 17 12:24:52.027357 containerd[1827]: time="2025-01-17T12:24:52.027331647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:52.027601 containerd[1827]: time="2025-01-17T12:24:52.027530995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 17 12:24:52.027965 containerd[1827]: time="2025-01-17T12:24:52.027949183Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:52.028880 containerd[1827]: time="2025-01-17T12:24:52.028867168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:52.029371 containerd[1827]: time="2025-01-17T12:24:52.029329486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.262756795s"
Jan 17 12:24:52.029371 containerd[1827]: time="2025-01-17T12:24:52.029344579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 17 12:24:52.029826 containerd[1827]: time="2025-01-17T12:24:52.029815966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 17 12:24:52.030512 containerd[1827]: time="2025-01-17T12:24:52.030498975Z" level=info msg="CreateContainer within sandbox \"1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:24:52.036685 containerd[1827]: time="2025-01-17T12:24:52.036642380Z" level=info msg="CreateContainer within sandbox \"1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e\""
Jan 17 12:24:52.036904 containerd[1827]: time="2025-01-17T12:24:52.036893342Z" level=info msg="StartContainer for \"a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e\""
Jan 17 12:24:52.061395 systemd[1]: Started cri-containerd-a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e.scope - libcontainer container a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e.
Jan 17 12:24:52.074818 containerd[1827]: time="2025-01-17T12:24:52.074789933Z" level=info msg="StartContainer for \"a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e\" returns successfully"
Jan 17 12:24:52.081593 systemd[1]: cri-containerd-a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e.scope: Deactivated successfully.
Jan 17 12:24:52.327147 containerd[1827]: time="2025-01-17T12:24:52.327042928Z" level=info msg="shim disconnected" id=a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e namespace=k8s.io
Jan 17 12:24:52.327147 containerd[1827]: time="2025-01-17T12:24:52.327075833Z" level=warning msg="cleaning up after shim disconnected" id=a84de9b29b3c220a4e4fc9309d848c20a146af84a6ad10d5a989cfe44a69077e namespace=k8s.io
Jan 17 12:24:52.327147 containerd[1827]: time="2025-01-17T12:24:52.327082188Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:24:52.577032 kubelet[3088]: E0117 12:24:52.576914 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqjfk" podUID="1b704db9-0e4b-4e28-94e0-d73625f21ba2"
Jan 17 12:24:53.525260 containerd[1827]: time="2025-01-17T12:24:53.525204090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:53.525473 containerd[1827]: time="2025-01-17T12:24:53.525430739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 17 12:24:53.525758 containerd[1827]: time="2025-01-17T12:24:53.525710506Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:53.526699 containerd[1827]: time="2025-01-17T12:24:53.526658096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:53.527140 containerd[1827]: time="2025-01-17T12:24:53.527098142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.49726934s"
Jan 17 12:24:53.527140 containerd[1827]: time="2025-01-17T12:24:53.527115805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 17 12:24:53.527686 containerd[1827]: time="2025-01-17T12:24:53.527646498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 17 12:24:53.530537 containerd[1827]: time="2025-01-17T12:24:53.530522352Z" level=info msg="CreateContainer within sandbox \"c76d4fad5170dc22fea4fe57f40a0ca94b412ef21ed5cdcbd647b5eabcef5946\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 12:24:53.535981 containerd[1827]: time="2025-01-17T12:24:53.535939831Z" level=info msg="CreateContainer within sandbox \"c76d4fad5170dc22fea4fe57f40a0ca94b412ef21ed5cdcbd647b5eabcef5946\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"38e9091ae6fa6b0b69775cbc3fcaab91fa93799ae96641f949e9e1a3b0948427\""
Jan 17 12:24:53.536170 containerd[1827]: time="2025-01-17T12:24:53.536146738Z" level=info msg="StartContainer for \"38e9091ae6fa6b0b69775cbc3fcaab91fa93799ae96641f949e9e1a3b0948427\""
Jan 17 12:24:53.570291 systemd[1]: Started cri-containerd-38e9091ae6fa6b0b69775cbc3fcaab91fa93799ae96641f949e9e1a3b0948427.scope - libcontainer container 38e9091ae6fa6b0b69775cbc3fcaab91fa93799ae96641f949e9e1a3b0948427.
Jan 17 12:24:53.610588 containerd[1827]: time="2025-01-17T12:24:53.610557519Z" level=info msg="StartContainer for \"38e9091ae6fa6b0b69775cbc3fcaab91fa93799ae96641f949e9e1a3b0948427\" returns successfully"
Jan 17 12:24:53.652758 kubelet[3088]: I0117 12:24:53.652605 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bf9fc8c7-rx7nx" podStartSLOduration=0.902624148 podStartE2EDuration="3.652557316s" podCreationTimestamp="2025-01-17 12:24:50 +0000 UTC" firstStartedPulling="2025-01-17 12:24:50.777640258 +0000 UTC m=+15.253021437" lastFinishedPulling="2025-01-17 12:24:53.52757343 +0000 UTC m=+18.002954605" observedRunningTime="2025-01-17 12:24:53.652104168 +0000 UTC m=+18.127485447" watchObservedRunningTime="2025-01-17 12:24:53.652557316 +0000 UTC m=+18.127938593"
Jan 17 12:24:54.576147 kubelet[3088]: E0117 12:24:54.576088 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqjfk" podUID="1b704db9-0e4b-4e28-94e0-d73625f21ba2"
Jan 17 12:24:54.635959 kubelet[3088]: I0117 12:24:54.635904 3088 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:24:56.155938 containerd[1827]: time="2025-01-17T12:24:56.155883004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:56.156146 containerd[1827]: time="2025-01-17T12:24:56.156117315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 17 12:24:56.156476 containerd[1827]: time="2025-01-17T12:24:56.156430615Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:56.157457 containerd[1827]: time="2025-01-17T12:24:56.157415090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:56.158213 containerd[1827]: time="2025-01-17T12:24:56.158170781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.630508359s"
Jan 17 12:24:56.158213 containerd[1827]: time="2025-01-17T12:24:56.158188184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 17 12:24:56.159066 containerd[1827]: time="2025-01-17T12:24:56.159051692Z" level=info msg="CreateContainer within sandbox \"1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 12:24:56.164724 containerd[1827]: time="2025-01-17T12:24:56.164682125Z" level=info msg="CreateContainer within sandbox \"1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04\""
Jan 17 12:24:56.164929 containerd[1827]: time="2025-01-17T12:24:56.164918234Z" level=info msg="StartContainer for \"ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04\""
Jan 17 12:24:56.187380 systemd[1]: Started cri-containerd-ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04.scope - libcontainer container ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04.
Jan 17 12:24:56.203765 containerd[1827]: time="2025-01-17T12:24:56.203738610Z" level=info msg="StartContainer for \"ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04\" returns successfully"
Jan 17 12:24:56.577273 kubelet[3088]: E0117 12:24:56.576972 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rqjfk" podUID="1b704db9-0e4b-4e28-94e0-d73625f21ba2"
Jan 17 12:24:56.729362 systemd[1]: cri-containerd-ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04.scope: Deactivated successfully.
Jan 17 12:24:56.738521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04-rootfs.mount: Deactivated successfully.
Jan 17 12:24:56.757884 kubelet[3088]: I0117 12:24:56.757865 3088 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 17 12:24:56.773282 systemd[1]: Created slice kubepods-besteffort-podfb8a82b8_47f1_4e69_9d26_f80868988f6e.slice - libcontainer container kubepods-besteffort-podfb8a82b8_47f1_4e69_9d26_f80868988f6e.slice.
Jan 17 12:24:56.777214 systemd[1]: Created slice kubepods-besteffort-podbecb30b0_7deb_4ba2_a030_4d72b985fd42.slice - libcontainer container kubepods-besteffort-podbecb30b0_7deb_4ba2_a030_4d72b985fd42.slice.
Jan 17 12:24:56.781420 systemd[1]: Created slice kubepods-burstable-podd83baa3e_9ec1_4a20_b6b3_e4cd2fa744b7.slice - libcontainer container kubepods-burstable-podd83baa3e_9ec1_4a20_b6b3_e4cd2fa744b7.slice.
Jan 17 12:24:56.784536 systemd[1]: Created slice kubepods-besteffort-podfb43c142_a013_4f25_834a_935fd8e973a8.slice - libcontainer container kubepods-besteffort-podfb43c142_a013_4f25_834a_935fd8e973a8.slice. Jan 17 12:24:56.787883 systemd[1]: Created slice kubepods-burstable-pod9f91e660_ed9d_4c6c_8756_9789d37e6a0c.slice - libcontainer container kubepods-burstable-pod9f91e660_ed9d_4c6c_8756_9789d37e6a0c.slice. Jan 17 12:24:56.821746 kubelet[3088]: I0117 12:24:56.821692 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb43c142-a013-4f25-834a-935fd8e973a8-calico-apiserver-certs\") pod \"calico-apiserver-7bc754cd95-fjcl8\" (UID: \"fb43c142-a013-4f25-834a-935fd8e973a8\") " pod="calico-apiserver/calico-apiserver-7bc754cd95-fjcl8" Jan 17 12:24:56.821746 kubelet[3088]: I0117 12:24:56.821728 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f91e660-ed9d-4c6c-8756-9789d37e6a0c-config-volume\") pod \"coredns-6f6b679f8f-b6qr5\" (UID: \"9f91e660-ed9d-4c6c-8756-9789d37e6a0c\") " pod="kube-system/coredns-6f6b679f8f-b6qr5" Jan 17 12:24:56.821746 kubelet[3088]: I0117 12:24:56.821747 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8sc\" (UniqueName: \"kubernetes.io/projected/fb8a82b8-47f1-4e69-9d26-f80868988f6e-kube-api-access-qp8sc\") pod \"calico-kube-controllers-6dd9787d-lhszl\" (UID: \"fb8a82b8-47f1-4e69-9d26-f80868988f6e\") " pod="calico-system/calico-kube-controllers-6dd9787d-lhszl" Jan 17 12:24:56.821914 kubelet[3088]: I0117 12:24:56.821764 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjrp\" (UniqueName: \"kubernetes.io/projected/becb30b0-7deb-4ba2-a030-4d72b985fd42-kube-api-access-htjrp\") pod 
\"calico-apiserver-7bc754cd95-7szqj\" (UID: \"becb30b0-7deb-4ba2-a030-4d72b985fd42\") " pod="calico-apiserver/calico-apiserver-7bc754cd95-7szqj" Jan 17 12:24:56.821914 kubelet[3088]: I0117 12:24:56.821783 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xth7g\" (UniqueName: \"kubernetes.io/projected/d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7-kube-api-access-xth7g\") pod \"coredns-6f6b679f8f-qbhrr\" (UID: \"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7\") " pod="kube-system/coredns-6f6b679f8f-qbhrr" Jan 17 12:24:56.821914 kubelet[3088]: I0117 12:24:56.821874 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/becb30b0-7deb-4ba2-a030-4d72b985fd42-calico-apiserver-certs\") pod \"calico-apiserver-7bc754cd95-7szqj\" (UID: \"becb30b0-7deb-4ba2-a030-4d72b985fd42\") " pod="calico-apiserver/calico-apiserver-7bc754cd95-7szqj" Jan 17 12:24:56.821914 kubelet[3088]: I0117 12:24:56.821905 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqv4\" (UniqueName: \"kubernetes.io/projected/9f91e660-ed9d-4c6c-8756-9789d37e6a0c-kube-api-access-rqqv4\") pod \"coredns-6f6b679f8f-b6qr5\" (UID: \"9f91e660-ed9d-4c6c-8756-9789d37e6a0c\") " pod="kube-system/coredns-6f6b679f8f-b6qr5" Jan 17 12:24:56.822078 kubelet[3088]: I0117 12:24:56.821926 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7-config-volume\") pod \"coredns-6f6b679f8f-qbhrr\" (UID: \"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7\") " pod="kube-system/coredns-6f6b679f8f-qbhrr" Jan 17 12:24:56.822078 kubelet[3088]: I0117 12:24:56.821952 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fb8a82b8-47f1-4e69-9d26-f80868988f6e-tigera-ca-bundle\") pod \"calico-kube-controllers-6dd9787d-lhszl\" (UID: \"fb8a82b8-47f1-4e69-9d26-f80868988f6e\") " pod="calico-system/calico-kube-controllers-6dd9787d-lhszl" Jan 17 12:24:56.822078 kubelet[3088]: I0117 12:24:56.821972 3088 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjc6\" (UniqueName: \"kubernetes.io/projected/fb43c142-a013-4f25-834a-935fd8e973a8-kube-api-access-hkjc6\") pod \"calico-apiserver-7bc754cd95-fjcl8\" (UID: \"fb43c142-a013-4f25-834a-935fd8e973a8\") " pod="calico-apiserver/calico-apiserver-7bc754cd95-fjcl8" Jan 17 12:24:57.076870 containerd[1827]: time="2025-01-17T12:24:57.076731283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd9787d-lhszl,Uid:fb8a82b8-47f1-4e69-9d26-f80868988f6e,Namespace:calico-system,Attempt:0,}" Jan 17 12:24:57.080038 containerd[1827]: time="2025-01-17T12:24:57.079909028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-7szqj,Uid:becb30b0-7deb-4ba2-a030-4d72b985fd42,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:24:57.084279 containerd[1827]: time="2025-01-17T12:24:57.084173827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qbhrr,Uid:d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:57.087487 containerd[1827]: time="2025-01-17T12:24:57.087374109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-fjcl8,Uid:fb43c142-a013-4f25-834a-935fd8e973a8,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:24:57.090747 containerd[1827]: time="2025-01-17T12:24:57.090640542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b6qr5,Uid:9f91e660-ed9d-4c6c-8756-9789d37e6a0c,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:57.399344 containerd[1827]: time="2025-01-17T12:24:57.399309031Z" 
level=info msg="shim disconnected" id=ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04 namespace=k8s.io Jan 17 12:24:57.399344 containerd[1827]: time="2025-01-17T12:24:57.399342293Z" level=warning msg="cleaning up after shim disconnected" id=ba31a20e5f386bff44932671c944b323129481bb2bbdf236d002607a8f016d04 namespace=k8s.io Jan 17 12:24:57.399563 containerd[1827]: time="2025-01-17T12:24:57.399348492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:57.427091 containerd[1827]: time="2025-01-17T12:24:57.427060911Z" level=error msg="Failed to destroy network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427259 containerd[1827]: time="2025-01-17T12:24:57.427074563Z" level=error msg="Failed to destroy network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427259 containerd[1827]: time="2025-01-17T12:24:57.427078714Z" level=error msg="Failed to destroy network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427433 containerd[1827]: time="2025-01-17T12:24:57.427416438Z" level=error msg="encountered an error cleaning up failed sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427474 containerd[1827]: time="2025-01-17T12:24:57.427432403Z" level=error msg="encountered an error cleaning up failed sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427474 containerd[1827]: time="2025-01-17T12:24:57.427444481Z" level=error msg="encountered an error cleaning up failed sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427541 containerd[1827]: time="2025-01-17T12:24:57.427466246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b6qr5,Uid:9f91e660-ed9d-4c6c-8756-9789d37e6a0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427541 containerd[1827]: time="2025-01-17T12:24:57.427483326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-fjcl8,Uid:fb43c142-a013-4f25-834a-935fd8e973a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427541 containerd[1827]: time="2025-01-17T12:24:57.427453219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd9787d-lhszl,Uid:fb8a82b8-47f1-4e69-9d26-f80868988f6e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427657 kubelet[3088]: E0117 12:24:57.427636 3088 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427694 kubelet[3088]: E0117 12:24:57.427664 3088 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427694 kubelet[3088]: E0117 12:24:57.427679 3088 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6dd9787d-lhszl" Jan 17 12:24:57.427694 kubelet[3088]: E0117 12:24:57.427688 3088 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b6qr5" Jan 17 12:24:57.427752 kubelet[3088]: E0117 12:24:57.427692 3088 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dd9787d-lhszl" Jan 17 12:24:57.427752 kubelet[3088]: E0117 12:24:57.427699 3088 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b6qr5" Jan 17 12:24:57.427752 kubelet[3088]: E0117 12:24:57.427717 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dd9787d-lhszl_calico-system(fb8a82b8-47f1-4e69-9d26-f80868988f6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dd9787d-lhszl_calico-system(fb8a82b8-47f1-4e69-9d26-f80868988f6e)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dd9787d-lhszl" podUID="fb8a82b8-47f1-4e69-9d26-f80868988f6e" Jan 17 12:24:57.427818 kubelet[3088]: E0117 12:24:57.427717 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b6qr5_kube-system(9f91e660-ed9d-4c6c-8756-9789d37e6a0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b6qr5_kube-system(9f91e660-ed9d-4c6c-8756-9789d37e6a0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b6qr5" podUID="9f91e660-ed9d-4c6c-8756-9789d37e6a0c" Jan 17 12:24:57.427818 kubelet[3088]: E0117 12:24:57.427672 3088 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.427818 kubelet[3088]: E0117 12:24:57.427742 3088 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc754cd95-fjcl8" Jan 17 12:24:57.427893 kubelet[3088]: E0117 12:24:57.427751 3088 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc754cd95-fjcl8" Jan 17 12:24:57.427893 kubelet[3088]: E0117 12:24:57.427765 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc754cd95-fjcl8_calico-apiserver(fb43c142-a013-4f25-834a-935fd8e973a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc754cd95-fjcl8_calico-apiserver(fb43c142-a013-4f25-834a-935fd8e973a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc754cd95-fjcl8" podUID="fb43c142-a013-4f25-834a-935fd8e973a8" Jan 17 12:24:57.428395 containerd[1827]: time="2025-01-17T12:24:57.428380724Z" level=error msg="Failed to destroy network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.428611 containerd[1827]: time="2025-01-17T12:24:57.428599001Z" level=error msg="encountered an 
error cleaning up failed sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.428634 containerd[1827]: time="2025-01-17T12:24:57.428622211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qbhrr,Uid:d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.428723 kubelet[3088]: E0117 12:24:57.428705 3088 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.428757 kubelet[3088]: E0117 12:24:57.428733 3088 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qbhrr" Jan 17 12:24:57.428757 kubelet[3088]: E0117 12:24:57.428748 3088 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qbhrr" Jan 17 12:24:57.428835 kubelet[3088]: E0117 12:24:57.428774 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qbhrr_kube-system(d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qbhrr_kube-system(d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qbhrr" podUID="d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7" Jan 17 12:24:57.428893 containerd[1827]: time="2025-01-17T12:24:57.428758618Z" level=error msg="Failed to destroy network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.428945 containerd[1827]: time="2025-01-17T12:24:57.428930972Z" level=error msg="encountered an error cleaning up failed sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.428981 containerd[1827]: 
time="2025-01-17T12:24:57.428956617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-7szqj,Uid:becb30b0-7deb-4ba2-a030-4d72b985fd42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.429039 kubelet[3088]: E0117 12:24:57.429028 3088 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.429081 kubelet[3088]: E0117 12:24:57.429062 3088 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc754cd95-7szqj" Jan 17 12:24:57.429081 kubelet[3088]: E0117 12:24:57.429073 3088 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc754cd95-7szqj" Jan 17 12:24:57.429140 
kubelet[3088]: E0117 12:24:57.429089 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc754cd95-7szqj_calico-apiserver(becb30b0-7deb-4ba2-a030-4d72b985fd42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc754cd95-7szqj_calico-apiserver(becb30b0-7deb-4ba2-a030-4d72b985fd42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc754cd95-7szqj" podUID="becb30b0-7deb-4ba2-a030-4d72b985fd42" Jan 17 12:24:57.648767 containerd[1827]: time="2025-01-17T12:24:57.648674264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:24:57.649130 kubelet[3088]: I0117 12:24:57.648845 3088 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:24:57.649894 containerd[1827]: time="2025-01-17T12:24:57.649844574Z" level=info msg="StopPodSandbox for \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\"" Jan 17 12:24:57.649972 containerd[1827]: time="2025-01-17T12:24:57.649958466Z" level=info msg="Ensure that sandbox f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b in task-service has been cleanup successfully" Jan 17 12:24:57.650101 kubelet[3088]: I0117 12:24:57.650090 3088 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:24:57.650360 containerd[1827]: time="2025-01-17T12:24:57.650345704Z" level=info msg="StopPodSandbox for \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\"" Jan 17 
12:24:57.650464 containerd[1827]: time="2025-01-17T12:24:57.650452433Z" level=info msg="Ensure that sandbox ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae in task-service has been cleanup successfully" Jan 17 12:24:57.650526 kubelet[3088]: I0117 12:24:57.650516 3088 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:24:57.650781 containerd[1827]: time="2025-01-17T12:24:57.650767598Z" level=info msg="StopPodSandbox for \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\"" Jan 17 12:24:57.650893 containerd[1827]: time="2025-01-17T12:24:57.650879704Z" level=info msg="Ensure that sandbox 41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730 in task-service has been cleanup successfully" Jan 17 12:24:57.651034 kubelet[3088]: I0117 12:24:57.651025 3088 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:24:57.651296 containerd[1827]: time="2025-01-17T12:24:57.651278099Z" level=info msg="StopPodSandbox for \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\"" Jan 17 12:24:57.651449 containerd[1827]: time="2025-01-17T12:24:57.651436858Z" level=info msg="Ensure that sandbox a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984 in task-service has been cleanup successfully" Jan 17 12:24:57.651525 kubelet[3088]: I0117 12:24:57.651510 3088 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:24:57.651802 containerd[1827]: time="2025-01-17T12:24:57.651783142Z" level=info msg="StopPodSandbox for \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\"" Jan 17 12:24:57.651957 containerd[1827]: time="2025-01-17T12:24:57.651942987Z" level=info msg="Ensure that sandbox 
a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68 in task-service has been cleanup successfully" Jan 17 12:24:57.665516 containerd[1827]: time="2025-01-17T12:24:57.665487743Z" level=error msg="StopPodSandbox for \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\" failed" error="failed to destroy network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.665597 containerd[1827]: time="2025-01-17T12:24:57.665489112Z" level=error msg="StopPodSandbox for \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\" failed" error="failed to destroy network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.665597 containerd[1827]: time="2025-01-17T12:24:57.665566542Z" level=error msg="StopPodSandbox for \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\" failed" error="failed to destroy network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.665657 kubelet[3088]: E0117 12:24:57.665639 3088 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:24:57.665699 kubelet[3088]: E0117 12:24:57.665673 3088 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984"} Jan 17 12:24:57.665718 kubelet[3088]: E0117 12:24:57.665639 3088 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:24:57.665742 kubelet[3088]: E0117 12:24:57.665715 3088 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"becb30b0-7deb-4ba2-a030-4d72b985fd42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:57.665742 kubelet[3088]: E0117 12:24:57.665726 3088 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b"} Jan 17 12:24:57.665742 kubelet[3088]: E0117 12:24:57.665730 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"becb30b0-7deb-4ba2-a030-4d72b985fd42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc754cd95-7szqj" podUID="becb30b0-7deb-4ba2-a030-4d72b985fd42" Jan 17 12:24:57.665742 kubelet[3088]: E0117 12:24:57.665641 3088 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:24:57.665848 containerd[1827]: time="2025-01-17T12:24:57.665718937Z" level=error msg="StopPodSandbox for \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\" failed" error="failed to destroy network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.665871 kubelet[3088]: E0117 12:24:57.665743 3088 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb43c142-a013-4f25-834a-935fd8e973a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:57.665871 kubelet[3088]: E0117 12:24:57.665749 
3088 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730"} Jan 17 12:24:57.665871 kubelet[3088]: E0117 12:24:57.665754 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb43c142-a013-4f25-834a-935fd8e973a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc754cd95-fjcl8" podUID="fb43c142-a013-4f25-834a-935fd8e973a8" Jan 17 12:24:57.665871 kubelet[3088]: E0117 12:24:57.665760 3088 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:57.665972 kubelet[3088]: E0117 12:24:57.665769 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qbhrr" 
podUID="d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7" Jan 17 12:24:57.665972 kubelet[3088]: E0117 12:24:57.665774 3088 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:24:57.665972 kubelet[3088]: E0117 12:24:57.665784 3088 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae"} Jan 17 12:24:57.665972 kubelet[3088]: E0117 12:24:57.665795 3088 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb8a82b8-47f1-4e69-9d26-f80868988f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:57.666083 kubelet[3088]: E0117 12:24:57.665804 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb8a82b8-47f1-4e69-9d26-f80868988f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dd9787d-lhszl" 
podUID="fb8a82b8-47f1-4e69-9d26-f80868988f6e" Jan 17 12:24:57.666877 containerd[1827]: time="2025-01-17T12:24:57.666862078Z" level=error msg="StopPodSandbox for \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\" failed" error="failed to destroy network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:57.666949 kubelet[3088]: E0117 12:24:57.666937 3088 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:24:57.666974 kubelet[3088]: E0117 12:24:57.666953 3088 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68"} Jan 17 12:24:57.666974 kubelet[3088]: E0117 12:24:57.666967 3088 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f91e660-ed9d-4c6c-8756-9789d37e6a0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:57.667057 kubelet[3088]: E0117 12:24:57.666980 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"9f91e660-ed9d-4c6c-8756-9789d37e6a0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b6qr5" podUID="9f91e660-ed9d-4c6c-8756-9789d37e6a0c" Jan 17 12:24:58.166895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68-shm.mount: Deactivated successfully. Jan 17 12:24:58.166951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b-shm.mount: Deactivated successfully. Jan 17 12:24:58.166984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730-shm.mount: Deactivated successfully. Jan 17 12:24:58.167038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984-shm.mount: Deactivated successfully. Jan 17 12:24:58.167084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae-shm.mount: Deactivated successfully. Jan 17 12:24:58.579992 systemd[1]: Created slice kubepods-besteffort-pod1b704db9_0e4b_4e28_94e0_d73625f21ba2.slice - libcontainer container kubepods-besteffort-pod1b704db9_0e4b_4e28_94e0_d73625f21ba2.slice. 
Jan 17 12:24:58.581210 containerd[1827]: time="2025-01-17T12:24:58.581192143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqjfk,Uid:1b704db9-0e4b-4e28-94e0-d73625f21ba2,Namespace:calico-system,Attempt:0,}" Jan 17 12:24:58.611239 containerd[1827]: time="2025-01-17T12:24:58.611179787Z" level=error msg="Failed to destroy network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:58.611408 containerd[1827]: time="2025-01-17T12:24:58.611362032Z" level=error msg="encountered an error cleaning up failed sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:58.611408 containerd[1827]: time="2025-01-17T12:24:58.611397805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqjfk,Uid:1b704db9-0e4b-4e28-94e0-d73625f21ba2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:58.611605 kubelet[3088]: E0117 12:24:58.611553 3088 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:58.611605 kubelet[3088]: E0117 12:24:58.611598 3088 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqjfk" Jan 17 12:24:58.611673 kubelet[3088]: E0117 12:24:58.611612 3088 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rqjfk" Jan 17 12:24:58.611673 kubelet[3088]: E0117 12:24:58.611639 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rqjfk_calico-system(1b704db9-0e4b-4e28-94e0-d73625f21ba2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rqjfk_calico-system(1b704db9-0e4b-4e28-94e0-d73625f21ba2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqjfk" podUID="1b704db9-0e4b-4e28-94e0-d73625f21ba2" Jan 17 12:24:58.612661 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f-shm.mount: Deactivated successfully. Jan 17 12:24:58.656811 kubelet[3088]: I0117 12:24:58.656704 3088 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:24:58.658141 containerd[1827]: time="2025-01-17T12:24:58.658021057Z" level=info msg="StopPodSandbox for \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\"" Jan 17 12:24:58.658528 containerd[1827]: time="2025-01-17T12:24:58.658435919Z" level=info msg="Ensure that sandbox 449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f in task-service has been cleanup successfully" Jan 17 12:24:58.704976 containerd[1827]: time="2025-01-17T12:24:58.704951012Z" level=error msg="StopPodSandbox for \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\" failed" error="failed to destroy network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:58.705137 kubelet[3088]: E0117 12:24:58.705090 3088 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:24:58.705137 kubelet[3088]: E0117 12:24:58.705124 3088 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f"} Jan 17 12:24:58.705196 kubelet[3088]: E0117 12:24:58.705147 3088 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b704db9-0e4b-4e28-94e0-d73625f21ba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:58.705196 kubelet[3088]: E0117 12:24:58.705161 3088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b704db9-0e4b-4e28-94e0-d73625f21ba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rqjfk" podUID="1b704db9-0e4b-4e28-94e0-d73625f21ba2" Jan 17 12:25:00.742460 kubelet[3088]: I0117 12:25:00.742403 3088 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:25:00.830808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457948628.mount: Deactivated successfully. 
Jan 17 12:25:00.851593 containerd[1827]: time="2025-01-17T12:25:00.851543949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:00.851824 containerd[1827]: time="2025-01-17T12:25:00.851782357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:25:00.852121 containerd[1827]: time="2025-01-17T12:25:00.852081717Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:00.852948 containerd[1827]: time="2025-01-17T12:25:00.852908063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:00.853348 containerd[1827]: time="2025-01-17T12:25:00.853303546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.204540518s" Jan 17 12:25:00.853348 containerd[1827]: time="2025-01-17T12:25:00.853320526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:25:00.856576 containerd[1827]: time="2025-01-17T12:25:00.856531149Z" level=info msg="CreateContainer within sandbox \"1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:25:00.866855 containerd[1827]: time="2025-01-17T12:25:00.866811630Z" level=info 
msg="CreateContainer within sandbox \"1ec42de508e63cec0630c5f4ca947ec0fb7adb09be5cd0206ceafff6c4c72737\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ca441657d7ad09865b373d2e3d064748f2ffbf1d1d520aecc1f57d38b2df3e5d\"" Jan 17 12:25:00.867041 containerd[1827]: time="2025-01-17T12:25:00.867028791Z" level=info msg="StartContainer for \"ca441657d7ad09865b373d2e3d064748f2ffbf1d1d520aecc1f57d38b2df3e5d\"" Jan 17 12:25:00.888318 systemd[1]: Started cri-containerd-ca441657d7ad09865b373d2e3d064748f2ffbf1d1d520aecc1f57d38b2df3e5d.scope - libcontainer container ca441657d7ad09865b373d2e3d064748f2ffbf1d1d520aecc1f57d38b2df3e5d. Jan 17 12:25:00.902013 containerd[1827]: time="2025-01-17T12:25:00.901985521Z" level=info msg="StartContainer for \"ca441657d7ad09865b373d2e3d064748f2ffbf1d1d520aecc1f57d38b2df3e5d\" returns successfully" Jan 17 12:25:00.970969 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:25:00.971027 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 17 12:25:01.684521 kubelet[3088]: I0117 12:25:01.684482 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8hvpl" podStartSLOduration=1.597183491 podStartE2EDuration="11.684468487s" podCreationTimestamp="2025-01-17 12:24:50 +0000 UTC" firstStartedPulling="2025-01-17 12:24:50.766373416 +0000 UTC m=+15.241754599" lastFinishedPulling="2025-01-17 12:25:00.853658419 +0000 UTC m=+25.329039595" observedRunningTime="2025-01-17 12:25:01.684362627 +0000 UTC m=+26.159743805" watchObservedRunningTime="2025-01-17 12:25:01.684468487 +0000 UTC m=+26.159849660" Jan 17 12:25:02.298049 kernel: bpftool[4635]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:25:02.444617 systemd-networkd[1614]: vxlan.calico: Link UP Jan 17 12:25:02.444621 systemd-networkd[1614]: vxlan.calico: Gained carrier Jan 17 12:25:04.146280 systemd-networkd[1614]: vxlan.calico: Gained IPv6LL Jan 17 12:25:08.577957 containerd[1827]: time="2025-01-17T12:25:08.577862730Z" level=info msg="StopPodSandbox for \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\"" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.627 [INFO][4818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.627 [INFO][4818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" iface="eth0" netns="/var/run/netns/cni-70b729cd-bc40-6a24-faa6-6eb07d8285bb" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.628 [INFO][4818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" iface="eth0" netns="/var/run/netns/cni-70b729cd-bc40-6a24-faa6-6eb07d8285bb" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.628 [INFO][4818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" iface="eth0" netns="/var/run/netns/cni-70b729cd-bc40-6a24-faa6-6eb07d8285bb" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.628 [INFO][4818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.628 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.682 [INFO][4834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.682 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.682 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.690 [WARNING][4834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.690 [INFO][4834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.692 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:08.698433 containerd[1827]: 2025-01-17 12:25:08.696 [INFO][4818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:08.699497 containerd[1827]: time="2025-01-17T12:25:08.698619300Z" level=info msg="TearDown network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\" successfully" Jan 17 12:25:08.699497 containerd[1827]: time="2025-01-17T12:25:08.698666197Z" level=info msg="StopPodSandbox for \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\" returns successfully" Jan 17 12:25:08.699630 containerd[1827]: time="2025-01-17T12:25:08.699551373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd9787d-lhszl,Uid:fb8a82b8-47f1-4e69-9d26-f80868988f6e,Namespace:calico-system,Attempt:1,}" Jan 17 12:25:08.701105 systemd[1]: run-netns-cni\x2d70b729cd\x2dbc40\x2d6a24\x2dfaa6\x2d6eb07d8285bb.mount: Deactivated successfully. 
Jan 17 12:25:08.755181 systemd-networkd[1614]: cali533acc7f5f1: Link UP Jan 17 12:25:08.755311 systemd-networkd[1614]: cali533acc7f5f1: Gained carrier Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.721 [INFO][4850] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0 calico-kube-controllers-6dd9787d- calico-system fb8a82b8-47f1-4e69-9d26-f80868988f6e 753 0 2025-01-17 12:24:50 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dd9787d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-4c6521d577 calico-kube-controllers-6dd9787d-lhszl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali533acc7f5f1 [] []}} ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.721 [INFO][4850] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.735 [INFO][4869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" HandleID="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" 
Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.740 [INFO][4869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" HandleID="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-4c6521d577", "pod":"calico-kube-controllers-6dd9787d-lhszl", "timestamp":"2025-01-17 12:25:08.735351657 +0000 UTC"}, Hostname:"ci-4081.3.0-a-4c6521d577", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.740 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.740 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.740 [INFO][4869] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-4c6521d577' Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.741 [INFO][4869] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.743 [INFO][4869] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.745 [INFO][4869] ipam/ipam.go 489: Trying affinity for 192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.746 [INFO][4869] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.747 [INFO][4869] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.747 [INFO][4869] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.748 [INFO][4869] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.750 [INFO][4869] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.752 [INFO][4869] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.2.129/26] block=192.168.2.128/26 handle="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.752 [INFO][4869] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.129/26] handle="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.753 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:08.760865 containerd[1827]: 2025-01-17 12:25:08.753 [INFO][4869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.129/26] IPv6=[] ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" HandleID="k8s-pod-network.072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.761279 containerd[1827]: 2025-01-17 12:25:08.753 [INFO][4850] cni-plugin/k8s.go 386: Populated endpoint ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0", GenerateName:"calico-kube-controllers-6dd9787d-", Namespace:"calico-system", SelfLink:"", UID:"fb8a82b8-47f1-4e69-9d26-f80868988f6e", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd9787d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"", Pod:"calico-kube-controllers-6dd9787d-lhszl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali533acc7f5f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:08.761279 containerd[1827]: 2025-01-17 12:25:08.753 [INFO][4850] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.129/32] ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.761279 containerd[1827]: 2025-01-17 12:25:08.754 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali533acc7f5f1 ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.761279 containerd[1827]: 2025-01-17 12:25:08.755 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" 
Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.761279 containerd[1827]: 2025-01-17 12:25:08.755 [INFO][4850] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0", GenerateName:"calico-kube-controllers-6dd9787d-", Namespace:"calico-system", SelfLink:"", UID:"fb8a82b8-47f1-4e69-9d26-f80868988f6e", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd9787d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c", Pod:"calico-kube-controllers-6dd9787d-lhszl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali533acc7f5f1", MAC:"32:63:b1:77:7a:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:08.761279 containerd[1827]: 2025-01-17 12:25:08.760 [INFO][4850] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c" Namespace="calico-system" Pod="calico-kube-controllers-6dd9787d-lhszl" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:08.770132 containerd[1827]: time="2025-01-17T12:25:08.769902132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:08.770132 containerd[1827]: time="2025-01-17T12:25:08.770120019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:08.770132 containerd[1827]: time="2025-01-17T12:25:08.770128234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:08.770239 containerd[1827]: time="2025-01-17T12:25:08.770170722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:08.797565 systemd[1]: Started cri-containerd-072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c.scope - libcontainer container 072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c. 
Jan 17 12:25:08.834425 containerd[1827]: time="2025-01-17T12:25:08.834328307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd9787d-lhszl,Uid:fb8a82b8-47f1-4e69-9d26-f80868988f6e,Namespace:calico-system,Attempt:1,} returns sandbox id \"072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c\"" Jan 17 12:25:08.835227 containerd[1827]: time="2025-01-17T12:25:08.835210213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:25:09.578545 containerd[1827]: time="2025-01-17T12:25:09.578391265Z" level=info msg="StopPodSandbox for \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\"" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.618 [INFO][4961] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.618 [INFO][4961] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" iface="eth0" netns="/var/run/netns/cni-3bd98c20-f873-adeb-8fc8-ee7560888f40" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.619 [INFO][4961] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" iface="eth0" netns="/var/run/netns/cni-3bd98c20-f873-adeb-8fc8-ee7560888f40" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.619 [INFO][4961] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" iface="eth0" netns="/var/run/netns/cni-3bd98c20-f873-adeb-8fc8-ee7560888f40" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.619 [INFO][4961] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.620 [INFO][4961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.634 [INFO][4973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.634 [INFO][4973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.634 [INFO][4973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.638 [WARNING][4973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.638 [INFO][4973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.639 [INFO][4973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:09.640827 containerd[1827]: 2025-01-17 12:25:09.640 [INFO][4961] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:09.641469 containerd[1827]: time="2025-01-17T12:25:09.640907115Z" level=info msg="TearDown network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\" successfully" Jan 17 12:25:09.641469 containerd[1827]: time="2025-01-17T12:25:09.640923049Z" level=info msg="StopPodSandbox for \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\" returns successfully" Jan 17 12:25:09.641469 containerd[1827]: time="2025-01-17T12:25:09.641413156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-7szqj,Uid:becb30b0-7deb-4ba2-a030-4d72b985fd42,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:25:09.696644 systemd-networkd[1614]: cali4f6432c2933: Link UP Jan 17 12:25:09.696777 systemd-networkd[1614]: cali4f6432c2933: Gained carrier Jan 17 12:25:09.701132 systemd[1]: run-netns-cni\x2d3bd98c20\x2df873\x2dadeb\x2d8fc8\x2dee7560888f40.mount: Deactivated successfully. 
Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.661 [INFO][4988] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0 calico-apiserver-7bc754cd95- calico-apiserver becb30b0-7deb-4ba2-a030-4d72b985fd42 760 0 2025-01-17 12:24:50 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bc754cd95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-4c6521d577 calico-apiserver-7bc754cd95-7szqj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f6432c2933 [] []}} ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.661 [INFO][4988] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.676 [INFO][5005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" HandleID="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.680 [INFO][5005] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" HandleID="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-4c6521d577", "pod":"calico-apiserver-7bc754cd95-7szqj", "timestamp":"2025-01-17 12:25:09.676000515 +0000 UTC"}, Hostname:"ci-4081.3.0-a-4c6521d577", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.680 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.680 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.680 [INFO][5005] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-4c6521d577' Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.681 [INFO][5005] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.683 [INFO][5005] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.686 [INFO][5005] ipam/ipam.go 489: Trying affinity for 192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.687 [INFO][5005] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.688 [INFO][5005] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.688 [INFO][5005] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.689 [INFO][5005] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.691 [INFO][5005] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.694 [INFO][5005] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.2.130/26] block=192.168.2.128/26 handle="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.694 [INFO][5005] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.130/26] handle="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.694 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:09.702635 containerd[1827]: 2025-01-17 12:25:09.694 [INFO][5005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.130/26] IPv6=[] ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" HandleID="k8s-pod-network.5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.703128 containerd[1827]: 2025-01-17 12:25:09.695 [INFO][4988] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"becb30b0-7deb-4ba2-a030-4d72b985fd42", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"", Pod:"calico-apiserver-7bc754cd95-7szqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f6432c2933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:09.703128 containerd[1827]: 2025-01-17 12:25:09.695 [INFO][4988] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.130/32] ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.703128 containerd[1827]: 2025-01-17 12:25:09.695 [INFO][4988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f6432c2933 ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.703128 containerd[1827]: 2025-01-17 12:25:09.696 [INFO][4988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" 
WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.703128 containerd[1827]: 2025-01-17 12:25:09.696 [INFO][4988] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"becb30b0-7deb-4ba2-a030-4d72b985fd42", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d", Pod:"calico-apiserver-7bc754cd95-7szqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f6432c2933", MAC:"8a:32:51:e4:2c:4a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:09.703128 containerd[1827]: 2025-01-17 12:25:09.701 [INFO][4988] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-7szqj" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:09.711924 containerd[1827]: time="2025-01-17T12:25:09.711884494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:09.712120 containerd[1827]: time="2025-01-17T12:25:09.712083084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:09.712120 containerd[1827]: time="2025-01-17T12:25:09.712095555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:09.712170 containerd[1827]: time="2025-01-17T12:25:09.712136225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:09.736252 systemd[1]: Started cri-containerd-5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d.scope - libcontainer container 5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d. 
Jan 17 12:25:09.762725 containerd[1827]: time="2025-01-17T12:25:09.762699681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-7szqj,Uid:becb30b0-7deb-4ba2-a030-4d72b985fd42,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d\"" Jan 17 12:25:10.610317 systemd-networkd[1614]: cali533acc7f5f1: Gained IPv6LL Jan 17 12:25:11.314101 systemd-networkd[1614]: cali4f6432c2933: Gained IPv6LL Jan 17 12:25:11.349574 containerd[1827]: time="2025-01-17T12:25:11.349551916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:11.349774 containerd[1827]: time="2025-01-17T12:25:11.349750745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:25:11.350015 containerd[1827]: time="2025-01-17T12:25:11.349997861Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:11.351134 containerd[1827]: time="2025-01-17T12:25:11.351090892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:11.351535 containerd[1827]: time="2025-01-17T12:25:11.351498830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.516267889s" Jan 17 12:25:11.351535 containerd[1827]: 
time="2025-01-17T12:25:11.351514508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:25:11.352064 containerd[1827]: time="2025-01-17T12:25:11.352012846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:25:11.354827 containerd[1827]: time="2025-01-17T12:25:11.354811621Z" level=info msg="CreateContainer within sandbox \"072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:25:11.358916 containerd[1827]: time="2025-01-17T12:25:11.358873205Z" level=info msg="CreateContainer within sandbox \"072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9a9a9f679927ffc273c8ccf91a64259602ab7c29f5a81fa0f8e845dd00524960\"" Jan 17 12:25:11.359120 containerd[1827]: time="2025-01-17T12:25:11.359084294Z" level=info msg="StartContainer for \"9a9a9f679927ffc273c8ccf91a64259602ab7c29f5a81fa0f8e845dd00524960\"" Jan 17 12:25:11.383305 systemd[1]: Started cri-containerd-9a9a9f679927ffc273c8ccf91a64259602ab7c29f5a81fa0f8e845dd00524960.scope - libcontainer container 9a9a9f679927ffc273c8ccf91a64259602ab7c29f5a81fa0f8e845dd00524960. 
Jan 17 12:25:11.406210 containerd[1827]: time="2025-01-17T12:25:11.406161019Z" level=info msg="StartContainer for \"9a9a9f679927ffc273c8ccf91a64259602ab7c29f5a81fa0f8e845dd00524960\" returns successfully" Jan 17 12:25:11.578519 containerd[1827]: time="2025-01-17T12:25:11.578237593Z" level=info msg="StopPodSandbox for \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\"" Jan 17 12:25:11.579168 containerd[1827]: time="2025-01-17T12:25:11.578230672Z" level=info msg="StopPodSandbox for \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\"" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5162] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5162] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" iface="eth0" netns="/var/run/netns/cni-ea94070d-4ade-4bf7-574c-676267d978ff" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5162] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" iface="eth0" netns="/var/run/netns/cni-ea94070d-4ade-4bf7-574c-676267d978ff" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5162] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" iface="eth0" netns="/var/run/netns/cni-ea94070d-4ade-4bf7-574c-676267d978ff" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5162] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.618 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.618 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.618 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.621 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.621 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.622 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:11.623800 containerd[1827]: 2025-01-17 12:25:11.623 [INFO][5162] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:11.624102 containerd[1827]: time="2025-01-17T12:25:11.623885777Z" level=info msg="TearDown network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\" successfully" Jan 17 12:25:11.624102 containerd[1827]: time="2025-01-17T12:25:11.623905807Z" level=info msg="StopPodSandbox for \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\" returns successfully" Jan 17 12:25:11.624297 containerd[1827]: time="2025-01-17T12:25:11.624284101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b6qr5,Uid:9f91e660-ed9d-4c6c-8756-9789d37e6a0c,Namespace:kube-system,Attempt:1,}" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5161] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5161] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" iface="eth0" netns="/var/run/netns/cni-a53cb8c1-5ac9-b6e1-de28-91e395a4351b" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5161] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" iface="eth0" netns="/var/run/netns/cni-a53cb8c1-5ac9-b6e1-de28-91e395a4351b" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5161] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" iface="eth0" netns="/var/run/netns/cni-a53cb8c1-5ac9-b6e1-de28-91e395a4351b" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5161] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.607 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.618 [INFO][5191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.618 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.622 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.625 [WARNING][5191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.625 [INFO][5191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.626 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:11.627888 containerd[1827]: 2025-01-17 12:25:11.627 [INFO][5161] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:11.628237 containerd[1827]: time="2025-01-17T12:25:11.627949992Z" level=info msg="TearDown network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\" successfully" Jan 17 12:25:11.628237 containerd[1827]: time="2025-01-17T12:25:11.627961816Z" level=info msg="StopPodSandbox for \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\" returns successfully" Jan 17 12:25:11.628443 containerd[1827]: time="2025-01-17T12:25:11.628385612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-fjcl8,Uid:fb43c142-a013-4f25-834a-935fd8e973a8,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:25:11.719725 systemd-networkd[1614]: cali6496b95da08: Link UP Jan 17 12:25:11.720698 systemd-networkd[1614]: cali6496b95da08: Gained carrier Jan 17 12:25:11.721183 kubelet[3088]: I0117 12:25:11.720245 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dd9787d-lhszl" podStartSLOduration=19.20329033 podStartE2EDuration="21.720191495s" podCreationTimestamp="2025-01-17 12:24:50 +0000 UTC" firstStartedPulling="2025-01-17 12:25:08.835046091 +0000 UTC m=+33.310427270" lastFinishedPulling="2025-01-17 12:25:11.351947259 +0000 UTC m=+35.827328435" observedRunningTime="2025-01-17 12:25:11.719364046 +0000 UTC m=+36.194745310" watchObservedRunningTime="2025-01-17 12:25:11.720191495 +0000 UTC m=+36.195572737" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.651 [INFO][5238] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0 calico-apiserver-7bc754cd95- calico-apiserver fb43c142-a013-4f25-834a-935fd8e973a8 775 0 2025-01-17 12:24:50 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:7bc754cd95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-4c6521d577 calico-apiserver-7bc754cd95-fjcl8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6496b95da08 [] []}} ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.651 [INFO][5238] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.664 [INFO][5272] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" HandleID="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.669 [INFO][5272] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" HandleID="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-4c6521d577", "pod":"calico-apiserver-7bc754cd95-fjcl8", "timestamp":"2025-01-17 
12:25:11.664020911 +0000 UTC"}, Hostname:"ci-4081.3.0-a-4c6521d577", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.669 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.669 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.669 [INFO][5272] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-4c6521d577' Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.670 [INFO][5272] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.672 [INFO][5272] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.674 [INFO][5272] ipam/ipam.go 489: Trying affinity for 192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.675 [INFO][5272] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.676 [INFO][5272] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.676 [INFO][5272] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 
2025-01-17 12:25:11.677 [INFO][5272] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.699 [INFO][5272] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.710 [INFO][5272] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.131/26] block=192.168.2.128/26 handle="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.710 [INFO][5272] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.131/26] handle="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.710 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:25:11.741246 containerd[1827]: 2025-01-17 12:25:11.711 [INFO][5272] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.131/26] IPv6=[] ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" HandleID="k8s-pod-network.a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.742101 containerd[1827]: 2025-01-17 12:25:11.714 [INFO][5238] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb43c142-a013-4f25-834a-935fd8e973a8", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"", Pod:"calico-apiserver-7bc754cd95-fjcl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6496b95da08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:11.742101 containerd[1827]: 2025-01-17 12:25:11.715 [INFO][5238] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.131/32] ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.742101 containerd[1827]: 2025-01-17 12:25:11.715 [INFO][5238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6496b95da08 ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.742101 containerd[1827]: 2025-01-17 12:25:11.720 [INFO][5238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.742101 containerd[1827]: 2025-01-17 12:25:11.720 [INFO][5238] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb43c142-a013-4f25-834a-935fd8e973a8", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d", Pod:"calico-apiserver-7bc754cd95-fjcl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6496b95da08", MAC:"8e:54:e4:a5:fd:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:11.742101 containerd[1827]: 2025-01-17 12:25:11.738 [INFO][5238] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc754cd95-fjcl8" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:11.752851 containerd[1827]: time="2025-01-17T12:25:11.752783197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:11.752851 containerd[1827]: time="2025-01-17T12:25:11.752812747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:11.752851 containerd[1827]: time="2025-01-17T12:25:11.752823658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:11.752962 containerd[1827]: time="2025-01-17T12:25:11.752870082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:11.758657 systemd[1]: Started cri-containerd-a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d.scope - libcontainer container a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d. Jan 17 12:25:11.781393 containerd[1827]: time="2025-01-17T12:25:11.781371629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc754cd95-fjcl8,Uid:fb43c142-a013-4f25-834a-935fd8e973a8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d\"" Jan 17 12:25:11.784798 systemd-networkd[1614]: cali04a8ab8d33b: Link UP Jan 17 12:25:11.784927 systemd-networkd[1614]: cali04a8ab8d33b: Gained carrier Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.649 [INFO][5226] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0 coredns-6f6b679f8f- kube-system 9f91e660-ed9d-4c6c-8756-9789d37e6a0c 776 0 2025-01-17 12:24:42 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-4c6521d577 coredns-6f6b679f8f-b6qr5 eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04a8ab8d33b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.649 [INFO][5226] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.664 [INFO][5271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" HandleID="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.669 [INFO][5271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" HandleID="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c7080), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-4c6521d577", "pod":"coredns-6f6b679f8f-b6qr5", "timestamp":"2025-01-17 12:25:11.664025204 +0000 UTC"}, Hostname:"ci-4081.3.0-a-4c6521d577", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:11.790374 containerd[1827]: 
2025-01-17 12:25:11.669 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.710 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.711 [INFO][5271] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-4c6521d577' Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.770 [INFO][5271] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.773 [INFO][5271] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.775 [INFO][5271] ipam/ipam.go 489: Trying affinity for 192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.776 [INFO][5271] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.777 [INFO][5271] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.777 [INFO][5271] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.778 [INFO][5271] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.780 [INFO][5271] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.2.128/26 handle="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.783 [INFO][5271] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.132/26] block=192.168.2.128/26 handle="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.783 [INFO][5271] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.132/26] handle="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.783 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:11.790374 containerd[1827]: 2025-01-17 12:25:11.783 [INFO][5271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.132/26] IPv6=[] ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" HandleID="k8s-pod-network.bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.790771 containerd[1827]: 2025-01-17 12:25:11.784 [INFO][5226] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9f91e660-ed9d-4c6c-8756-9789d37e6a0c", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 
17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"", Pod:"coredns-6f6b679f8f-b6qr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04a8ab8d33b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:11.790771 containerd[1827]: 2025-01-17 12:25:11.784 [INFO][5226] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.132/32] ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.790771 containerd[1827]: 2025-01-17 12:25:11.784 [INFO][5226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04a8ab8d33b ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.790771 containerd[1827]: 2025-01-17 12:25:11.784 [INFO][5226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.790771 containerd[1827]: 2025-01-17 12:25:11.785 [INFO][5226] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9f91e660-ed9d-4c6c-8756-9789d37e6a0c", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c", Pod:"coredns-6f6b679f8f-b6qr5", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04a8ab8d33b", MAC:"aa:07:dd:7c:ba:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:11.790771 containerd[1827]: 2025-01-17 12:25:11.789 [INFO][5226] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c" Namespace="kube-system" Pod="coredns-6f6b679f8f-b6qr5" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:11.799273 containerd[1827]: time="2025-01-17T12:25:11.799231206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:11.799273 containerd[1827]: time="2025-01-17T12:25:11.799262673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:11.799372 containerd[1827]: time="2025-01-17T12:25:11.799273854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:11.799372 containerd[1827]: time="2025-01-17T12:25:11.799324539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:11.818479 systemd[1]: Started cri-containerd-bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c.scope - libcontainer container bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c. Jan 17 12:25:11.897503 containerd[1827]: time="2025-01-17T12:25:11.897475789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b6qr5,Uid:9f91e660-ed9d-4c6c-8756-9789d37e6a0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c\"" Jan 17 12:25:11.898958 containerd[1827]: time="2025-01-17T12:25:11.898934681Z" level=info msg="CreateContainer within sandbox \"bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:25:11.904779 containerd[1827]: time="2025-01-17T12:25:11.904758691Z" level=info msg="CreateContainer within sandbox \"bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22e0913e1799dc231171dfaedbf053ed387403b252fb98d11e78c4008cfbf975\"" Jan 17 12:25:11.905036 containerd[1827]: time="2025-01-17T12:25:11.905024671Z" level=info msg="StartContainer for \"22e0913e1799dc231171dfaedbf053ed387403b252fb98d11e78c4008cfbf975\"" Jan 17 12:25:11.930398 systemd[1]: Started cri-containerd-22e0913e1799dc231171dfaedbf053ed387403b252fb98d11e78c4008cfbf975.scope - libcontainer container 22e0913e1799dc231171dfaedbf053ed387403b252fb98d11e78c4008cfbf975. Jan 17 12:25:11.984258 containerd[1827]: time="2025-01-17T12:25:11.984221280Z" level=info msg="StartContainer for \"22e0913e1799dc231171dfaedbf053ed387403b252fb98d11e78c4008cfbf975\" returns successfully" Jan 17 12:25:12.360782 systemd[1]: run-netns-cni\x2da53cb8c1\x2d5ac9\x2db6e1\x2dde28\x2d91e395a4351b.mount: Deactivated successfully. 
Jan 17 12:25:12.360852 systemd[1]: run-netns-cni\x2dea94070d\x2d4ade\x2d4bf7\x2d574c\x2d676267d978ff.mount: Deactivated successfully. Jan 17 12:25:12.577927 containerd[1827]: time="2025-01-17T12:25:12.577804548Z" level=info msg="StopPodSandbox for \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\"" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.620 [INFO][5494] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.620 [INFO][5494] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" iface="eth0" netns="/var/run/netns/cni-86ffc7b8-0c06-742f-353e-45d46b9a3cbd" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.620 [INFO][5494] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" iface="eth0" netns="/var/run/netns/cni-86ffc7b8-0c06-742f-353e-45d46b9a3cbd" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.620 [INFO][5494] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" iface="eth0" netns="/var/run/netns/cni-86ffc7b8-0c06-742f-353e-45d46b9a3cbd" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.620 [INFO][5494] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.620 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.632 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.632 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.632 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.636 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.636 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.637 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:12.638505 containerd[1827]: 2025-01-17 12:25:12.637 [INFO][5494] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:12.639001 containerd[1827]: time="2025-01-17T12:25:12.638642625Z" level=info msg="TearDown network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\" successfully" Jan 17 12:25:12.639001 containerd[1827]: time="2025-01-17T12:25:12.638679895Z" level=info msg="StopPodSandbox for \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\" returns successfully" Jan 17 12:25:12.639085 containerd[1827]: time="2025-01-17T12:25:12.639012121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qbhrr,Uid:d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7,Namespace:kube-system,Attempt:1,}" Jan 17 12:25:12.640460 systemd[1]: run-netns-cni\x2d86ffc7b8\x2d0c06\x2d742f\x2d353e\x2d45d46b9a3cbd.mount: Deactivated successfully. 
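The `run-netns-cni\x2d….mount` unit names in the entries above use systemd's unit-name escaping, where a literal `-` is encoded as `\x2d`. Decoding the `\xNN` escapes recovers the CNI netns name (a minimal sketch equivalent to what `systemd-escape -u` does for names without path separators):

```python
import re

def systemd_unescape(name: str) -> str:
    r"""Decode \xNN escape sequences in a systemd unit name."""
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)),
                  name)

unit = r"run-netns-cni\x2d86ffc7b8\x2d0c06\x2d742f\x2d353e\x2d45d46b9a3cbd.mount"
print(systemd_unescape(unit))
# run-netns-cni-86ffc7b8-0c06-742f-353e-45d46b9a3cbd.mount
```

The decoded middle component matches the `netns="/var/run/netns/cni-86ffc7b8-…"` path the Calico plugin logs during teardown.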
Jan 17 12:25:12.710775 systemd-networkd[1614]: cali9e6902ac5c9: Link UP Jan 17 12:25:12.711680 systemd-networkd[1614]: cali9e6902ac5c9: Gained carrier Jan 17 12:25:12.728696 kubelet[3088]: I0117 12:25:12.728534 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-b6qr5" podStartSLOduration=30.728491436 podStartE2EDuration="30.728491436s" podCreationTimestamp="2025-01-17 12:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:25:12.727999361 +0000 UTC m=+37.203380599" watchObservedRunningTime="2025-01-17 12:25:12.728491436 +0000 UTC m=+37.203872643" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.662 [INFO][5526] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0 coredns-6f6b679f8f- kube-system d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7 794 0 2025-01-17 12:24:42 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-4c6521d577 coredns-6f6b679f8f-qbhrr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e6902ac5c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.662 [INFO][5526] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" 
WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.676 [INFO][5552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" HandleID="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.681 [INFO][5552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" HandleID="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f55e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-4c6521d577", "pod":"coredns-6f6b679f8f-qbhrr", "timestamp":"2025-01-17 12:25:12.67603922 +0000 UTC"}, Hostname:"ci-4081.3.0-a-4c6521d577", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.681 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.681 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.681 [INFO][5552] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-4c6521d577' Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.682 [INFO][5552] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.684 [INFO][5552] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.686 [INFO][5552] ipam/ipam.go 489: Trying affinity for 192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.687 [INFO][5552] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.689 [INFO][5552] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.689 [INFO][5552] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.690 [INFO][5552] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.693 [INFO][5552] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.700 [INFO][5552] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.2.133/26] block=192.168.2.128/26 handle="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.701 [INFO][5552] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.133/26] handle="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.701 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:12.733141 containerd[1827]: 2025-01-17 12:25:12.701 [INFO][5552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.133/26] IPv6=[] ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" HandleID="k8s-pod-network.6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.734437 containerd[1827]: 2025-01-17 12:25:12.706 [INFO][5526] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"", Pod:"coredns-6f6b679f8f-qbhrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e6902ac5c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:12.734437 containerd[1827]: 2025-01-17 12:25:12.707 [INFO][5526] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.133/32] ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.734437 containerd[1827]: 2025-01-17 12:25:12.707 [INFO][5526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e6902ac5c9 ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.734437 containerd[1827]: 2025-01-17 12:25:12.711 [INFO][5526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.734437 containerd[1827]: 2025-01-17 12:25:12.711 [INFO][5526] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb", Pod:"coredns-6f6b679f8f-qbhrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e6902ac5c9", MAC:"2e:c4:59:cd:e0:58", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:12.734437 containerd[1827]: 2025-01-17 12:25:12.730 [INFO][5526] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb" Namespace="kube-system" Pod="coredns-6f6b679f8f-qbhrr" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:12.745671 containerd[1827]: time="2025-01-17T12:25:12.745590528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:12.745815 containerd[1827]: time="2025-01-17T12:25:12.745800826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:12.745815 containerd[1827]: time="2025-01-17T12:25:12.745810290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:12.745865 containerd[1827]: time="2025-01-17T12:25:12.745855353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:12.779268 systemd[1]: Started cri-containerd-6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb.scope - libcontainer container 6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb. 
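The `ipam/ipam.go` sequence above (acquire the host-wide IPAM lock, load the host's affine block `192.168.2.128/26`, claim the next address, write the block back) can be sketched as a much-simplified allocator. This is an editorial illustration of the pattern only: real Calico IPAM also handles affinity claims, datastore compare-and-swap retries, and handle tracking, none of which is shown.

```python
import ipaddress
import threading

class BlockAllocator:
    """Toy model of per-host block allocation as seen in the IPAM log lines."""

    def __init__(self, cidr: str):
        self.block = ipaddress.ip_network(cidr)
        self.used: set[ipaddress.IPv4Address] = set()
        # Stands in for "About to acquire host-wide IPAM lock."
        self.lock = threading.Lock()

    def auto_assign(self) -> ipaddress.IPv4Address:
        with self.lock:                    # "Acquired host-wide IPAM lock."
            for ip in self.block.hosts():  # scan the affine /26 block
                if ip not in self.used:
                    self.used.add(ip)      # "Writing block in order to claim IPs"
                    return ip
            raise RuntimeError("block exhausted")

alloc = BlockAllocator("192.168.2.128/26")
for _ in range(4):          # .129-.132 already claimed by earlier pods
    alloc.auto_assign()
print(alloc.auto_assign())  # 192.168.2.133, as claimed for coredns-6f6b679f8f-qbhrr
```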
Jan 17 12:25:12.786119 systemd-networkd[1614]: cali6496b95da08: Gained IPv6LL Jan 17 12:25:12.808589 containerd[1827]: time="2025-01-17T12:25:12.808529379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qbhrr,Uid:d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7,Namespace:kube-system,Attempt:1,} returns sandbox id \"6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb\"" Jan 17 12:25:12.810060 containerd[1827]: time="2025-01-17T12:25:12.810039097Z" level=info msg="CreateContainer within sandbox \"6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:25:12.816413 containerd[1827]: time="2025-01-17T12:25:12.816367492Z" level=info msg="CreateContainer within sandbox \"6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1bcc166eb9b19cb5388f9abe27b9fccbd49f81d49ea8a2283433dfdb4c970ed\"" Jan 17 12:25:12.816641 containerd[1827]: time="2025-01-17T12:25:12.816628082Z" level=info msg="StartContainer for \"f1bcc166eb9b19cb5388f9abe27b9fccbd49f81d49ea8a2283433dfdb4c970ed\"" Jan 17 12:25:12.841214 systemd[1]: Started cri-containerd-f1bcc166eb9b19cb5388f9abe27b9fccbd49f81d49ea8a2283433dfdb4c970ed.scope - libcontainer container f1bcc166eb9b19cb5388f9abe27b9fccbd49f81d49ea8a2283433dfdb4c970ed. 
Jan 17 12:25:12.852895 containerd[1827]: time="2025-01-17T12:25:12.852873340Z" level=info msg="StartContainer for \"f1bcc166eb9b19cb5388f9abe27b9fccbd49f81d49ea8a2283433dfdb4c970ed\" returns successfully" Jan 17 12:25:12.914510 systemd-networkd[1614]: cali04a8ab8d33b: Gained IPv6LL Jan 17 12:25:13.583977 containerd[1827]: time="2025-01-17T12:25:13.583953478Z" level=info msg="StopPodSandbox for \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\"" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.619 [INFO][5696] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.619 [INFO][5696] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" iface="eth0" netns="/var/run/netns/cni-849d9519-cbb4-f843-f63b-8263404642e3" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.620 [INFO][5696] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" iface="eth0" netns="/var/run/netns/cni-849d9519-cbb4-f843-f63b-8263404642e3" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.620 [INFO][5696] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" iface="eth0" netns="/var/run/netns/cni-849d9519-cbb4-f843-f63b-8263404642e3" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.620 [INFO][5696] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.620 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.658 [INFO][5710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.658 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.658 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.664 [WARNING][5710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.665 [INFO][5710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.666 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:13.669229 containerd[1827]: 2025-01-17 12:25:13.667 [INFO][5696] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:13.670161 containerd[1827]: time="2025-01-17T12:25:13.669372334Z" level=info msg="TearDown network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\" successfully" Jan 17 12:25:13.670161 containerd[1827]: time="2025-01-17T12:25:13.669400388Z" level=info msg="StopPodSandbox for \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\" returns successfully" Jan 17 12:25:13.670161 containerd[1827]: time="2025-01-17T12:25:13.670051823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqjfk,Uid:1b704db9-0e4b-4e28-94e0-d73625f21ba2,Namespace:calico-system,Attempt:1,}" Jan 17 12:25:13.673080 systemd[1]: run-netns-cni\x2d849d9519\x2dcbb4\x2df843\x2df63b\x2d8263404642e3.mount: Deactivated successfully. 
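The kubelet `pod_startup_latency_tracker` entries report a `podStartSLOduration` that is consistent with `watchObservedRunningTime` minus `podCreationTimestamp`. Reproducing the 30.728491436s figure for `coredns-6f6b679f8f-b6qr5` from the log's own timestamps (truncated to microseconds, which Python's `datetime` supports):

```python
from datetime import datetime, timezone

# podCreationTimestamp="2025-01-17 12:24:42 +0000 UTC"
created = datetime(2025, 1, 17, 12, 24, 42, tzinfo=timezone.utc)
# watchObservedRunningTime="2025-01-17 12:25:12.728491436 +0000 UTC"
observed = datetime(2025, 1, 17, 12, 25, 12, 728491, tzinfo=timezone.utc)

slo = (observed - created).total_seconds()
print(f"{slo:.6f}s")  # ~30.728491s, matching the logged podStartSLOduration
```

The zero-valued `firstStartedPulling`/`lastFinishedPulling` fields (`0001-01-01 00:00:00`) indicate the coredns image needed no pull, so no pull time contributes to the duration.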
Jan 17 12:25:13.723706 kubelet[3088]: I0117 12:25:13.723671 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qbhrr" podStartSLOduration=31.723658191 podStartE2EDuration="31.723658191s" podCreationTimestamp="2025-01-17 12:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:25:13.723547351 +0000 UTC m=+38.198928526" watchObservedRunningTime="2025-01-17 12:25:13.723658191 +0000 UTC m=+38.199039362" Jan 17 12:25:13.812948 systemd-networkd[1614]: calice849dcd52e: Link UP Jan 17 12:25:13.813069 systemd-networkd[1614]: calice849dcd52e: Gained carrier Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.758 [INFO][5730] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0 csi-node-driver- calico-system 1b704db9-0e4b-4e28-94e0-d73625f21ba2 812 0 2025-01-17 12:24:50 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-4c6521d577 csi-node-driver-rqjfk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calice849dcd52e [] []}} ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.758 [INFO][5730] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" 
Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.773 [INFO][5753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" HandleID="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.779 [INFO][5753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" HandleID="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-4c6521d577", "pod":"csi-node-driver-rqjfk", "timestamp":"2025-01-17 12:25:13.773675479 +0000 UTC"}, Hostname:"ci-4081.3.0-a-4c6521d577", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.779 [INFO][5753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.779 [INFO][5753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.779 [INFO][5753] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-4c6521d577' Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.780 [INFO][5753] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.782 [INFO][5753] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.784 [INFO][5753] ipam/ipam.go 489: Trying affinity for 192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.785 [INFO][5753] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.786 [INFO][5753] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.786 [INFO][5753] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.787 [INFO][5753] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28 Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.806 [INFO][5753] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.810 [INFO][5753] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.2.134/26] block=192.168.2.128/26 handle="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.810 [INFO][5753] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.134/26] handle="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" host="ci-4081.3.0-a-4c6521d577" Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.810 [INFO][5753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:13.818902 containerd[1827]: 2025-01-17 12:25:13.810 [INFO][5753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.134/26] IPv6=[] ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" HandleID="k8s-pod-network.8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.819347 containerd[1827]: 2025-01-17 12:25:13.811 [INFO][5730] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b704db9-0e4b-4e28-94e0-d73625f21ba2", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"", Pod:"csi-node-driver-rqjfk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice849dcd52e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:13.819347 containerd[1827]: 2025-01-17 12:25:13.812 [INFO][5730] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.134/32] ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.819347 containerd[1827]: 2025-01-17 12:25:13.812 [INFO][5730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice849dcd52e ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.819347 containerd[1827]: 2025-01-17 12:25:13.813 [INFO][5730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.819347 containerd[1827]: 2025-01-17 12:25:13.813 
[INFO][5730] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b704db9-0e4b-4e28-94e0-d73625f21ba2", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28", Pod:"csi-node-driver-rqjfk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice849dcd52e", MAC:"fa:38:df:be:1d:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:13.819347 containerd[1827]: 2025-01-17 12:25:13.818 [INFO][5730] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28" Namespace="calico-system" Pod="csi-node-driver-rqjfk" WorkloadEndpoint="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:13.828655 containerd[1827]: time="2025-01-17T12:25:13.828596243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:13.828838 containerd[1827]: time="2025-01-17T12:25:13.828819034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:13.828881 containerd[1827]: time="2025-01-17T12:25:13.828834664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:13.828922 containerd[1827]: time="2025-01-17T12:25:13.828880743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:13.848157 systemd[1]: Started cri-containerd-8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28.scope - libcontainer container 8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28. 
Jan 17 12:25:13.858568 containerd[1827]: time="2025-01-17T12:25:13.858542722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rqjfk,Uid:1b704db9-0e4b-4e28-94e0-d73625f21ba2,Namespace:calico-system,Attempt:1,} returns sandbox id \"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28\"" Jan 17 12:25:14.022264 containerd[1827]: time="2025-01-17T12:25:14.022240886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:14.022507 containerd[1827]: time="2025-01-17T12:25:14.022486720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:25:14.022813 containerd[1827]: time="2025-01-17T12:25:14.022799918Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:14.023823 containerd[1827]: time="2025-01-17T12:25:14.023808268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:14.024287 containerd[1827]: time="2025-01-17T12:25:14.024272108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.672244393s" Jan 17 12:25:14.024337 containerd[1827]: time="2025-01-17T12:25:14.024290258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 
12:25:14.024829 containerd[1827]: time="2025-01-17T12:25:14.024814180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:25:14.025349 containerd[1827]: time="2025-01-17T12:25:14.025334972Z" level=info msg="CreateContainer within sandbox \"5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:25:14.028887 containerd[1827]: time="2025-01-17T12:25:14.028849067Z" level=info msg="CreateContainer within sandbox \"5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95ce4b69baa499c1482162f1c478d9437636bcdcebd49684455f2e8cf59580d0\"" Jan 17 12:25:14.029147 containerd[1827]: time="2025-01-17T12:25:14.029133962Z" level=info msg="StartContainer for \"95ce4b69baa499c1482162f1c478d9437636bcdcebd49684455f2e8cf59580d0\"" Jan 17 12:25:14.045285 systemd[1]: Started cri-containerd-95ce4b69baa499c1482162f1c478d9437636bcdcebd49684455f2e8cf59580d0.scope - libcontainer container 95ce4b69baa499c1482162f1c478d9437636bcdcebd49684455f2e8cf59580d0. 
Jan 17 12:25:14.070769 containerd[1827]: time="2025-01-17T12:25:14.070747042Z" level=info msg="StartContainer for \"95ce4b69baa499c1482162f1c478d9437636bcdcebd49684455f2e8cf59580d0\" returns successfully" Jan 17 12:25:14.130144 systemd-networkd[1614]: cali9e6902ac5c9: Gained IPv6LL Jan 17 12:25:14.387891 containerd[1827]: time="2025-01-17T12:25:14.387784422Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:14.387976 containerd[1827]: time="2025-01-17T12:25:14.387952539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:25:14.389441 containerd[1827]: time="2025-01-17T12:25:14.389405047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 364.573018ms" Jan 17 12:25:14.389501 containerd[1827]: time="2025-01-17T12:25:14.389444048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:25:14.389947 containerd[1827]: time="2025-01-17T12:25:14.389936163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:25:14.390500 containerd[1827]: time="2025-01-17T12:25:14.390486686Z" level=info msg="CreateContainer within sandbox \"a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:25:14.394811 containerd[1827]: time="2025-01-17T12:25:14.394792391Z" level=info msg="CreateContainer within sandbox 
\"a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"baf7c47616194cf1a5191b4241f917fbbc39f5aa4497f72038202f477d0680c8\"" Jan 17 12:25:14.395179 containerd[1827]: time="2025-01-17T12:25:14.395121304Z" level=info msg="StartContainer for \"baf7c47616194cf1a5191b4241f917fbbc39f5aa4497f72038202f477d0680c8\"" Jan 17 12:25:14.418211 systemd[1]: Started cri-containerd-baf7c47616194cf1a5191b4241f917fbbc39f5aa4497f72038202f477d0680c8.scope - libcontainer container baf7c47616194cf1a5191b4241f917fbbc39f5aa4497f72038202f477d0680c8. Jan 17 12:25:14.446483 containerd[1827]: time="2025-01-17T12:25:14.446460156Z" level=info msg="StartContainer for \"baf7c47616194cf1a5191b4241f917fbbc39f5aa4497f72038202f477d0680c8\" returns successfully" Jan 17 12:25:14.724615 kubelet[3088]: I0117 12:25:14.724524 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bc754cd95-7szqj" podStartSLOduration=20.463085003 podStartE2EDuration="24.724512986s" podCreationTimestamp="2025-01-17 12:24:50 +0000 UTC" firstStartedPulling="2025-01-17 12:25:09.763310801 +0000 UTC m=+34.238691980" lastFinishedPulling="2025-01-17 12:25:14.024738783 +0000 UTC m=+38.500119963" observedRunningTime="2025-01-17 12:25:14.724408528 +0000 UTC m=+39.199789706" watchObservedRunningTime="2025-01-17 12:25:14.724512986 +0000 UTC m=+39.199894161" Jan 17 12:25:14.734879 kubelet[3088]: I0117 12:25:14.734837 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bc754cd95-fjcl8" podStartSLOduration=22.126877478 podStartE2EDuration="24.734822912s" podCreationTimestamp="2025-01-17 12:24:50 +0000 UTC" firstStartedPulling="2025-01-17 12:25:11.781933137 +0000 UTC m=+36.257314313" lastFinishedPulling="2025-01-17 12:25:14.389878572 +0000 UTC m=+38.865259747" observedRunningTime="2025-01-17 12:25:14.734754495 +0000 UTC 
m=+39.210135671" watchObservedRunningTime="2025-01-17 12:25:14.734822912 +0000 UTC m=+39.210204087" Jan 17 12:25:15.410543 systemd-networkd[1614]: calice849dcd52e: Gained IPv6LL Jan 17 12:25:15.710447 containerd[1827]: time="2025-01-17T12:25:15.710391590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:15.710645 containerd[1827]: time="2025-01-17T12:25:15.710591547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:25:15.710983 containerd[1827]: time="2025-01-17T12:25:15.710972353Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:15.712119 containerd[1827]: time="2025-01-17T12:25:15.712065367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:15.712379 containerd[1827]: time="2025-01-17T12:25:15.712330923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.322380539s" Jan 17 12:25:15.712379 containerd[1827]: time="2025-01-17T12:25:15.712346122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:25:15.713814 containerd[1827]: time="2025-01-17T12:25:15.713775152Z" level=info msg="CreateContainer within sandbox \"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:25:15.720234 containerd[1827]: time="2025-01-17T12:25:15.719723368Z" level=info msg="CreateContainer within sandbox \"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c94555d91e9763c659cfe72c033610d2bca750ab5f5aac06bd7e4d8826173bd4\"" Jan 17 12:25:15.720620 containerd[1827]: time="2025-01-17T12:25:15.720605283Z" level=info msg="StartContainer for \"c94555d91e9763c659cfe72c033610d2bca750ab5f5aac06bd7e4d8826173bd4\"" Jan 17 12:25:15.721811 kubelet[3088]: I0117 12:25:15.721799 3088 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:25:15.746359 systemd[1]: Started cri-containerd-c94555d91e9763c659cfe72c033610d2bca750ab5f5aac06bd7e4d8826173bd4.scope - libcontainer container c94555d91e9763c659cfe72c033610d2bca750ab5f5aac06bd7e4d8826173bd4. Jan 17 12:25:15.759173 containerd[1827]: time="2025-01-17T12:25:15.759121838Z" level=info msg="StartContainer for \"c94555d91e9763c659cfe72c033610d2bca750ab5f5aac06bd7e4d8826173bd4\" returns successfully" Jan 17 12:25:15.759732 containerd[1827]: time="2025-01-17T12:25:15.759718424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:25:17.061532 containerd[1827]: time="2025-01-17T12:25:17.061504909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:17.061819 containerd[1827]: time="2025-01-17T12:25:17.061710120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:25:17.062120 containerd[1827]: time="2025-01-17T12:25:17.062077830Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 17 12:25:17.063191 containerd[1827]: time="2025-01-17T12:25:17.063147158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:17.063599 containerd[1827]: time="2025-01-17T12:25:17.063557473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.303820188s" Jan 17 12:25:17.063599 containerd[1827]: time="2025-01-17T12:25:17.063574920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:25:17.064467 containerd[1827]: time="2025-01-17T12:25:17.064455478Z" level=info msg="CreateContainer within sandbox \"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:25:17.071312 containerd[1827]: time="2025-01-17T12:25:17.071268535Z" level=info msg="CreateContainer within sandbox \"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9348adf7607ab95df1f7909752748aade05de749d0cbe8a85890cd5a0d08ee60\"" Jan 17 12:25:17.071543 containerd[1827]: time="2025-01-17T12:25:17.071530751Z" level=info msg="StartContainer for \"9348adf7607ab95df1f7909752748aade05de749d0cbe8a85890cd5a0d08ee60\"" Jan 17 12:25:17.093331 systemd[1]: Started 
cri-containerd-9348adf7607ab95df1f7909752748aade05de749d0cbe8a85890cd5a0d08ee60.scope - libcontainer container 9348adf7607ab95df1f7909752748aade05de749d0cbe8a85890cd5a0d08ee60. Jan 17 12:25:17.105799 containerd[1827]: time="2025-01-17T12:25:17.105777907Z" level=info msg="StartContainer for \"9348adf7607ab95df1f7909752748aade05de749d0cbe8a85890cd5a0d08ee60\" returns successfully" Jan 17 12:25:17.621957 kubelet[3088]: I0117 12:25:17.621892 3088 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:25:17.621957 kubelet[3088]: I0117 12:25:17.621970 3088 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:25:17.761282 kubelet[3088]: I0117 12:25:17.761168 3088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rqjfk" podStartSLOduration=24.55641464 podStartE2EDuration="27.76113223s" podCreationTimestamp="2025-01-17 12:24:50 +0000 UTC" firstStartedPulling="2025-01-17 12:25:13.859184814 +0000 UTC m=+38.334565989" lastFinishedPulling="2025-01-17 12:25:17.063902403 +0000 UTC m=+41.539283579" observedRunningTime="2025-01-17 12:25:17.760284306 +0000 UTC m=+42.235665631" watchObservedRunningTime="2025-01-17 12:25:17.76113223 +0000 UTC m=+42.236513456" Jan 17 12:25:29.626956 kubelet[3088]: I0117 12:25:29.626893 3088 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:25:35.573687 containerd[1827]: time="2025-01-17T12:25:35.573448826Z" level=info msg="StopPodSandbox for \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\"" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.638 [WARNING][6103] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9f91e660-ed9d-4c6c-8756-9789d37e6a0c", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c", Pod:"coredns-6f6b679f8f-b6qr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04a8ab8d33b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.638 [INFO][6103] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.638 [INFO][6103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" iface="eth0" netns="" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.638 [INFO][6103] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.638 [INFO][6103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.661 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.661 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.661 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.667 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.667 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.669 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.671888 containerd[1827]: 2025-01-17 12:25:35.670 [INFO][6103] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.672585 containerd[1827]: time="2025-01-17T12:25:35.671924063Z" level=info msg="TearDown network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\" successfully" Jan 17 12:25:35.672585 containerd[1827]: time="2025-01-17T12:25:35.671954400Z" level=info msg="StopPodSandbox for \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\" returns successfully" Jan 17 12:25:35.672585 containerd[1827]: time="2025-01-17T12:25:35.672505428Z" level=info msg="RemovePodSandbox for \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\"" Jan 17 12:25:35.672585 containerd[1827]: time="2025-01-17T12:25:35.672544680Z" level=info msg="Forcibly stopping sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\"" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.708 [WARNING][6150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9f91e660-ed9d-4c6c-8756-9789d37e6a0c", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"bb5fa410c2414c1127bb8645c07946aefe07a2d84194337c7aa732b1e602040c", Pod:"coredns-6f6b679f8f-b6qr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04a8ab8d33b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.708 [INFO][6150] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.708 [INFO][6150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" iface="eth0" netns="" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.708 [INFO][6150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.708 [INFO][6150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.731 [INFO][6167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.731 [INFO][6167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.731 [INFO][6167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.738 [WARNING][6167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.738 [INFO][6167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" HandleID="k8s-pod-network.a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--b6qr5-eth0" Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.739 [INFO][6167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.742535 containerd[1827]: 2025-01-17 12:25:35.741 [INFO][6150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68" Jan 17 12:25:35.743238 containerd[1827]: time="2025-01-17T12:25:35.742575345Z" level=info msg="TearDown network for sandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\" successfully" Jan 17 12:25:35.745628 containerd[1827]: time="2025-01-17T12:25:35.745613407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:25:35.745663 containerd[1827]: time="2025-01-17T12:25:35.745650844Z" level=info msg="RemovePodSandbox \"a72b4f610d31a81f2cea0adcf9e37d2c100f812cb7db0df0dd16fbaa14185a68\" returns successfully" Jan 17 12:25:35.745889 containerd[1827]: time="2025-01-17T12:25:35.745878991Z" level=info msg="StopPodSandbox for \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\"" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.764 [WARNING][6200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"becb30b0-7deb-4ba2-a030-4d72b985fd42", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d", Pod:"calico-apiserver-7bc754cd95-7szqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f6432c2933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.764 [INFO][6200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.764 [INFO][6200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" iface="eth0" netns="" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.764 [INFO][6200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.764 [INFO][6200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.775 [INFO][6214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.775 [INFO][6214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.775 [INFO][6214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.778 [WARNING][6214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.778 [INFO][6214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.779 [INFO][6214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.781137 containerd[1827]: 2025-01-17 12:25:35.780 [INFO][6200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.781137 containerd[1827]: time="2025-01-17T12:25:35.781113352Z" level=info msg="TearDown network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\" successfully" Jan 17 12:25:35.781137 containerd[1827]: time="2025-01-17T12:25:35.781129141Z" level=info msg="StopPodSandbox for \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\" returns successfully" Jan 17 12:25:35.781452 containerd[1827]: time="2025-01-17T12:25:35.781391890Z" level=info msg="RemovePodSandbox for \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\"" Jan 17 12:25:35.781452 containerd[1827]: time="2025-01-17T12:25:35.781407362Z" level=info msg="Forcibly stopping sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\"" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.799 [WARNING][6242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"becb30b0-7deb-4ba2-a030-4d72b985fd42", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"5b4edda77912be03693e59df2f5515f9714a982fc13ac20266031ccc14e8252d", Pod:"calico-apiserver-7bc754cd95-7szqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f6432c2933", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.799 [INFO][6242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.799 [INFO][6242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" iface="eth0" netns="" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.799 [INFO][6242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.799 [INFO][6242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.810 [INFO][6259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.810 [INFO][6259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.810 [INFO][6259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.814 [WARNING][6259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.814 [INFO][6259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" HandleID="k8s-pod-network.a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--7szqj-eth0" Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.815 [INFO][6259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.816774 containerd[1827]: 2025-01-17 12:25:35.816 [INFO][6242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984" Jan 17 12:25:35.817095 containerd[1827]: time="2025-01-17T12:25:35.816772986Z" level=info msg="TearDown network for sandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\" successfully" Jan 17 12:25:35.818162 containerd[1827]: time="2025-01-17T12:25:35.818121020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:25:35.818162 containerd[1827]: time="2025-01-17T12:25:35.818148872Z" level=info msg="RemovePodSandbox \"a82548375167901c57d58fc6e454e0ebc72f23154bb783b918805bcaef740984\" returns successfully" Jan 17 12:25:35.818376 containerd[1827]: time="2025-01-17T12:25:35.818364338Z" level=info msg="StopPodSandbox for \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\"" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.837 [WARNING][6286] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b704db9-0e4b-4e28-94e0-d73625f21ba2", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28", Pod:"csi-node-driver-rqjfk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice849dcd52e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.837 [INFO][6286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.837 [INFO][6286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" iface="eth0" netns="" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.837 [INFO][6286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.837 [INFO][6286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.848 [INFO][6297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.848 [INFO][6297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.848 [INFO][6297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.851 [WARNING][6297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.851 [INFO][6297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.853 [INFO][6297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.854363 containerd[1827]: 2025-01-17 12:25:35.853 [INFO][6286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.854363 containerd[1827]: time="2025-01-17T12:25:35.854318384Z" level=info msg="TearDown network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\" successfully" Jan 17 12:25:35.854363 containerd[1827]: time="2025-01-17T12:25:35.854335397Z" level=info msg="StopPodSandbox for \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\" returns successfully" Jan 17 12:25:35.854751 containerd[1827]: time="2025-01-17T12:25:35.854613981Z" level=info msg="RemovePodSandbox for \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\"" Jan 17 12:25:35.854751 containerd[1827]: time="2025-01-17T12:25:35.854631545Z" level=info msg="Forcibly stopping sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\"" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.874 [WARNING][6324] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b704db9-0e4b-4e28-94e0-d73625f21ba2", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"8cebd41ec1b6a397b8503cce15042a1dead9b8d93b37081384981606cddf4f28", Pod:"csi-node-driver-rqjfk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice849dcd52e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.874 [INFO][6324] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.874 [INFO][6324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" iface="eth0" netns="" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.874 [INFO][6324] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.874 [INFO][6324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.887 [INFO][6335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.887 [INFO][6335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.887 [INFO][6335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.890 [WARNING][6335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.890 [INFO][6335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" HandleID="k8s-pod-network.449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Workload="ci--4081.3.0--a--4c6521d577-k8s-csi--node--driver--rqjfk-eth0" Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.891 [INFO][6335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.893097 containerd[1827]: 2025-01-17 12:25:35.892 [INFO][6324] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f" Jan 17 12:25:35.893386 containerd[1827]: time="2025-01-17T12:25:35.893114258Z" level=info msg="TearDown network for sandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\" successfully" Jan 17 12:25:35.894388 containerd[1827]: time="2025-01-17T12:25:35.894346719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:25:35.894388 containerd[1827]: time="2025-01-17T12:25:35.894373356Z" level=info msg="RemovePodSandbox \"449a82adabcbf8f3637837455b2c5df846dfa852ee65b6a3bf95f9b80612f49f\" returns successfully" Jan 17 12:25:35.894669 containerd[1827]: time="2025-01-17T12:25:35.894619101Z" level=info msg="StopPodSandbox for \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\"" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.912 [WARNING][6364] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb", Pod:"coredns-6f6b679f8f-qbhrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e6902ac5c9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.912 [INFO][6364] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.912 [INFO][6364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" iface="eth0" netns="" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.912 [INFO][6364] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.912 [INFO][6364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.922 [INFO][6377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.922 [INFO][6377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.922 [INFO][6377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.926 [WARNING][6377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.926 [INFO][6377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.927 [INFO][6377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.929102 containerd[1827]: 2025-01-17 12:25:35.928 [INFO][6364] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.929394 containerd[1827]: time="2025-01-17T12:25:35.929124763Z" level=info msg="TearDown network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\" successfully" Jan 17 12:25:35.929394 containerd[1827]: time="2025-01-17T12:25:35.929141542Z" level=info msg="StopPodSandbox for \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\" returns successfully" Jan 17 12:25:35.929427 containerd[1827]: time="2025-01-17T12:25:35.929409720Z" level=info msg="RemovePodSandbox for \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\"" Jan 17 12:25:35.929445 containerd[1827]: time="2025-01-17T12:25:35.929425658Z" level=info msg="Forcibly stopping sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\"" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.947 [WARNING][6403] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d83baa3e-9ec1-4a20-b6b3-e4cd2fa744b7", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"6cfd4e323378f72aefa27dc266dd1654f911b0e48dd0b40ed07b09ae18ada8fb", Pod:"coredns-6f6b679f8f-qbhrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e6902ac5c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.948 [INFO][6403] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.948 [INFO][6403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" iface="eth0" netns="" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.948 [INFO][6403] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.948 [INFO][6403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.959 [INFO][6417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.959 [INFO][6417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.959 [INFO][6417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.962 [WARNING][6417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.962 [INFO][6417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" HandleID="k8s-pod-network.41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Workload="ci--4081.3.0--a--4c6521d577-k8s-coredns--6f6b679f8f--qbhrr-eth0" Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.963 [INFO][6417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:35.965173 containerd[1827]: 2025-01-17 12:25:35.964 [INFO][6403] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730" Jan 17 12:25:35.965485 containerd[1827]: time="2025-01-17T12:25:35.965195723Z" level=info msg="TearDown network for sandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\" successfully" Jan 17 12:25:35.966495 containerd[1827]: time="2025-01-17T12:25:35.966480964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:25:35.966523 containerd[1827]: time="2025-01-17T12:25:35.966512088Z" level=info msg="RemovePodSandbox \"41e8252e30c04872ba0d33dd91c2d7aae088adfe7fda74db0784f25bc6a6a730\" returns successfully" Jan 17 12:25:35.966791 containerd[1827]: time="2025-01-17T12:25:35.966780796Z" level=info msg="StopPodSandbox for \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\"" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.985 [WARNING][6445] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0", GenerateName:"calico-kube-controllers-6dd9787d-", Namespace:"calico-system", SelfLink:"", UID:"fb8a82b8-47f1-4e69-9d26-f80868988f6e", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd9787d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c", Pod:"calico-kube-controllers-6dd9787d-lhszl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali533acc7f5f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.985 [INFO][6445] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.985 [INFO][6445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" iface="eth0" netns="" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.985 [INFO][6445] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.985 [INFO][6445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.996 [INFO][6459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.996 [INFO][6459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:35.996 [INFO][6459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:36.000 [WARNING][6459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:36.000 [INFO][6459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:36.002 [INFO][6459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:36.003535 containerd[1827]: 2025-01-17 12:25:36.002 [INFO][6445] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.003878 containerd[1827]: time="2025-01-17T12:25:36.003561224Z" level=info msg="TearDown network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\" successfully" Jan 17 12:25:36.003878 containerd[1827]: time="2025-01-17T12:25:36.003577810Z" level=info msg="StopPodSandbox for \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\" returns successfully" Jan 17 12:25:36.003878 containerd[1827]: time="2025-01-17T12:25:36.003844966Z" level=info msg="RemovePodSandbox for \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\"" Jan 17 12:25:36.003878 containerd[1827]: time="2025-01-17T12:25:36.003863815Z" level=info msg="Forcibly stopping sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\"" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.029 [WARNING][6489] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0", GenerateName:"calico-kube-controllers-6dd9787d-", Namespace:"calico-system", SelfLink:"", UID:"fb8a82b8-47f1-4e69-9d26-f80868988f6e", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd9787d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"072eb1fbee13c49c444e86754bcac0d17a2a6117764fae5217b3bec38083502c", Pod:"calico-kube-controllers-6dd9787d-lhszl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali533acc7f5f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.030 [INFO][6489] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.030 [INFO][6489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" iface="eth0" netns="" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.030 [INFO][6489] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.030 [INFO][6489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.043 [INFO][6501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.043 [INFO][6501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.043 [INFO][6501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.046 [WARNING][6501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.046 [INFO][6501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" HandleID="k8s-pod-network.ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--kube--controllers--6dd9787d--lhszl-eth0" Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.047 [INFO][6501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:36.048361 containerd[1827]: 2025-01-17 12:25:36.047 [INFO][6489] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae" Jan 17 12:25:36.048667 containerd[1827]: time="2025-01-17T12:25:36.048386265Z" level=info msg="TearDown network for sandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\" successfully" Jan 17 12:25:36.049817 containerd[1827]: time="2025-01-17T12:25:36.049803797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:25:36.049854 containerd[1827]: time="2025-01-17T12:25:36.049828953Z" level=info msg="RemovePodSandbox \"ca5ca01b50324d719a2a196fb9bf06cbf49709297e01765895a358b38c26baae\" returns successfully" Jan 17 12:25:36.050071 containerd[1827]: time="2025-01-17T12:25:36.050060815Z" level=info msg="StopPodSandbox for \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\"" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.083 [WARNING][6531] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb43c142-a013-4f25-834a-935fd8e973a8", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d", Pod:"calico-apiserver-7bc754cd95-fjcl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6496b95da08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.084 [INFO][6531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.084 [INFO][6531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" iface="eth0" netns="" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.084 [INFO][6531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.084 [INFO][6531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.137 [INFO][6544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.137 [INFO][6544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.137 [INFO][6544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.146 [WARNING][6544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.147 [INFO][6544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.148 [INFO][6544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:36.150961 containerd[1827]: 2025-01-17 12:25:36.149 [INFO][6531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.151651 containerd[1827]: time="2025-01-17T12:25:36.150996404Z" level=info msg="TearDown network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\" successfully" Jan 17 12:25:36.151651 containerd[1827]: time="2025-01-17T12:25:36.151033490Z" level=info msg="StopPodSandbox for \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\" returns successfully" Jan 17 12:25:36.151651 containerd[1827]: time="2025-01-17T12:25:36.151417782Z" level=info msg="RemovePodSandbox for \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\"" Jan 17 12:25:36.151651 containerd[1827]: time="2025-01-17T12:25:36.151449557Z" level=info msg="Forcibly stopping sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\"" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.171 [WARNING][6576] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0", GenerateName:"calico-apiserver-7bc754cd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb43c142-a013-4f25-834a-935fd8e973a8", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc754cd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-4c6521d577", ContainerID:"a55dc183ed18ab607bc4712d65b742df4da09a002d882d7441b94c18fdd6e83d", Pod:"calico-apiserver-7bc754cd95-fjcl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6496b95da08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.171 [INFO][6576] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.171 [INFO][6576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" iface="eth0" netns="" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.171 [INFO][6576] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.171 [INFO][6576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.181 [INFO][6587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.181 [INFO][6587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.181 [INFO][6587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.184 [WARNING][6587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.184 [INFO][6587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" HandleID="k8s-pod-network.f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Workload="ci--4081.3.0--a--4c6521d577-k8s-calico--apiserver--7bc754cd95--fjcl8-eth0" Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.185 [INFO][6587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:36.186659 containerd[1827]: 2025-01-17 12:25:36.186 [INFO][6576] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b" Jan 17 12:25:36.186950 containerd[1827]: time="2025-01-17T12:25:36.186684600Z" level=info msg="TearDown network for sandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\" successfully" Jan 17 12:25:36.188012 containerd[1827]: time="2025-01-17T12:25:36.187995093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:25:36.188043 containerd[1827]: time="2025-01-17T12:25:36.188027269Z" level=info msg="RemovePodSandbox \"f83480934699a1b63c435275cbe63f405bacc66364711b48cf1b896938f8953b\" returns successfully" Jan 17 12:26:18.762265 systemd[1]: Started sshd@10-147.75.90.1:22-218.92.0.158:46811.service - OpenSSH per-connection server daemon (218.92.0.158:46811). 
Jan 17 12:26:19.860860 sshd[6680]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Jan 17 12:26:22.160320 sshd[6678]: PAM: Permission denied for root from 218.92.0.158 Jan 17 12:26:22.450090 sshd[6681]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Jan 17 12:26:23.826445 sshd[6678]: PAM: Permission denied for root from 218.92.0.158 Jan 17 12:26:24.117142 sshd[6715]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Jan 17 12:26:26.300451 sshd[6678]: PAM: Permission denied for root from 218.92.0.158 Jan 17 12:26:26.444786 sshd[6678]: Received disconnect from 218.92.0.158 port 46811:11: [preauth] Jan 17 12:26:26.444786 sshd[6678]: Disconnected from authenticating user root 218.92.0.158 port 46811 [preauth] Jan 17 12:26:26.448309 systemd[1]: sshd@10-147.75.90.1:22-218.92.0.158:46811.service: Deactivated successfully. Jan 17 12:27:28.703837 systemd[1]: Started sshd@11-147.75.90.1:22-14.103.115.54:39796.service - OpenSSH per-connection server daemon (14.103.115.54:39796). Jan 17 12:27:46.080133 systemd[1]: Started sshd@12-147.75.90.1:22-92.255.85.188:61168.service - OpenSSH per-connection server daemon (92.255.85.188:61168). Jan 17 12:27:47.957809 sshd[6898]: Connection closed by authenticating user root 92.255.85.188 port 61168 [preauth] Jan 17 12:27:47.961064 systemd[1]: sshd@12-147.75.90.1:22-92.255.85.188:61168.service: Deactivated successfully. 
Jan 17 12:27:52.560703 update_engine[1814]: I20250117 12:27:52.560567 1814 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 12:27:52.560703 update_engine[1814]: I20250117 12:27:52.560671 1814 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 12:27:52.561807 update_engine[1814]: I20250117 12:27:52.561085 1814 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 12:27:52.562142 update_engine[1814]: I20250117 12:27:52.562090 1814 omaha_request_params.cc:62] Current group set to lts Jan 17 12:27:52.562440 update_engine[1814]: I20250117 12:27:52.562343 1814 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 12:27:52.562440 update_engine[1814]: I20250117 12:27:52.562374 1814 update_attempter.cc:643] Scheduling an action processor start. Jan 17 12:27:52.562440 update_engine[1814]: I20250117 12:27:52.562412 1814 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:27:52.562772 update_engine[1814]: I20250117 12:27:52.562482 1814 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 12:27:52.562772 update_engine[1814]: I20250117 12:27:52.562637 1814 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:27:52.562772 update_engine[1814]: I20250117 12:27:52.562667 1814 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Jan 17 12:27:52.562772 update_engine[1814]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Jan 17 12:27:52.562772 update_engine[1814]: <os version="Chateau" platform="CoreOS" sp="4081.3.0_x86_64"></os> Jan 17 12:27:52.562772 update_engine[1814]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.3.0" track="lts" bootid="{8637a344-1497-4abd-bff6-368dd2325ed1}" oem="packet" oemversion="0.2.2-r2" 
alephversion="4081.3.0" machineid="d59702600c28447ab382b3af16d37bc2" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Jan 17 12:27:52.562772 update_engine[1814]: <ping active="1"></ping> Jan 17 12:27:52.562772 update_engine[1814]: <updatecheck></updatecheck> Jan 17 12:27:52.562772 update_engine[1814]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Jan 17 12:27:52.562772 update_engine[1814]: </app> Jan 17 12:27:52.562772 update_engine[1814]: </request> Jan 17 12:27:52.562772 update_engine[1814]: I20250117 12:27:52.562685 1814 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:27:52.563839 locksmithd[1862]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 12:27:52.566311 update_engine[1814]: I20250117 12:27:52.566223 1814 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:27:52.566642 update_engine[1814]: I20250117 12:27:52.566602 1814 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:27:52.567584 update_engine[1814]: E20250117 12:27:52.567538 1814 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:27:52.567584 update_engine[1814]: I20250117 12:27:52.567571 1814 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 12:28:02.552827 update_engine[1814]: I20250117 12:28:02.552659 1814 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:28:02.553937 update_engine[1814]: I20250117 12:28:02.553265 1814 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:28:02.553937 update_engine[1814]: I20250117 12:28:02.553798 1814 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:28:02.554846 update_engine[1814]: E20250117 12:28:02.554726 1814 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:28:02.555062 update_engine[1814]: I20250117 12:28:02.554872 1814 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 12:28:12.553202 update_engine[1814]: I20250117 12:28:12.553033 1814 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:28:12.554201 update_engine[1814]: I20250117 12:28:12.553576 1814 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:28:12.554201 update_engine[1814]: I20250117 12:28:12.554116 1814 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 12:28:12.555061 update_engine[1814]: E20250117 12:28:12.554926 1814 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:28:12.555248 update_engine[1814]: I20250117 12:28:12.555096 1814 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 12:28:21.854238 systemd[1]: Started sshd@13-147.75.90.1:22-52.224.71.115:50138.service - OpenSSH per-connection server daemon (52.224.71.115:50138). Jan 17 12:28:22.307444 sshd[6997]: Invalid user misha from 52.224.71.115 port 50138 Jan 17 12:28:22.383413 sshd[6997]: Received disconnect from 52.224.71.115 port 50138:11: Bye Bye [preauth] Jan 17 12:28:22.383413 sshd[6997]: Disconnected from invalid user misha 52.224.71.115 port 50138 [preauth] Jan 17 12:28:22.386684 systemd[1]: sshd@13-147.75.90.1:22-52.224.71.115:50138.service: Deactivated successfully. Jan 17 12:28:22.551705 update_engine[1814]: I20250117 12:28:22.551546 1814 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:28:22.552571 update_engine[1814]: I20250117 12:28:22.552151 1814 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:28:22.552727 update_engine[1814]: I20250117 12:28:22.552671 1814 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:28:22.553608 update_engine[1814]: E20250117 12:28:22.553500 1814 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:28:22.553826 update_engine[1814]: I20250117 12:28:22.553637 1814 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:28:22.553826 update_engine[1814]: I20250117 12:28:22.553668 1814 omaha_request_action.cc:617] Omaha request response: Jan 17 12:28:22.554047 update_engine[1814]: E20250117 12:28:22.553828 1814 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 12:28:22.554047 update_engine[1814]: I20250117 12:28:22.553877 1814 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 12:28:22.554047 update_engine[1814]: I20250117 12:28:22.553895 1814 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:28:22.554047 update_engine[1814]: I20250117 12:28:22.553910 1814 update_attempter.cc:306] Processing Done. Jan 17 12:28:22.554047 update_engine[1814]: E20250117 12:28:22.553942 1814 update_attempter.cc:619] Update failed. Jan 17 12:28:22.554047 update_engine[1814]: I20250117 12:28:22.553959 1814 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 12:28:22.554047 update_engine[1814]: I20250117 12:28:22.553974 1814 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 12:28:22.554047 update_engine[1814]: I20250117 12:28:22.553990 1814 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 17 12:28:22.554743 update_engine[1814]: I20250117 12:28:22.554201 1814 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 12:28:22.554743 update_engine[1814]: I20250117 12:28:22.554271 1814 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 12:28:22.554743 update_engine[1814]: I20250117 12:28:22.554291 1814 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Jan 17 12:28:22.554743 update_engine[1814]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Jan 17 12:28:22.554743 update_engine[1814]: <os version="Chateau" platform="CoreOS" sp="4081.3.0_x86_64"></os> Jan 17 12:28:22.554743 update_engine[1814]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.3.0" track="lts" bootid="{8637a344-1497-4abd-bff6-368dd2325ed1}" oem="packet" oemversion="0.2.2-r2" alephversion="4081.3.0" machineid="d59702600c28447ab382b3af16d37bc2" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Jan 17 12:28:22.554743 update_engine[1814]: <event eventtype="3" eventresult="0" errorcode="268437456"></event> Jan 17 12:28:22.554743 update_engine[1814]: </app> Jan 17 12:28:22.554743 update_engine[1814]: </request> Jan 17 12:28:22.554743 update_engine[1814]: I20250117 12:28:22.554308 1814 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 12:28:22.554743 update_engine[1814]: I20250117 12:28:22.554702 1814 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 12:28:22.555655 update_engine[1814]: I20250117 12:28:22.555141 1814 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 12:28:22.555756 locksmithd[1862]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 12:28:22.556411 update_engine[1814]: E20250117 12:28:22.555836 1814 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 12:28:22.556411 update_engine[1814]: I20250117 12:28:22.555966 1814 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 12:28:22.556411 update_engine[1814]: I20250117 12:28:22.555997 1814 omaha_request_action.cc:617] Omaha request response: Jan 17 12:28:22.556411 update_engine[1814]: I20250117 12:28:22.556051 1814 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:28:22.556411 update_engine[1814]: I20250117 12:28:22.556066 1814 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 12:28:22.556411 update_engine[1814]: I20250117 12:28:22.556082 1814 update_attempter.cc:306] Processing Done. Jan 17 12:28:22.556411 update_engine[1814]: I20250117 12:28:22.556100 1814 update_attempter.cc:310] Error event sent. Jan 17 12:28:22.556411 update_engine[1814]: I20250117 12:28:22.556126 1814 update_check_scheduler.cc:74] Next update check in 42m25s Jan 17 12:28:22.557125 locksmithd[1862]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 12:28:25.633340 systemd[1]: Started sshd@14-147.75.90.1:22-218.92.0.158:42491.service - OpenSSH per-connection server daemon (218.92.0.158:42491). 
Jan 17 12:28:26.740296 sshd[7032]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Jan 17 12:28:29.475502 sshd[7030]: PAM: Permission denied for root from 218.92.0.158 Jan 17 12:28:29.769303 sshd[7033]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Jan 17 12:28:31.581404 sshd[7030]: PAM: Permission denied for root from 218.92.0.158 Jan 17 12:28:31.872316 sshd[7034]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Jan 17 12:28:34.292215 sshd[7030]: PAM: Permission denied for root from 218.92.0.158 Jan 17 12:28:34.437768 sshd[7030]: Received disconnect from 218.92.0.158 port 42491:11: [preauth] Jan 17 12:28:34.437768 sshd[7030]: Disconnected from authenticating user root 218.92.0.158 port 42491 [preauth] Jan 17 12:28:34.441168 systemd[1]: sshd@14-147.75.90.1:22-218.92.0.158:42491.service: Deactivated successfully. Jan 17 12:29:12.449283 systemd[1]: Started sshd@15-147.75.90.1:22-112.217.207.28:54932.service - OpenSSH per-connection server daemon (112.217.207.28:54932). Jan 17 12:29:13.410288 sshd[7126]: Received disconnect from 112.217.207.28 port 54932:11: Bye Bye [preauth] Jan 17 12:29:13.410288 sshd[7126]: Disconnected from authenticating user root 112.217.207.28 port 54932 [preauth] Jan 17 12:29:13.413543 systemd[1]: sshd@15-147.75.90.1:22-112.217.207.28:54932.service: Deactivated successfully. Jan 17 12:29:28.712940 systemd[1]: sshd@11-147.75.90.1:22-14.103.115.54:39796.service: Deactivated successfully. Jan 17 12:29:55.081996 systemd[1]: Started sshd@16-147.75.90.1:22-143.198.186.212:35284.service - OpenSSH per-connection server daemon (143.198.186.212:35284). 
Jan 17 12:29:55.505197 sshd[7235]: Invalid user nb from 143.198.186.212 port 35284
Jan 17 12:29:55.573143 sshd[7235]: Received disconnect from 143.198.186.212 port 35284:11: Bye Bye [preauth]
Jan 17 12:29:55.573143 sshd[7235]: Disconnected from invalid user nb 143.198.186.212 port 35284 [preauth]
Jan 17 12:29:55.573917 systemd[1]: sshd@16-147.75.90.1:22-143.198.186.212:35284.service: Deactivated successfully.
Jan 17 12:30:16.204310 systemd[1]: Started sshd@17-147.75.90.1:22-185.74.4.17:49943.service - OpenSSH per-connection server daemon (185.74.4.17:49943).
Jan 17 12:30:17.500787 sshd[7282]: Invalid user nextcloud from 185.74.4.17 port 49943
Jan 17 12:30:17.746497 sshd[7282]: Received disconnect from 185.74.4.17 port 49943:11: Bye Bye [preauth]
Jan 17 12:30:17.746497 sshd[7282]: Disconnected from invalid user nextcloud 185.74.4.17 port 49943 [preauth]
Jan 17 12:30:17.749806 systemd[1]: sshd@17-147.75.90.1:22-185.74.4.17:49943.service: Deactivated successfully.
Jan 17 12:30:31.497308 systemd[1]: Started sshd@18-147.75.90.1:22-111.238.174.6:41940.service - OpenSSH per-connection server daemon (111.238.174.6:41940).
Jan 17 12:30:32.123450 sshd[7318]: Invalid user debug from 111.238.174.6 port 41940
Jan 17 12:30:32.240654 sshd[7318]: Received disconnect from 111.238.174.6 port 41940:11: Bye Bye [preauth]
Jan 17 12:30:32.240654 sshd[7318]: Disconnected from invalid user debug 111.238.174.6 port 41940 [preauth]
Jan 17 12:30:32.243929 systemd[1]: sshd@18-147.75.90.1:22-111.238.174.6:41940.service: Deactivated successfully.
Jan 17 12:30:34.148290 systemd[1]: Started sshd@19-147.75.90.1:22-218.92.0.158:60157.service - OpenSSH per-connection server daemon (218.92.0.158:60157).
Jan 17 12:30:35.379086 sshd[7347]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Jan 17 12:30:37.167341 systemd[1]: Started sshd@20-147.75.90.1:22-45.238.232.3:33276.service - OpenSSH per-connection server daemon (45.238.232.3:33276).
Jan 17 12:30:37.488098 sshd[7325]: PAM: Permission denied for root from 218.92.0.158
Jan 17 12:30:37.824442 sshd[7353]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Jan 17 12:30:38.146164 systemd[1]: Started sshd@21-147.75.90.1:22-147.75.109.163:40608.service - OpenSSH per-connection server daemon (147.75.109.163:40608).
Jan 17 12:30:38.205242 sshd[7358]: Accepted publickey for core from 147.75.109.163 port 40608 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:30:38.206755 sshd[7358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:30:38.212126 systemd-logind[1809]: New session 12 of user core.
Jan 17 12:30:38.231404 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:30:38.282982 sshd[7351]: Invalid user vagrant from 45.238.232.3 port 33276
Jan 17 12:30:38.381447 sshd[7358]: pam_unix(sshd:session): session closed for user core
Jan 17 12:30:38.383632 systemd[1]: sshd@21-147.75.90.1:22-147.75.109.163:40608.service: Deactivated successfully.
Jan 17 12:30:38.384926 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:30:38.385937 systemd-logind[1809]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:30:38.386892 systemd-logind[1809]: Removed session 12.
Jan 17 12:30:38.477627 sshd[7351]: Received disconnect from 45.238.232.3 port 33276:11: Bye Bye [preauth]
Jan 17 12:30:38.477627 sshd[7351]: Disconnected from invalid user vagrant 45.238.232.3 port 33276 [preauth]
Jan 17 12:30:38.480847 systemd[1]: sshd@20-147.75.90.1:22-45.238.232.3:33276.service: Deactivated successfully.
Jan 17 12:30:39.877318 sshd[7325]: PAM: Permission denied for root from 218.92.0.158
Jan 17 12:30:40.214896 sshd[7389]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Jan 17 12:30:42.011314 sshd[7325]: PAM: Permission denied for root from 218.92.0.158
Jan 17 12:30:42.179282 sshd[7325]: Received disconnect from 218.92.0.158 port 60157:11:  [preauth]
Jan 17 12:30:42.179282 sshd[7325]: Disconnected from authenticating user root 218.92.0.158 port 60157 [preauth]
Jan 17 12:30:42.182989 systemd[1]: sshd@19-147.75.90.1:22-218.92.0.158:60157.service: Deactivated successfully.
Jan 17 12:30:43.403473 systemd[1]: Started sshd@22-147.75.90.1:22-147.75.109.163:40616.service - OpenSSH per-connection server daemon (147.75.109.163:40616).
Jan 17 12:30:43.437199 sshd[7395]: Accepted publickey for core from 147.75.109.163 port 40616 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:30:43.438061 sshd[7395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:30:43.441083 systemd-logind[1809]: New session 13 of user core.
Jan 17 12:30:43.461283 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:30:43.547748 sshd[7395]: pam_unix(sshd:session): session closed for user core
Jan 17 12:30:43.549402 systemd[1]: sshd@22-147.75.90.1:22-147.75.109.163:40616.service: Deactivated successfully.
Jan 17 12:30:43.550370 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:30:43.551049 systemd-logind[1809]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:30:43.551626 systemd-logind[1809]: Removed session 13.
Jan 17 12:30:48.569849 systemd[1]: Started sshd@23-147.75.90.1:22-147.75.109.163:54732.service - OpenSSH per-connection server daemon (147.75.109.163:54732).
Jan 17 12:30:48.600301 sshd[7421]: Accepted publickey for core from 147.75.109.163 port 54732 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:30:48.601158 sshd[7421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:30:48.604316 systemd-logind[1809]: New session 14 of user core.
Jan 17 12:30:48.617223 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:30:48.703916 sshd[7421]: pam_unix(sshd:session): session closed for user core
Jan 17 12:30:48.724778 systemd[1]: sshd@23-147.75.90.1:22-147.75.109.163:54732.service: Deactivated successfully.
Jan 17 12:30:48.725678 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:30:48.726504 systemd-logind[1809]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:30:48.727291 systemd[1]: Started sshd@24-147.75.90.1:22-147.75.109.163:54740.service - OpenSSH per-connection server daemon (147.75.109.163:54740).
Jan 17 12:30:48.727925 systemd-logind[1809]: Removed session 14.
Jan 17 12:30:48.758405 sshd[7448]: Accepted publickey for core from 147.75.109.163 port 54740 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:30:48.759375 sshd[7448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:30:48.762884 systemd-logind[1809]: New session 15 of user core.
Jan 17 12:30:48.779450 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:30:48.948439 sshd[7448]: pam_unix(sshd:session): session closed for user core
Jan 17 12:30:48.962225 systemd[1]: sshd@24-147.75.90.1:22-147.75.109.163:54740.service: Deactivated successfully.
Jan 17 12:30:48.963775 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:30:48.964930 systemd-logind[1809]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:30:48.966088 systemd[1]: Started sshd@25-147.75.90.1:22-147.75.109.163:54746.service - OpenSSH per-connection server daemon (147.75.109.163:54746).
Jan 17 12:30:48.966836 systemd-logind[1809]: Removed session 15.
Jan 17 12:30:49.005957 sshd[7472]: Accepted publickey for core from 147.75.109.163 port 54746 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:30:49.007211 sshd[7472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:30:49.011443 systemd-logind[1809]: New session 16 of user core.
Jan 17 12:30:49.030440 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:30:49.176608 sshd[7472]: pam_unix(sshd:session): session closed for user core
Jan 17 12:30:49.178336 systemd[1]: sshd@25-147.75.90.1:22-147.75.109.163:54746.service: Deactivated successfully.
Jan 17 12:30:49.179322 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:30:49.180032 systemd-logind[1809]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:30:49.180808 systemd-logind[1809]: Removed session 16.
Jan 17 12:30:54.215703 systemd[1]: Started sshd@26-147.75.90.1:22-147.75.109.163:54752.service - OpenSSH per-connection server daemon (147.75.109.163:54752).
Jan 17 12:30:54.271700 sshd[7527]: Accepted publickey for core from 147.75.109.163 port 54752 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:30:54.275051 sshd[7527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:30:54.286201 systemd-logind[1809]: New session 17 of user core.
Jan 17 12:30:54.311414 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:30:54.404610 sshd[7527]: pam_unix(sshd:session): session closed for user core
Jan 17 12:30:54.406786 systemd[1]: sshd@26-147.75.90.1:22-147.75.109.163:54752.service: Deactivated successfully.
Jan 17 12:30:54.407727 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:30:54.408091 systemd-logind[1809]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:30:54.408612 systemd-logind[1809]: Removed session 17.
Jan 17 12:30:59.443356 systemd[1]: Started sshd@27-147.75.90.1:22-147.75.109.163:33326.service - OpenSSH per-connection server daemon (147.75.109.163:33326).
Jan 17 12:30:59.471471 sshd[7578]: Accepted publickey for core from 147.75.109.163 port 33326 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:30:59.472280 sshd[7578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:30:59.475086 systemd-logind[1809]: New session 18 of user core.
Jan 17 12:30:59.484273 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:30:59.570383 sshd[7578]: pam_unix(sshd:session): session closed for user core
Jan 17 12:30:59.571871 systemd[1]: sshd@27-147.75.90.1:22-147.75.109.163:33326.service: Deactivated successfully.
Jan 17 12:30:59.572812 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:30:59.573555 systemd-logind[1809]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:30:59.574298 systemd-logind[1809]: Removed session 18.
Jan 17 12:31:03.616131 systemd[1]: Started sshd@28-147.75.90.1:22-101.126.71.100:43630.service - OpenSSH per-connection server daemon (101.126.71.100:43630).
Jan 17 12:31:04.611250 systemd[1]: Started sshd@29-147.75.90.1:22-147.75.109.163:33340.service - OpenSSH per-connection server daemon (147.75.109.163:33340).
Jan 17 12:31:04.640621 sshd[7625]: Accepted publickey for core from 147.75.109.163 port 33340 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:04.641868 sshd[7625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:04.646449 systemd-logind[1809]: New session 19 of user core.
Jan 17 12:31:04.663454 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:31:04.758010 sshd[7625]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:04.759781 systemd[1]: sshd@29-147.75.90.1:22-147.75.109.163:33340.service: Deactivated successfully.
Jan 17 12:31:04.760788 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:31:04.761621 systemd-logind[1809]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:31:04.762379 systemd-logind[1809]: Removed session 19.
Jan 17 12:31:09.773649 systemd[1]: Started sshd@30-147.75.90.1:22-147.75.109.163:36716.service - OpenSSH per-connection server daemon (147.75.109.163:36716).
Jan 17 12:31:09.807143 sshd[7651]: Accepted publickey for core from 147.75.109.163 port 36716 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:09.808177 sshd[7651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:09.811341 systemd-logind[1809]: New session 20 of user core.
Jan 17 12:31:09.826195 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:31:09.937463 sshd[7651]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:09.953182 systemd[1]: sshd@30-147.75.90.1:22-147.75.109.163:36716.service: Deactivated successfully.
Jan 17 12:31:09.954374 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:31:09.955438 systemd-logind[1809]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:31:09.956425 systemd[1]: Started sshd@31-147.75.90.1:22-147.75.109.163:36720.service - OpenSSH per-connection server daemon (147.75.109.163:36720).
Jan 17 12:31:09.957125 systemd-logind[1809]: Removed session 20.
Jan 17 12:31:09.995602 sshd[7677]: Accepted publickey for core from 147.75.109.163 port 36720 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:09.996952 sshd[7677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:10.002033 systemd-logind[1809]: New session 21 of user core.
Jan 17 12:31:10.014385 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:31:10.289369 sshd[7677]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:10.317286 systemd[1]: sshd@31-147.75.90.1:22-147.75.109.163:36720.service: Deactivated successfully.
Jan 17 12:31:10.318084 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:31:10.318812 systemd-logind[1809]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:31:10.319466 systemd[1]: Started sshd@32-147.75.90.1:22-147.75.109.163:36726.service - OpenSSH per-connection server daemon (147.75.109.163:36726).
Jan 17 12:31:10.319935 systemd-logind[1809]: Removed session 21.
Jan 17 12:31:10.351800 sshd[7701]: Accepted publickey for core from 147.75.109.163 port 36726 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:10.355283 sshd[7701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:10.366968 systemd-logind[1809]: New session 22 of user core.
Jan 17 12:31:10.395516 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:31:11.708620 sshd[7701]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:11.722726 systemd[1]: sshd@32-147.75.90.1:22-147.75.109.163:36726.service: Deactivated successfully.
Jan 17 12:31:11.723581 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:31:11.724287 systemd-logind[1809]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:31:11.724930 systemd[1]: Started sshd@33-147.75.90.1:22-147.75.109.163:36732.service - OpenSSH per-connection server daemon (147.75.109.163:36732).
Jan 17 12:31:11.725335 systemd-logind[1809]: Removed session 22.
Jan 17 12:31:11.754648 sshd[7735]: Accepted publickey for core from 147.75.109.163 port 36732 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:11.755361 sshd[7735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:11.758080 systemd-logind[1809]: New session 23 of user core.
Jan 17 12:31:11.767350 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:31:11.941024 sshd[7735]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:11.948696 systemd[1]: sshd@33-147.75.90.1:22-147.75.109.163:36732.service: Deactivated successfully.
Jan 17 12:31:11.949469 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:31:11.950098 systemd-logind[1809]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:31:11.950705 systemd[1]: Started sshd@34-147.75.90.1:22-147.75.109.163:36748.service - OpenSSH per-connection server daemon (147.75.109.163:36748).
Jan 17 12:31:11.951148 systemd-logind[1809]: Removed session 23.
Jan 17 12:31:11.980577 sshd[7760]: Accepted publickey for core from 147.75.109.163 port 36748 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:11.981386 sshd[7760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:11.983781 systemd-logind[1809]: New session 24 of user core.
Jan 17 12:31:11.998161 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:31:12.124207 sshd[7760]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:12.126160 systemd[1]: sshd@34-147.75.90.1:22-147.75.109.163:36748.service: Deactivated successfully.
Jan 17 12:31:12.127031 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:31:12.127455 systemd-logind[1809]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:31:12.127901 systemd-logind[1809]: Removed session 24.
Jan 17 12:31:17.171427 systemd[1]: Started sshd@35-147.75.90.1:22-147.75.109.163:36760.service - OpenSSH per-connection server daemon (147.75.109.163:36760).
Jan 17 12:31:17.205133 sshd[7793]: Accepted publickey for core from 147.75.109.163 port 36760 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:17.207475 sshd[7793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:17.217802 systemd-logind[1809]: New session 25 of user core.
Jan 17 12:31:17.230489 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:31:17.325797 sshd[7793]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:17.327410 systemd[1]: sshd@35-147.75.90.1:22-147.75.109.163:36760.service: Deactivated successfully.
Jan 17 12:31:17.328363 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:31:17.328989 systemd-logind[1809]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:31:17.329724 systemd-logind[1809]: Removed session 25.
Jan 17 12:31:22.350178 systemd[1]: Started sshd@36-147.75.90.1:22-147.75.109.163:45306.service - OpenSSH per-connection server daemon (147.75.109.163:45306).
Jan 17 12:31:22.378467 sshd[7824]: Accepted publickey for core from 147.75.109.163 port 45306 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:22.379347 sshd[7824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:22.382721 systemd-logind[1809]: New session 26 of user core.
Jan 17 12:31:22.390270 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 12:31:22.474183 sshd[7824]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:22.475858 systemd[1]: sshd@36-147.75.90.1:22-147.75.109.163:45306.service: Deactivated successfully.
Jan 17 12:31:22.476808 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 12:31:22.477545 systemd-logind[1809]: Session 26 logged out. Waiting for processes to exit.
Jan 17 12:31:22.478044 systemd-logind[1809]: Removed session 26.
Jan 17 12:31:24.594304 systemd[1]: Started sshd@37-147.75.90.1:22-112.217.207.28:43502.service - OpenSSH per-connection server daemon (112.217.207.28:43502).
Jan 17 12:31:25.441241 sshd[7888]: Invalid user mohsen from 112.217.207.28 port 43502
Jan 17 12:31:25.593117 sshd[7888]: Received disconnect from 112.217.207.28 port 43502:11: Bye Bye [preauth]
Jan 17 12:31:25.593117 sshd[7888]: Disconnected from invalid user mohsen 112.217.207.28 port 43502 [preauth]
Jan 17 12:31:25.596417 systemd[1]: sshd@37-147.75.90.1:22-112.217.207.28:43502.service: Deactivated successfully.
Jan 17 12:31:27.495939 systemd[1]: Started sshd@38-147.75.90.1:22-147.75.109.163:49366.service - OpenSSH per-connection server daemon (147.75.109.163:49366).
Jan 17 12:31:27.526272 sshd[7893]: Accepted publickey for core from 147.75.109.163 port 49366 ssh2: RSA SHA256:wSKxRpuVhY0g5gcbcieF8y+j08h6yVl/QyXzGjSISA8
Jan 17 12:31:27.527157 sshd[7893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:31:27.530559 systemd-logind[1809]: New session 27 of user core.
Jan 17 12:31:27.539292 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 12:31:27.617400 sshd[7893]: pam_unix(sshd:session): session closed for user core
Jan 17 12:31:27.618941 systemd[1]: sshd@38-147.75.90.1:22-147.75.109.163:49366.service: Deactivated successfully.
Jan 17 12:31:27.619862 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 12:31:27.620586 systemd-logind[1809]: Session 27 logged out. Waiting for processes to exit.
Jan 17 12:31:27.621129 systemd-logind[1809]: Removed session 27.