Apr 30 04:40:53.024807 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 04:40:53.024822 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 04:40:53.024829 kernel: BIOS-provided physical RAM map: Apr 30 04:40:53.024833 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Apr 30 04:40:53.024837 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Apr 30 04:40:53.024841 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Apr 30 04:40:53.024846 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Apr 30 04:40:53.024850 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Apr 30 04:40:53.024854 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable Apr 30 04:40:53.024858 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS Apr 30 04:40:53.024862 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved Apr 30 04:40:53.024867 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable Apr 30 04:40:53.024871 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Apr 30 04:40:53.024875 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Apr 30 04:40:53.024880 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Apr 30 04:40:53.024885 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Apr 30 04:40:53.024890 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Apr 30 04:40:53.024895 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Apr 30 04:40:53.024899 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Apr 30 04:40:53.024904 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Apr 30 04:40:53.024908 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Apr 30 04:40:53.024913 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Apr 30 04:40:53.024917 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Apr 30 04:40:53.024922 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Apr 30 04:40:53.024926 kernel: NX (Execute Disable) protection: active Apr 30 04:40:53.024931 kernel: APIC: Static calls initialized Apr 30 04:40:53.024936 kernel: SMBIOS 3.2.1 present. 
Apr 30 04:40:53.024940 kernel: DMI: Supermicro X11SCM-F/X11SCM-F, BIOS 1.9 09/16/2022 Apr 30 04:40:53.024946 kernel: tsc: Detected 3400.000 MHz processor Apr 30 04:40:53.024950 kernel: tsc: Detected 3399.906 MHz TSC Apr 30 04:40:53.024955 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 04:40:53.024960 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 04:40:53.024965 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Apr 30 04:40:53.024970 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Apr 30 04:40:53.024974 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 04:40:53.024979 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Apr 30 04:40:53.024984 kernel: Using GB pages for direct mapping Apr 30 04:40:53.024989 kernel: ACPI: Early table checksum verification disabled Apr 30 04:40:53.024994 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Apr 30 04:40:53.024999 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Apr 30 04:40:53.025005 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Apr 30 04:40:53.025010 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Apr 30 04:40:53.025015 kernel: ACPI: FACS 0x000000008C66CF80 000040 Apr 30 04:40:53.025020 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Apr 30 04:40:53.025026 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Apr 30 04:40:53.025031 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Apr 30 04:40:53.025036 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Apr 30 04:40:53.025041 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Apr 30 04:40:53.025046 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Apr 30 04:40:53.025051 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Apr 30 04:40:53.025056 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Apr 30 04:40:53.025062 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 04:40:53.025067 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Apr 30 04:40:53.025072 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Apr 30 04:40:53.025077 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 04:40:53.025081 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 04:40:53.025086 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Apr 30 04:40:53.025091 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Apr 30 04:40:53.025097 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 04:40:53.025102 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Apr 30 04:40:53.025108 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Apr 30 04:40:53.025112 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Apr 30 04:40:53.025117 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Apr 30 04:40:53.025122 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Apr 30 04:40:53.025127 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Apr 30 04:40:53.025132 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Apr 30 04:40:53.025137 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Apr 30 04:40:53.025142 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Apr 30 04:40:53.025148 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Apr 30 04:40:53.025153 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Apr 30 04:40:53.025158 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Apr 30 04:40:53.025163 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Apr 30 04:40:53.025168 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Apr 30 04:40:53.025173 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Apr 30 04:40:53.025178 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Apr 30 04:40:53.025183 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Apr 30 04:40:53.025188 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Apr 30 04:40:53.025194 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Apr 30 04:40:53.025199 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Apr 30 04:40:53.025203 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Apr 30 04:40:53.025208 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Apr 30 04:40:53.025213 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Apr 30 04:40:53.025218 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Apr 30 04:40:53.025223 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Apr 30 04:40:53.025228 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Apr 30 04:40:53.025233 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Apr 30 04:40:53.025239 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Apr 30 04:40:53.025244 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Apr 30 04:40:53.025249 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Apr 30 04:40:53.025254 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Apr 30 04:40:53.025262 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Apr 30 04:40:53.025267 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Apr 30 04:40:53.025290 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Apr 30 04:40:53.025296 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Apr 30 04:40:53.025301 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Apr 30 04:40:53.025321 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Apr 30 04:40:53.025326 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Apr 30 04:40:53.025331 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Apr 30 04:40:53.025336 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Apr 30 04:40:53.025341 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Apr 30 04:40:53.025346 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Apr 30 04:40:53.025351 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Apr 30 04:40:53.025356 kernel: No NUMA configuration found Apr 30 04:40:53.025361 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Apr 30 04:40:53.025366 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Apr 30 04:40:53.025372 kernel: Zone ranges: Apr 30 04:40:53.025377 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 04:40:53.025382 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 
04:40:53.025387 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Apr 30 04:40:53.025392 kernel: Movable zone start for each node Apr 30 04:40:53.025397 kernel: Early memory node ranges Apr 30 04:40:53.025401 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Apr 30 04:40:53.025406 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Apr 30 04:40:53.025411 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Apr 30 04:40:53.025417 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff] Apr 30 04:40:53.025422 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Apr 30 04:40:53.025427 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Apr 30 04:40:53.025432 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Apr 30 04:40:53.025440 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Apr 30 04:40:53.025447 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 04:40:53.025452 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Apr 30 04:40:53.025457 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 30 04:40:53.025463 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Apr 30 04:40:53.025469 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Apr 30 04:40:53.025474 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Apr 30 04:40:53.025479 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Apr 30 04:40:53.025485 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Apr 30 04:40:53.025490 kernel: ACPI: PM-Timer IO Port: 0x1808 Apr 30 04:40:53.025495 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Apr 30 04:40:53.025501 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Apr 30 04:40:53.025506 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Apr 30 04:40:53.025512 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Apr 30 04:40:53.025517 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Apr 30 04:40:53.025523 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Apr 30 04:40:53.025528 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Apr 30 04:40:53.025533 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Apr 30 04:40:53.025538 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Apr 30 04:40:53.025544 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Apr 30 04:40:53.025549 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Apr 30 04:40:53.025554 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Apr 30 04:40:53.025560 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Apr 30 04:40:53.025566 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Apr 30 04:40:53.025571 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Apr 30 04:40:53.025577 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Apr 30 04:40:53.025582 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Apr 30 04:40:53.025587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 04:40:53.025592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 04:40:53.025598 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 04:40:53.025603 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 04:40:53.025609 kernel: TSC deadline timer available Apr 30 04:40:53.025615 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Apr 30 04:40:53.025620 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Apr 30 04:40:53.025625 kernel: Booting paravirtualized kernel on bare hardware Apr 30 04:40:53.025631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 04:40:53.025636 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Apr 30 04:40:53.025642 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Apr 30 04:40:53.025647 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Apr 30 04:40:53.025652 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Apr 30 04:40:53.025659 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 04:40:53.025664 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 04:40:53.025670 kernel: random: crng init done Apr 30 04:40:53.025675 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Apr 30 04:40:53.025680 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Apr 30 04:40:53.025686 kernel: Fallback order for Node 0: 0 Apr 30 04:40:53.025691 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Apr 30 04:40:53.025696 kernel: Policy zone: Normal Apr 30 04:40:53.025702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 04:40:53.025708 kernel: software IO TLB: area num 16. Apr 30 04:40:53.025713 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 732416K reserved, 0K cma-reserved) Apr 30 04:40:53.025719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Apr 30 04:40:53.025724 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 04:40:53.025730 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 04:40:53.025735 kernel: Dynamic Preempt: voluntary Apr 30 04:40:53.025740 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 04:40:53.025746 kernel: rcu: RCU event tracing is enabled. Apr 30 04:40:53.025752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Apr 30 04:40:53.025758 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 04:40:53.025763 kernel: Rude variant of Tasks RCU enabled. Apr 30 04:40:53.025768 kernel: Tracing variant of Tasks RCU enabled. Apr 30 04:40:53.025774 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 04:40:53.025779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Apr 30 04:40:53.025784 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Apr 30 04:40:53.025790 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 04:40:53.025795 kernel: Console: colour dummy device 80x25 Apr 30 04:40:53.025800 kernel: printk: console [tty0] enabled Apr 30 04:40:53.025806 kernel: printk: console [ttyS1] enabled Apr 30 04:40:53.025812 kernel: ACPI: Core revision 20230628 Apr 30 04:40:53.025817 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Apr 30 04:40:53.025822 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 04:40:53.025828 kernel: DMAR: Host address width 39 Apr 30 04:40:53.025833 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Apr 30 04:40:53.025838 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Apr 30 04:40:53.025844 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Apr 30 04:40:53.025849 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Apr 30 04:40:53.025855 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Apr 30 04:40:53.025861 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Apr 30 04:40:53.025866 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Apr 30 04:40:53.025871 kernel: x2apic enabled Apr 30 04:40:53.025877 kernel: APIC: Switched APIC routing to: cluster x2apic Apr 30 04:40:53.025882 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Apr 30 04:40:53.025888 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Apr 30 04:40:53.025893 kernel: CPU0: Thermal monitoring enabled (TM1) Apr 30 04:40:53.025898 kernel: process: using mwait in idle threads Apr 30 04:40:53.025904 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 04:40:53.025910 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 04:40:53.025915 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 04:40:53.025920 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 30 04:40:53.025926 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 30 04:40:53.025931 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Apr 30 04:40:53.025936 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 04:40:53.025941 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Apr 30 04:40:53.025946 kernel: RETBleed: Mitigation: Enhanced IBRS Apr 30 04:40:53.025952 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 04:40:53.025957 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 04:40:53.025963 kernel: TAA: Mitigation: TSX disabled Apr 30 04:40:53.025968 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Apr 30 04:40:53.025974 kernel: SRBDS: Mitigation: Microcode Apr 30 04:40:53.025979 kernel: GDS: Mitigation: Microcode Apr 30 04:40:53.025984 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 04:40:53.025990 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 04:40:53.025995 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 04:40:53.026000 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 30 04:40:53.026005 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 30 04:40:53.026011 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 04:40:53.026016 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 30 04:40:53.026022 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 30 04:40:53.026027 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Apr 30 04:40:53.026033 kernel: Freeing SMP alternatives memory: 32K Apr 30 04:40:53.026038 kernel: pid_max: default: 32768 minimum: 301 Apr 30 04:40:53.026043 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 04:40:53.026048 kernel: landlock: Up and running. Apr 30 04:40:53.026054 kernel: SELinux: Initializing. Apr 30 04:40:53.026059 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.026064 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.026070 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Apr 30 04:40:53.026075 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 04:40:53.026081 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 04:40:53.026087 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 04:40:53.026092 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Apr 30 04:40:53.026098 kernel: ... version: 4 Apr 30 04:40:53.026103 kernel: ... bit width: 48 Apr 30 04:40:53.026108 kernel: ... generic registers: 4 Apr 30 04:40:53.026113 kernel: ... value mask: 0000ffffffffffff Apr 30 04:40:53.026119 kernel: ... max period: 00007fffffffffff Apr 30 04:40:53.026124 kernel: ... fixed-purpose events: 3 Apr 30 04:40:53.026130 kernel: ... event mask: 000000070000000f Apr 30 04:40:53.026136 kernel: signal: max sigframe size: 2032 Apr 30 04:40:53.026141 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Apr 30 04:40:53.026146 kernel: rcu: Hierarchical SRCU implementation. Apr 30 04:40:53.026152 kernel: rcu: Max phase no-delay instances is 400. Apr 30 04:40:53.026157 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Apr 30 04:40:53.026162 kernel: smp: Bringing up secondary CPUs ... Apr 30 04:40:53.026168 kernel: smpboot: x86: Booting SMP configuration: Apr 30 04:40:53.026173 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Apr 30 04:40:53.026180 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 30 04:40:53.026185 kernel: smp: Brought up 1 node, 16 CPUs Apr 30 04:40:53.026190 kernel: smpboot: Max logical packages: 1 Apr 30 04:40:53.026196 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Apr 30 04:40:53.026201 kernel: devtmpfs: initialized Apr 30 04:40:53.026206 kernel: x86/mm: Memory block size: 128MB Apr 30 04:40:53.026212 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Apr 30 04:40:53.026217 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Apr 30 04:40:53.026222 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 04:40:53.026229 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 30 04:40:53.026234 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 04:40:53.026239 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 04:40:53.026245 kernel: audit: initializing netlink subsys (disabled) Apr 30 04:40:53.026250 kernel: audit: type=2000 audit(1745988047.039:1): state=initialized audit_enabled=0 res=1 Apr 30 04:40:53.026257 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 04:40:53.026262 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 04:40:53.026268 kernel: cpuidle: using governor menu Apr 30 04:40:53.026293 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 04:40:53.026299 kernel: dca service started, version 1.12.1 Apr 30 04:40:53.026319 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Apr 30 04:40:53.026324 kernel: PCI: Using configuration type 1 for base access Apr 30 04:40:53.026330 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Apr 30 04:40:53.026335 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 04:40:53.026340 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 04:40:53.026346 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 04:40:53.026351 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 04:40:53.026357 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 04:40:53.026362 kernel: ACPI: Added _OSI(Module Device) Apr 30 04:40:53.026368 kernel: ACPI: Added _OSI(Processor Device) Apr 30 04:40:53.026373 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 04:40:53.026378 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 04:40:53.026384 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Apr 30 04:40:53.026389 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026394 kernel: ACPI: SSDT 0xFFFF957280E5AC00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Apr 30 04:40:53.026400 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026405 kernel: ACPI: SSDT 0xFFFF957281E28000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Apr 30 04:40:53.026411 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026417 kernel: ACPI: SSDT 0xFFFF957280E05200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Apr 30 04:40:53.026422 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026427 kernel: ACPI: SSDT 0xFFFF957281E2B800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Apr 30 04:40:53.026432 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026438 kernel: ACPI: SSDT 0xFFFF957280E70000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Apr 30 04:40:53.026443 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026448 kernel: ACPI: SSDT 0xFFFF957280E5D800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Apr 30 04:40:53.026454 kernel: ACPI: _OSC evaluated successfully for all CPUs Apr 30 04:40:53.026460 kernel: ACPI: Interpreter enabled Apr 30 04:40:53.026465 kernel: ACPI: PM: (supports S0 S5) Apr 30 04:40:53.026470 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 04:40:53.026476 kernel: HEST: Enabling Firmware First mode for corrected errors. Apr 30 04:40:53.026481 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Apr 30 04:40:53.026486 kernel: HEST: Table parsing has been initialized. Apr 30 04:40:53.026491 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Apr 30 04:40:53.026497 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 04:40:53.026502 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 04:40:53.026508 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Apr 30 04:40:53.026514 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Apr 30 04:40:53.026519 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Apr 30 04:40:53.026525 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Apr 30 04:40:53.026530 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Apr 30 04:40:53.026535 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Apr 30 04:40:53.026541 kernel: ACPI: \_TZ_.FN00: New power resource Apr 30 04:40:53.026546 kernel: ACPI: \_TZ_.FN01: New power resource Apr 30 04:40:53.026551 kernel: ACPI: \_TZ_.FN02: New power resource Apr 30 04:40:53.026557 kernel: ACPI: \_TZ_.FN03: New power resource Apr 30 04:40:53.026563 kernel: ACPI: \_TZ_.FN04: New power resource Apr 30 04:40:53.026568 kernel: ACPI: \PIN_: New power resource Apr 30 04:40:53.026573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Apr 30 04:40:53.026646 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 04:40:53.026700 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Apr 30 04:40:53.026747 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Apr 30 04:40:53.026755 kernel: PCI host bridge to bus 0000:00 Apr 30 04:40:53.026806 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 04:40:53.026850 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 04:40:53.026892 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 04:40:53.026934 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Apr 30 04:40:53.026975 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Apr 30 04:40:53.027017 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Apr 30 04:40:53.027077 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Apr 30 04:40:53.027133 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Apr 30 04:40:53.027182 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.027235 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Apr 30 04:40:53.027306 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Apr 30 04:40:53.027375 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Apr 30 04:40:53.027426 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Apr 30 04:40:53.027479 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Apr 30 04:40:53.027526 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Apr 30 04:40:53.027574 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Apr 30 04:40:53.027624 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Apr 30 04:40:53.027673 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Apr 30 04:40:53.027720 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Apr 30 04:40:53.027774 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Apr 30 04:40:53.027822 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 04:40:53.027877 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Apr 30 04:40:53.027925 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 04:40:53.027975 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Apr 30 04:40:53.028023 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Apr 30 04:40:53.028072 kernel: pci 0000:00:16.0: PME# supported from D3hot Apr 30 04:40:53.028123 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Apr 30 04:40:53.028179 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Apr 30 04:40:53.028229 kernel: pci 0000:00:16.1: PME# supported from D3hot Apr 30 04:40:53.028316 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Apr 30 04:40:53.028365 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Apr 30 04:40:53.028414 kernel: pci 0000:00:16.4: PME# supported from D3hot Apr 30 04:40:53.028465 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Apr 30 04:40:53.028513 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Apr 30 04:40:53.028560 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Apr 30 04:40:53.028606 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Apr 30 04:40:53.028654 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Apr 30 04:40:53.028701 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Apr 30 04:40:53.028748 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Apr 30 04:40:53.028798 kernel: pci 0000:00:17.0: PME# supported from D3hot Apr 30 04:40:53.028853 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Apr 30 04:40:53.028902 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.028957 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Apr 30 04:40:53.029009 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029062 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Apr 30 04:40:53.029112 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029163 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Apr 30 04:40:53.029212 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029287 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Apr 30 04:40:53.029359 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029410 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Apr 30 04:40:53.029458 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 04:40:53.029510 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Apr 30 04:40:53.029564 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Apr 30 04:40:53.029616 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Apr 30 04:40:53.029663 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Apr 30 04:40:53.029717 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Apr 30 04:40:53.029766 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Apr 30 04:40:53.029821 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Apr 30 04:40:53.029871 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Apr 30 04:40:53.029920 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Apr 30 04:40:53.029971 kernel: pci 0000:01:00.0: PME# supported from D3cold Apr 30 04:40:53.030021 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 04:40:53.030070 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Apr 30 04:40:53.030124 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Apr 30 04:40:53.030173 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Apr 30 04:40:53.030222 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Apr 30 04:40:53.030294 kernel: pci 0000:01:00.1: PME# supported from D3cold Apr 30 04:40:53.030360 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 04:40:53.030409 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 04:40:53.030458 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 04:40:53.030507 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 04:40:53.030555 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 04:40:53.030603 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 04:40:53.030657 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Apr 30 04:40:53.030709 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Apr 30 04:40:53.030758 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 30 04:40:53.030806 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Apr 30 04:40:53.030855 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 30 04:40:53.030903 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.030953 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 04:40:53.031001 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 04:40:53.031049 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 04:40:53.031106 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 30 04:40:53.031155 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 30 04:40:53.031205 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 30 04:40:53.031253 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Apr 30 04:40:53.031342 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 30 04:40:53.031391 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.031440 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 04:40:53.031490 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 04:40:53.031539 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 04:40:53.031588 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 04:40:53.031644 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Apr 30 04:40:53.031695 kernel: pci 0000:06:00.0: enabling Extended Tags Apr 30 04:40:53.031744 kernel: pci 0000:06:00.0: supports D1 D2 Apr 30 04:40:53.031794 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 04:40:53.031844 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 30 04:40:53.031893 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.031940 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.031992 kernel: pci_bus 0000:07: extended config space not accessible Apr 30 04:40:53.032048 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Apr 30 04:40:53.032100 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Apr 30 04:40:53.032151 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 30 04:40:53.032201 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Apr 30 04:40:53.032257 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 04:40:53.032353 kernel: pci 0000:07:00.0: supports D1 D2 Apr 30 04:40:53.032404 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 04:40:53.032455 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 30 04:40:53.032503 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.032553 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.032561 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 30 04:40:53.032569 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 30 04:40:53.032575 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 30 04:40:53.032580 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 30 04:40:53.032586 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 30 04:40:53.032592 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 30 04:40:53.032598 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 30 04:40:53.032603 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 30 04:40:53.032609 kernel: iommu: Default domain type: Translated Apr 30 04:40:53.032614 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 04:40:53.032621 kernel: PCI: Using ACPI for IRQ routing Apr 30 04:40:53.032627 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 04:40:53.032632 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 30 04:40:53.032639 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Apr 30 04:40:53.032644 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Apr 30 04:40:53.032650 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Apr 30 04:40:53.032655 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 30 04:40:53.032661 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 30 04:40:53.032709 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Apr 30 04:40:53.032763 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Apr 30 04:40:53.032814 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 04:40:53.032823 kernel: vgaarb: loaded Apr 30 04:40:53.032829 kernel: clocksource: Switched to clocksource tsc-early Apr 30 04:40:53.032834 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 04:40:53.032840 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 04:40:53.032846 kernel: pnp: PnP ACPI init Apr 30 04:40:53.032894 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 30 04:40:53.032945 kernel: pnp 00:02: [dma 0 disabled] Apr 30 04:40:53.032992 kernel: pnp 00:03: [dma 0 disabled] Apr 30 04:40:53.033042 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 30 04:40:53.033086 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 30 04:40:53.033133 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 30 04:40:53.033179 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 30 04:40:53.033226 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Apr 30 04:40:53.033271 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 30 04:40:53.033361 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Apr 30 04:40:53.033406 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 30 04:40:53.033451 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Apr 30 04:40:53.033493 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 30 04:40:53.033538 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 30 04:40:53.033588 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 30 04:40:53.033632 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 30 04:40:53.033676 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 30 04:40:53.033718 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 30 04:40:53.033762 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 30 04:40:53.033804 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 30 04:40:53.033848 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 30 04:40:53.033896 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 30 04:40:53.033905 kernel: pnp: PnP ACPI: found 10 devices Apr 30 04:40:53.033911 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 04:40:53.033917 kernel: NET: Registered PF_INET protocol family Apr 30 04:40:53.033922 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 04:40:53.033928 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 30 04:40:53.033934 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 04:40:53.033940 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 04:40:53.033947 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 04:40:53.033953 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 30 04:40:53.033958 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.033964 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.033970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 04:40:53.033975 kernel: NET: Registered PF_XDP protocol family Apr 30 04:40:53.034024 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 30 04:40:53.034071 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 30 04:40:53.034120 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 30 04:40:53.034172 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034221 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034297 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034365 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034414 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 04:40:53.034462 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 04:40:53.034510 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 04:40:53.034557 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 04:40:53.034608 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 04:40:53.034655 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 04:40:53.034703 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 04:40:53.034750 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 04:40:53.034801 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 
04:40:53.034849 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 04:40:53.034897 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 04:40:53.034945 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 30 04:40:53.034995 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.035044 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035092 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 30 04:40:53.035141 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.035188 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035235 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 30 04:40:53.035324 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 04:40:53.035367 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 04:40:53.035409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 04:40:53.035452 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 30 04:40:53.035493 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 30 04:40:53.035543 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Apr 30 04:40:53.035588 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 04:40:53.035641 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Apr 30 04:40:53.035684 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Apr 30 04:40:53.035733 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 30 04:40:53.035777 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Apr 30 04:40:53.035825 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Apr 30 04:40:53.035868 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035917 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 30 04:40:53.035964 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035972 kernel: PCI: CLS 64 bytes, default 64 Apr 30 04:40:53.035978 kernel: DMAR: No ATSR found Apr 30 04:40:53.035984 kernel: DMAR: No SATC found Apr 30 04:40:53.035989 kernel: DMAR: dmar0: Using Queued invalidation Apr 30 04:40:53.036037 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 30 04:40:53.036086 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 30 04:40:53.036136 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 30 04:40:53.036186 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 30 04:40:53.036234 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 30 04:40:53.036328 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 30 04:40:53.036375 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 30 04:40:53.036423 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 30 04:40:53.036470 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 30 04:40:53.036518 kernel: pci 0000:00:16.1: Adding to iommu group 6 Apr 30 04:40:53.036566 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 30 04:40:53.036614 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 30 04:40:53.036661 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 30 04:40:53.036710 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 30 04:40:53.036759 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 30 04:40:53.036806 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 30 04:40:53.036855 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Apr 30 
04:40:53.036901 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 30 04:40:53.036950 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 30 04:40:53.037000 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 30 04:40:53.037047 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 30 04:40:53.037096 kernel: pci 0000:01:00.0: Adding to iommu group 1 Apr 30 04:40:53.037146 kernel: pci 0000:01:00.1: Adding to iommu group 1 Apr 30 04:40:53.037194 kernel: pci 0000:03:00.0: Adding to iommu group 15 Apr 30 04:40:53.037244 kernel: pci 0000:04:00.0: Adding to iommu group 16 Apr 30 04:40:53.037343 kernel: pci 0000:06:00.0: Adding to iommu group 17 Apr 30 04:40:53.037394 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 30 04:40:53.037404 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 30 04:40:53.037410 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 04:40:53.037416 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Apr 30 04:40:53.037422 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 30 04:40:53.037428 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 30 04:40:53.037434 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 30 04:40:53.037439 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Apr 30 04:40:53.037490 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 30 04:40:53.037501 kernel: Initialise system trusted keyrings Apr 30 04:40:53.037506 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 30 04:40:53.037512 kernel: Key type asymmetric registered Apr 30 04:40:53.037518 kernel: Asymmetric key parser 'x509' registered Apr 30 04:40:53.037523 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 04:40:53.037529 kernel: io scheduler mq-deadline registered Apr 30 04:40:53.037535 kernel: io scheduler kyber registered Apr 30 04:40:53.037540 kernel: io scheduler bfq registered Apr 30 04:40:53.037586 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 30 04:40:53.037637 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Apr 30 04:40:53.037684 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Apr 30 04:40:53.037732 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Apr 30 04:40:53.037779 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Apr 30 04:40:53.037827 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Apr 30 04:40:53.037883 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 30 04:40:53.037892 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Apr 30 04:40:53.037900 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Apr 30 04:40:53.037905 kernel: pstore: Using crash dump compression: deflate Apr 30 04:40:53.037911 kernel: pstore: Registered erst as persistent store backend Apr 30 04:40:53.037917 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 04:40:53.037923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 04:40:53.037929 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 04:40:53.037934 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 04:40:53.037940 kernel: hpet_acpi_add: no address or irqs in _CRS Apr 30 04:40:53.037988 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 30 04:40:53.037998 kernel: i8042: PNP: No PS/2 controller found. 
Apr 30 04:40:53.038041 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 30 04:40:53.038086 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 30 04:40:53.038131 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-04-30T04:40:51 UTC (1745988051) Apr 30 04:40:53.038174 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Apr 30 04:40:53.038182 kernel: intel_pstate: Intel P-state driver initializing Apr 30 04:40:53.038188 kernel: intel_pstate: Disabling energy efficiency optimization Apr 30 04:40:53.038195 kernel: intel_pstate: HWP enabled Apr 30 04:40:53.038201 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Apr 30 04:40:53.038207 kernel: vesafb: scrolling: redraw Apr 30 04:40:53.038212 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Apr 30 04:40:53.038218 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000021dabbe3, using 768k, total 768k Apr 30 04:40:53.038224 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 04:40:53.038230 kernel: fb0: VESA VGA frame buffer device Apr 30 04:40:53.038235 kernel: NET: Registered PF_INET6 protocol family Apr 30 04:40:53.038241 kernel: Segment Routing with IPv6 Apr 30 04:40:53.038248 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 04:40:53.038254 kernel: NET: Registered PF_PACKET protocol family Apr 30 04:40:53.038262 kernel: Key type dns_resolver registered Apr 30 04:40:53.038267 kernel: microcode: Current revision: 0x000000fc Apr 30 04:40:53.038299 kernel: microcode: Updated early from: 0x000000f4 Apr 30 04:40:53.038304 kernel: microcode: Microcode Update Driver: v2.2. Apr 30 04:40:53.038330 kernel: IPI shorthand broadcast: enabled Apr 30 04:40:53.038336 kernel: sched_clock: Marking stable (2487000672, 1378401079)->(4414490257, -549088506) Apr 30 04:40:53.038341 kernel: registered taskstats version 1 Apr 30 04:40:53.038347 kernel: Loading compiled-in X.509 certificates Apr 30 04:40:53.038354 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 04:40:53.038359 kernel: Key type .fscrypt registered Apr 30 04:40:53.038365 kernel: Key type fscrypt-provisioning registered Apr 30 04:40:53.038371 kernel: ima: Allocated hash algorithm: sha1 Apr 30 04:40:53.038376 kernel: ima: No architecture policies found Apr 30 04:40:53.038382 kernel: clk: Disabling unused clocks Apr 30 04:40:53.038387 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 04:40:53.038393 kernel: Write protecting the kernel read-only data: 36864k Apr 30 04:40:53.038400 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 04:40:53.038405 kernel: Run /init as init process Apr 30 04:40:53.038411 kernel: with arguments: Apr 30 04:40:53.038417 kernel: /init Apr 30 04:40:53.038422 kernel: with environment: Apr 30 04:40:53.038428 kernel: HOME=/ Apr 30 04:40:53.038433 kernel: TERM=linux Apr 30 04:40:53.038439 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 04:40:53.038446 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 04:40:53.038454 systemd[1]: Detected architecture x86-64. Apr 30 04:40:53.038460 systemd[1]: Running in initrd. Apr 30 04:40:53.038466 systemd[1]: No hostname configured, using default hostname. 
Apr 30 04:40:53.038472 systemd[1]: Hostname set to . Apr 30 04:40:53.038477 systemd[1]: Initializing machine ID from random generator. Apr 30 04:40:53.038483 systemd[1]: Queued start job for default target initrd.target. Apr 30 04:40:53.038489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 04:40:53.038496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 04:40:53.038502 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 04:40:53.038508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 04:40:53.038514 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 04:40:53.038520 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 04:40:53.038527 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 04:40:53.038533 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Apr 30 04:40:53.038539 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Apr 30 04:40:53.038545 kernel: clocksource: Switched to clocksource tsc Apr 30 04:40:53.038551 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 04:40:53.038557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 04:40:53.038563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 04:40:53.038569 systemd[1]: Reached target paths.target - Path Units. Apr 30 04:40:53.038575 systemd[1]: Reached target slices.target - Slice Units. Apr 30 04:40:53.038581 systemd[1]: Reached target swap.target - Swaps. Apr 30 04:40:53.038587 systemd[1]: Reached target timers.target - Timer Units. Apr 30 04:40:53.038593 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 04:40:53.038599 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 04:40:53.038605 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 04:40:53.038611 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 04:40:53.038617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 04:40:53.038623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 04:40:53.038629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 04:40:53.038635 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 04:40:53.038641 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 04:40:53.038647 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 04:40:53.038653 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 04:40:53.038659 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 04:40:53.038665 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 04:40:53.038681 systemd-journald[267]: Collecting audit messages is disabled. Apr 30 04:40:53.038696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 30 04:40:53.038703 systemd-journald[267]: Journal started Apr 30 04:40:53.038716 systemd-journald[267]: Runtime Journal (/run/log/journal/271bb88c2fdd4a608d2f541669887d36) is 8.0M, max 639.9M, 631.9M free. Apr 30 04:40:53.052081 systemd-modules-load[269]: Inserted module 'overlay' Apr 30 04:40:53.074267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 04:40:53.102850 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 04:40:53.166454 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 04:40:53.166468 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 04:40:53.166477 kernel: Bridge firewalling registered Apr 30 04:40:53.144988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 04:40:53.163753 systemd-modules-load[269]: Inserted module 'br_netfilter' Apr 30 04:40:53.178583 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 04:40:53.189712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 04:40:53.214952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:40:53.249411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 04:40:53.273485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 04:40:53.277124 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 04:40:53.277825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 04:40:53.283180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 04:40:53.284063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 04:40:53.284796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 04:40:53.285752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 04:40:53.286497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 04:40:53.289628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 04:40:53.302507 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:40:53.304624 systemd-resolved[299]: Positive Trust Anchors: Apr 30 04:40:53.304629 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 04:40:53.304655 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 04:40:53.306391 systemd-resolved[299]: Defaulting to hostname 'linux'. Apr 30 04:40:53.335676 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Apr 30 04:40:53.352827 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 04:40:53.374555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 04:40:53.499899 dracut-cmdline[311]: dracut-dracut-053 Apr 30 04:40:53.508477 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 04:40:53.706288 kernel: SCSI subsystem initialized Apr 30 04:40:53.728260 kernel: Loading iSCSI transport class v2.0-870. Apr 30 04:40:53.751288 kernel: iscsi: registered transport (tcp) Apr 30 04:40:53.782069 kernel: iscsi: registered transport (qla4xxx) Apr 30 04:40:53.782086 kernel: QLogic iSCSI HBA Driver Apr 30 04:40:53.814828 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 04:40:53.827662 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 04:40:53.910183 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 04:40:53.910246 kernel: device-mapper: uevent: version 1.0.3 Apr 30 04:40:53.929818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 04:40:53.988292 kernel: raid6: avx2x4 gen() 53056 MB/s Apr 30 04:40:54.020331 kernel: raid6: avx2x2 gen() 53271 MB/s Apr 30 04:40:54.057042 kernel: raid6: avx2x1 gen() 44020 MB/s Apr 30 04:40:54.057062 kernel: raid6: using algorithm avx2x2 gen() 53271 MB/s Apr 30 04:40:54.105009 kernel: raid6: .... xor() 30344 MB/s, rmw enabled Apr 30 04:40:54.105026 kernel: raid6: using avx2x2 recovery algorithm Apr 30 04:40:54.145261 kernel: xor: automatically using best checksumming function avx Apr 30 04:40:54.258294 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 04:40:54.264449 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 04:40:54.287578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 04:40:54.294325 systemd-udevd[496]: Using default interface naming scheme 'v255'. Apr 30 04:40:54.298345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 04:40:54.335490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 04:40:54.381281 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Apr 30 04:40:54.399218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 04:40:54.410519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 04:40:54.505045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 04:40:54.530272 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 04:40:54.530310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 04:40:54.539219 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 04:40:54.566286 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 04:40:54.566312 kernel: PTP clock support registered Apr 30 04:40:54.579047 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 30 04:40:54.579153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:40:54.585262 kernel: libata version 3.00 loaded. Apr 30 04:40:54.607885 kernel: ACPI: bus type USB registered Apr 30 04:40:54.607904 kernel: usbcore: registered new interface driver usbfs Apr 30 04:40:54.617340 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 04:40:54.647350 kernel: usbcore: registered new interface driver hub Apr 30 04:40:54.647411 kernel: usbcore: registered new device driver usb Apr 30 04:40:54.667629 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 04:40:54.667668 kernel: AES CTR mode by8 optimization enabled Apr 30 04:40:54.677314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 04:40:54.677370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:40:54.695375 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 04:40:54.713787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 04:40:54.741486 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 04:40:55.888825 kernel: ahci 0000:00:17.0: version 3.0 Apr 30 04:40:55.888954 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 04:40:55.889055 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Apr 30 04:40:55.889234 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 30 04:40:55.889327 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Apr 30 04:40:55.889428 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 30 04:40:55.889534 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 04:40:55.889642 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 30 04:40:55.889720 kernel: scsi host0: ahci Apr 30 04:40:55.889785 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 04:40:55.889852 kernel: scsi host1: ahci Apr 30 04:40:55.889914 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 30 04:40:55.889984 kernel: scsi host2: ahci Apr 30 04:40:55.890050 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 30 04:40:55.890113 kernel: scsi host3: ahci Apr 30 04:40:55.890175 kernel: hub 1-0:1.0: USB hub found Apr 30 04:40:55.890254 kernel: scsi host4: ahci Apr 30 04:40:55.890326 kernel: hub 1-0:1.0: 16 ports detected Apr 30 04:40:55.890397 kernel: scsi host5: ahci Apr 30 04:40:55.890482 kernel: hub 2-0:1.0: USB hub found Apr 30 04:40:55.890599 kernel: scsi host6: ahci Apr 30 04:40:55.890670 kernel: hub 2-0:1.0: 10 ports detected Apr 30 04:40:55.890738 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Apr 30 04:40:55.890749 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 30 04:40:55.890757 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Apr 30 04:40:55.890765 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Apr 30 04:40:55.890772 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Apr 30 04:40:55.890779 kernel: igb 0000:03:00.0: added PHC on eth0 Apr 30 04:40:55.890848 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Apr 30 04:40:55.890856 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 04:40:55.890921 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Apr 30 04:40:55.890929 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:b6 Apr 30 04:40:55.890993 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Apr 30 04:40:55.891001 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Apr 30 04:40:55.891063 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Apr 30 04:40:55.891071 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 30 04:40:55.891132 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Apr 30 04:40:55.891196 kernel: igb 0000:04:00.0: added PHC on eth1 Apr 30 04:40:55.891266 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Apr 30 04:40:55.891330 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 04:40:55.891399 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 30 04:40:55.891516 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:b7 Apr 30 04:40:55.891635 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Apr 30 04:40:55.891717 kernel: hub 1-14:1.0: USB hub found Apr 30 04:40:55.891790 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 30 04:40:55.891851 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:55.891859 kernel: hub 1-14:1.0: 4 ports detected Apr 30 04:40:55.891927 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 04:40:55.891935 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 04:40:55.891996 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 04:40:55.892004 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Apr 30 04:40:56.026074 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026085 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 04:40:56.026162 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Apr 30 04:40:56.026175 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026183 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026190 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 30 04:40:56.026305 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026314 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Apr 30 04:40:56.026321 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 04:40:56.026329 kernel: ata1.00: Features: NCQ-prio Apr 30 04:40:56.026336 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 04:40:56.026346 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Apr 30 04:40:56.026416 kernel: ata2.00: Features: NCQ-prio Apr 30 04:40:56.026424 kernel: ata1.00: configured for UDMA/133 Apr 30 04:40:56.026432 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Apr 
30 04:40:56.026496 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Apr 30 04:40:56.504293 kernel: ata2.00: configured for UDMA/133 Apr 30 04:40:56.504311 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 04:40:56.504348 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Apr 30 04:40:56.504518 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Apr 30 04:40:56.504612 kernel: usbcore: registered new interface driver usbhid Apr 30 04:40:56.504621 kernel: usbhid: USB HID core driver Apr 30 04:40:56.504629 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 30 04:40:56.504636 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Apr 30 04:40:56.504735 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.504750 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 04:40:56.504764 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 04:40:56.504869 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Apr 30 04:40:56.504970 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 04:40:56.505057 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Apr 30 04:40:56.505142 kernel: sd 1:0:0:0: [sda] Write Protect is off Apr 30 04:40:56.505229 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Apr 30 04:40:56.505311 kernel: sd 0:0:0:0: [sdb] Write Protect is off Apr 30 04:40:56.505382 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 30 04:40:56.505448 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 04:40:56.505530 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 04:40:56.505606 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Apr 30 04:40:56.505681 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 30 04:40:56.505773 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 30 04:40:56.505784 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 30 04:40:56.505859 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 04:40:56.505871 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Apr 30 04:40:56.505936 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 04:40:56.506013 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Apr 30 04:40:56.506074 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.506083 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 04:40:56.506091 kernel: GPT:9289727 != 937703087 Apr 30 04:40:56.506098 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 04:40:56.506105 kernel: GPT:9289727 != 937703087 Apr 30 04:40:56.506114 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 04:40:56.506121 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:56.506128 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Apr 30 04:40:56.506189 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Apr 30 04:40:55.609082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 30 04:40:56.603800 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (553) Apr 30 04:40:56.603815 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Apr 30 04:40:56.603908 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (561) Apr 30 04:40:55.888824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 04:40:55.888907 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 04:40:55.947393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 04:40:56.012470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:40:56.043491 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 04:40:56.369164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 04:40:56.546259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Apr 30 04:40:56.607501 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Apr 30 04:40:56.611813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Apr 30 04:40:56.615650 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Apr 30 04:40:56.629500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Apr 30 04:40:56.652659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:40:56.696729 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 04:40:56.738382 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.738398 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:56.738459 disk-uuid[719]: Primary Header is updated. Apr 30 04:40:56.738459 disk-uuid[719]: Secondary Entries is updated. Apr 30 04:40:56.738459 disk-uuid[719]: Secondary Header is updated. Apr 30 04:40:56.791342 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.791355 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:56.791363 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.818302 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:57.797175 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:57.816926 disk-uuid[720]: The operation has completed successfully. Apr 30 04:40:57.826391 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:57.853560 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 04:40:57.853624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 04:40:57.892502 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 04:40:57.929382 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 04:40:57.929440 sh[738]: Success Apr 30 04:40:57.962711 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 04:40:57.989441 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 04:40:57.997657 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 04:40:58.055326 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 04:40:58.055347 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:40:58.076106 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 04:40:58.094246 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 04:40:58.111314 kernel: BTRFS info (device dm-0): using free space tree Apr 30 04:40:58.148306 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 04:40:58.149082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 04:40:58.157777 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 04:40:58.165606 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 04:40:58.188695 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 04:40:58.228273 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:40:58.228294 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:40:58.245665 kernel: BTRFS info (device sdb6): using free space tree Apr 30 04:40:58.280197 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 04:40:58.338517 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 04:40:58.338535 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 04:40:58.338543 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:40:58.327625 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 04:40:58.360486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 04:40:58.371041 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 04:40:58.427903 ignition[920]: Ignition 2.19.0 Apr 30 04:40:58.427908 ignition[920]: Stage: fetch-offline Apr 30 04:40:58.430099 unknown[920]: fetched base config from "system" Apr 30 04:40:58.427931 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 30 04:40:58.430103 unknown[920]: fetched user config from "system" Apr 30 04:40:58.427937 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:40:58.430739 systemd-networkd[922]: lo: Link UP Apr 30 04:40:58.427989 ignition[920]: parsed url from cmdline: "" Apr 30 04:40:58.430741 systemd-networkd[922]: lo: Gained carrier Apr 30 04:40:58.427991 ignition[920]: no config URL provided Apr 30 04:40:58.431015 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 04:40:58.427994 ignition[920]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 04:40:58.433036 systemd-networkd[922]: Enumeration completed Apr 30 04:40:58.428016 ignition[920]: parsing config with SHA512: 52390592a6561c3e7e03ff11e67fc20a2533d35e23a56bfdf2da69f22fcf0ab2057fb6db72041664619ba82d0e4b4fb516e2623dbc85564656b13a2cbac60527 Apr 30 04:40:58.433484 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 04:40:58.430328 ignition[920]: fetch-offline: fetch-offline passed Apr 30 04:40:58.433876 systemd-networkd[922]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 04:40:58.430331 ignition[920]: POST message to Packet Timeline Apr 30 04:40:58.459800 systemd[1]: Reached target network.target - Network. Apr 30 04:40:58.430333 ignition[920]: POST Status error: resource requires networking Apr 30 04:40:58.462767 systemd-networkd[922]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 04:40:58.430368 ignition[920]: Ignition finished successfully Apr 30 04:40:58.466487 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 04:40:58.490531 ignition[935]: Ignition 2.19.0 Apr 30 04:40:58.473580 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 04:40:58.677445 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Apr 30 04:40:58.490538 ignition[935]: Stage: kargs Apr 30 04:40:58.490925 systemd-networkd[922]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 04:40:58.490649 ignition[935]: no configs at "/usr/lib/ignition/base.d" Apr 30 04:40:58.669196 systemd-networkd[922]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 04:40:58.490655 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:40:58.491189 ignition[935]: kargs: kargs passed Apr 30 04:40:58.491192 ignition[935]: POST message to Packet Timeline Apr 30 04:40:58.491202 ignition[935]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:40:58.491723 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51940->[::1]:53: read: connection refused Apr 30 04:40:58.691831 ignition[935]: GET https://metadata.packet.net/metadata: attempt #2 Apr 30 04:40:58.692447 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56842->[::1]:53: read: connection refused Apr 30 04:40:58.859300 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 30 04:40:58.860636 systemd-networkd[922]: eno1: Link UP Apr 30 04:40:58.860773 systemd-networkd[922]: eno2: Link UP Apr 30 04:40:58.860900 systemd-networkd[922]: enp1s0f0np0: Link UP Apr 30 04:40:58.861049 systemd-networkd[922]: enp1s0f0np0: Gained carrier Apr 30 04:40:58.876519 systemd-networkd[922]: enp1s0f1np1: Link UP Apr 30 04:40:58.908506 systemd-networkd[922]: enp1s0f0np0: DHCPv4 address 147.75.90.169/31, gateway 147.75.90.168 acquired from 145.40.83.140 Apr 30 04:40:59.093271 ignition[935]: GET https://metadata.packet.net/metadata: attempt #3 Apr 30 04:40:59.094347 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56785->[::1]:53: read: connection refused Apr 30 04:40:59.690048 systemd-networkd[922]: enp1s0f1np1: Gained carrier Apr 30 04:40:59.894780 ignition[935]: GET https://metadata.packet.net/metadata: attempt #4 Apr 30 04:40:59.895988 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39603->[::1]:53: read: connection refused Apr 30 04:41:00.265863 systemd-networkd[922]: enp1s0f0np0: Gained IPv6LL Apr 30 04:41:00.969879 systemd-networkd[922]: enp1s0f1np1: Gained IPv6LL Apr 30 04:41:01.497386 ignition[935]: GET https://metadata.packet.net/metadata: attempt #5 Apr 30 04:41:01.498473 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on 
[::1]:53: read udp [::1]:53698->[::1]:53: read: connection refused Apr 30 04:41:04.700988 ignition[935]: GET https://metadata.packet.net/metadata: attempt #6 Apr 30 04:41:05.699461 ignition[935]: GET result: OK Apr 30 04:41:06.106492 ignition[935]: Ignition finished successfully Apr 30 04:41:06.110750 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 04:41:06.132535 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 04:41:06.142714 ignition[953]: Ignition 2.19.0 Apr 30 04:41:06.142721 ignition[953]: Stage: disks Apr 30 04:41:06.142941 ignition[953]: no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:06.142950 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:06.143650 ignition[953]: disks: disks passed Apr 30 04:41:06.143653 ignition[953]: POST message to Packet Timeline Apr 30 04:41:06.143665 ignition[953]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:07.112546 ignition[953]: GET result: OK Apr 30 04:41:07.449517 ignition[953]: Ignition finished successfully Apr 30 04:41:07.451106 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 04:41:07.468456 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 04:41:07.487597 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 04:41:07.509672 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 04:41:07.521828 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 04:41:07.549642 systemd[1]: Reached target basic.target - Basic System. Apr 30 04:41:07.583546 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 04:41:07.617201 systemd-fsck[970]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 04:41:07.628633 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 04:41:07.651492 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 04:41:07.748849 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 04:41:07.764485 kernel: EXT4-fs (sdb9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 04:41:07.757684 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 04:41:07.785585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 04:41:07.794029 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 04:41:07.815260 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (980) Apr 30 04:41:07.845459 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:41:07.845475 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:41:07.846014 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 04:41:07.908371 kernel: BTRFS info (device sdb6): using free space tree Apr 30 04:41:07.908382 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 04:41:07.908390 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 04:41:07.918536 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Apr 30 04:41:07.919576 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 30 04:41:07.919593 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 04:41:07.985486 coreos-metadata[982]: Apr 30 04:41:07.966 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 04:41:07.939394 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 04:41:08.014370 coreos-metadata[998]: Apr 30 04:41:07.966 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 04:41:07.975555 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 04:41:08.008526 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 04:41:08.054506 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 04:41:08.064350 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory Apr 30 04:41:08.074390 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 04:41:08.084398 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 04:41:08.091867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 04:41:08.126520 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 04:41:08.165469 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:41:08.155924 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 04:41:08.174048 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 04:41:08.189388 ignition[1104]: INFO : Ignition 2.19.0 Apr 30 04:41:08.189388 ignition[1104]: INFO : Stage: mount Apr 30 04:41:08.189388 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:08.189388 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:08.189388 ignition[1104]: INFO : mount: mount passed Apr 30 04:41:08.189388 ignition[1104]: INFO : POST message to Packet Timeline Apr 30 04:41:08.189388 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:08.190358 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 04:41:08.837671 coreos-metadata[998]: Apr 30 04:41:08.837 INFO Fetch successful Apr 30 04:41:08.872145 systemd[1]: flatcar-static-network.service: Deactivated successfully. Apr 30 04:41:08.872215 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Apr 30 04:41:08.934866 coreos-metadata[982]: Apr 30 04:41:08.934 INFO Fetch successful Apr 30 04:41:08.967270 coreos-metadata[982]: Apr 30 04:41:08.967 INFO wrote hostname ci-4081.3.3-a-671b97f93d to /sysroot/etc/hostname Apr 30 04:41:08.968815 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 04:41:09.241203 ignition[1104]: INFO : GET result: OK Apr 30 04:41:09.593027 ignition[1104]: INFO : Ignition finished successfully Apr 30 04:41:09.595770 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 04:41:09.633776 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 04:41:09.650542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 04:41:09.691262 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1129) Apr 30 04:41:09.719568 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:41:09.719585 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:41:09.736307 kernel: BTRFS info (device sdb6): using free space tree Apr 30 04:41:09.772867 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 04:41:09.772883 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 04:41:09.785206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 04:41:09.822082 ignition[1146]: INFO : Ignition 2.19.0 Apr 30 04:41:09.822082 ignition[1146]: INFO : Stage: files Apr 30 04:41:09.836525 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:09.836525 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:09.836525 ignition[1146]: DEBUG : files: compiled without relabeling support, skipping Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 04:41:09.836525 ignition[1146]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 04:41:09.826022 unknown[1146]: wrote ssh authorized keys file for user: core Apr 30 04:41:09.971521 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 04:41:10.245349 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 04:41:10.245349 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 04:41:10.823931 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 04:41:11.053189 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:11.053189 ignition[1146]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: files passed Apr 30 04:41:11.084500 ignition[1146]: INFO : POST message to Packet Timeline Apr 30 04:41:11.084500 ignition[1146]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:12.039659 ignition[1146]: INFO : GET result: OK Apr 30 04:41:12.398080 ignition[1146]: INFO : Ignition finished successfully Apr 30 04:41:12.401302 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 04:41:12.434527 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 04:41:12.434945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 04:41:12.463758 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 04:41:12.463839 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 30 04:41:12.503723 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 04:41:12.503723 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 04:41:12.517636 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 04:41:12.505868 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 04:41:12.542754 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 04:41:12.582429 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 04:41:12.626150 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 04:41:12.626198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 04:41:12.644752 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 04:41:12.655530 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 04:41:12.682566 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 04:41:12.697491 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 04:41:12.749350 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 04:41:12.775514 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 04:41:12.791117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 04:41:12.798551 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 04:41:12.829672 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 04:41:12.848965 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 04:41:12.849385 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 04:41:12.877080 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 04:41:12.898832 systemd[1]: Stopped target basic.target - Basic System. Apr 30 04:41:12.916855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 04:41:12.935833 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 04:41:12.958987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 04:41:12.979875 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 04:41:12.999976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 04:41:13.020905 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 04:41:13.041892 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 04:41:13.061865 systemd[1]: Stopped target swap.target - Swaps. Apr 30 04:41:13.081866 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 04:41:13.082294 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 04:41:13.118748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 04:41:13.128870 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 04:41:13.149746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Apr 30 04:41:13.150193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 04:41:13.173764 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 04:41:13.174160 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 04:41:13.205835 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 04:41:13.206314 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 04:41:13.226086 systemd[1]: Stopped target paths.target - Path Units. Apr 30 04:41:13.244734 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 04:41:13.245171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 04:41:13.266993 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 04:41:13.285985 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 04:41:13.305948 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 04:41:13.306253 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 04:41:13.325910 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 04:41:13.326204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 04:41:13.348948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 04:41:13.470491 ignition[1211]: INFO : Ignition 2.19.0 Apr 30 04:41:13.470491 ignition[1211]: INFO : Stage: umount Apr 30 04:41:13.470491 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:13.470491 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:13.470491 ignition[1211]: INFO : umount: umount passed Apr 30 04:41:13.470491 ignition[1211]: INFO : POST message to Packet Timeline Apr 30 04:41:13.470491 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:13.349366 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 04:41:13.369960 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 04:41:13.370356 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 04:41:13.387936 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 04:41:13.388340 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 04:41:13.423526 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 04:41:13.438392 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 04:41:13.438609 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 04:41:13.470581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 04:41:13.478481 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 04:41:13.478831 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 04:41:13.486078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 04:41:13.486507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 04:41:13.535728 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 04:41:13.536102 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 04:41:13.536151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 04:41:13.545499 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 30 04:41:13.545553 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 04:41:14.268678 ignition[1211]: INFO : GET result: OK Apr 30 04:41:14.703641 ignition[1211]: INFO : Ignition finished successfully Apr 30 04:41:14.706195 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 04:41:14.706474 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 04:41:14.724570 systemd[1]: Stopped target network.target - Network. Apr 30 04:41:14.739513 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 04:41:14.739774 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 04:41:14.757786 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 04:41:14.757948 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 04:41:14.775748 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 04:41:14.775906 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 04:41:14.783918 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 04:41:14.784081 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 04:41:14.811759 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 04:41:14.811927 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 04:41:14.820305 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 04:41:14.829398 systemd-networkd[922]: enp1s0f1np1: DHCPv6 lease lost Apr 30 04:41:14.836494 systemd-networkd[922]: enp1s0f0np0: DHCPv6 lease lost Apr 30 04:41:14.846844 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 04:41:14.865427 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 04:41:14.865698 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 04:41:14.884712 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 04:41:14.885071 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 04:41:14.905078 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 04:41:14.905298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 04:41:14.935450 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 04:41:14.961406 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 04:41:14.961449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 04:41:14.980519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 04:41:14.980606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 04:41:14.998695 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 04:41:14.998857 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 04:41:15.018744 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 04:41:15.018910 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 04:41:15.038875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 04:41:15.060541 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 04:41:15.060908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 04:41:15.089774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 30 04:41:15.089811 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 04:41:15.116371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 04:41:15.116400 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 04:41:15.136486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 04:41:15.136572 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 04:41:15.167446 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 04:41:15.167615 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 04:41:15.205439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 04:41:15.205598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:41:15.252454 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 04:41:15.275415 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 04:41:15.482428 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Apr 30 04:41:15.275562 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 04:41:15.297553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 04:41:15.297678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:41:15.319572 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 04:41:15.319803 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 04:41:15.349113 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 04:41:15.349397 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 04:41:15.362593 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 04:41:15.399701 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 04:41:15.422943 systemd[1]: Switching root. 
Apr 30 04:41:15.576409 systemd-journald[267]: Journal stopped
04:40:53.025387 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Apr 30 04:40:53.025392 kernel: Movable zone start for each node Apr 30 04:40:53.025397 kernel: Early memory node ranges Apr 30 04:40:53.025401 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Apr 30 04:40:53.025406 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Apr 30 04:40:53.025411 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Apr 30 04:40:53.025417 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff] Apr 30 04:40:53.025422 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Apr 30 04:40:53.025427 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Apr 30 04:40:53.025432 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Apr 30 04:40:53.025440 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Apr 30 04:40:53.025447 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 04:40:53.025452 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Apr 30 04:40:53.025457 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 30 04:40:53.025463 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Apr 30 04:40:53.025469 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Apr 30 04:40:53.025474 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Apr 30 04:40:53.025479 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Apr 30 04:40:53.025485 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Apr 30 04:40:53.025490 kernel: ACPI: PM-Timer IO Port: 0x1808 Apr 30 04:40:53.025495 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Apr 30 04:40:53.025501 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Apr 30 04:40:53.025506 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Apr 30 04:40:53.025512 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Apr 30 04:40:53.025517 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Apr 30 04:40:53.025523 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Apr 30 04:40:53.025528 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Apr 30 04:40:53.025533 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Apr 30 04:40:53.025538 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Apr 30 04:40:53.025544 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Apr 30 04:40:53.025549 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Apr 30 04:40:53.025554 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Apr 30 04:40:53.025560 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Apr 30 04:40:53.025566 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Apr 30 04:40:53.025571 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Apr 30 04:40:53.025577 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Apr 30 04:40:53.025582 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Apr 30 04:40:53.025587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 04:40:53.025592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 04:40:53.025598 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 04:40:53.025603 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 04:40:53.025609 kernel: TSC deadline timer available Apr 30 04:40:53.025615 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Apr 30 04:40:53.025620 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Apr 30 04:40:53.025625 kernel: Booting paravirtualized kernel on bare hardware Apr 30 04:40:53.025631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 04:40:53.025636 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Apr 30 04:40:53.025642 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Apr 30 04:40:53.025647 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Apr 30 04:40:53.025652 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Apr 30 04:40:53.025659 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 04:40:53.025664 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 04:40:53.025670 kernel: random: crng init done Apr 30 04:40:53.025675 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Apr 30 04:40:53.025680 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Apr 30 04:40:53.025686 kernel: Fallback order for Node 0: 0 Apr 30 04:40:53.025691 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Apr 30 04:40:53.025696 kernel: Policy zone: Normal Apr 30 04:40:53.025702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 04:40:53.025708 kernel: software IO TLB: area num 16. Apr 30 04:40:53.025713 kernel: Memory: 32720304K/33452980K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 732416K reserved, 0K cma-reserved) Apr 30 04:40:53.025719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Apr 30 04:40:53.025724 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 04:40:53.025730 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 04:40:53.025735 kernel: Dynamic Preempt: voluntary Apr 30 04:40:53.025740 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 04:40:53.025746 kernel: rcu: RCU event tracing is enabled. Apr 30 04:40:53.025752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Apr 30 04:40:53.025758 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 04:40:53.025763 kernel: Rude variant of Tasks RCU enabled. Apr 30 04:40:53.025768 kernel: Tracing variant of Tasks RCU enabled. Apr 30 04:40:53.025774 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 04:40:53.025779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Apr 30 04:40:53.025784 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Apr 30 04:40:53.025790 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 04:40:53.025795 kernel: Console: colour dummy device 80x25 Apr 30 04:40:53.025800 kernel: printk: console [tty0] enabled Apr 30 04:40:53.025806 kernel: printk: console [ttyS1] enabled Apr 30 04:40:53.025812 kernel: ACPI: Core revision 20230628 Apr 30 04:40:53.025817 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Apr 30 04:40:53.025822 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 04:40:53.025828 kernel: DMAR: Host address width 39 Apr 30 04:40:53.025833 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Apr 30 04:40:53.025838 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Apr 30 04:40:53.025844 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Apr 30 04:40:53.025849 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Apr 30 04:40:53.025855 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Apr 30 04:40:53.025861 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Apr 30 04:40:53.025866 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Apr 30 04:40:53.025871 kernel: x2apic enabled Apr 30 04:40:53.025877 kernel: APIC: Switched APIC routing to: cluster x2apic Apr 30 04:40:53.025882 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Apr 30 04:40:53.025888 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Apr 30 04:40:53.025893 kernel: CPU0: Thermal monitoring enabled (TM1) Apr 30 04:40:53.025898 kernel: process: using mwait in idle threads Apr 30 04:40:53.025904 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 04:40:53.025910 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 04:40:53.025915 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 04:40:53.025920 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 30 04:40:53.025926 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 30 04:40:53.025931 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Apr 30 04:40:53.025936 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 04:40:53.025941 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Apr 30 04:40:53.025946 kernel: RETBleed: Mitigation: Enhanced IBRS Apr 30 04:40:53.025952 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 04:40:53.025957 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 04:40:53.025963 kernel: TAA: Mitigation: TSX disabled Apr 30 04:40:53.025968 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Apr 30 04:40:53.025974 kernel: SRBDS: Mitigation: Microcode Apr 30 04:40:53.025979 kernel: GDS: Mitigation: Microcode Apr 30 04:40:53.025984 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 04:40:53.025990 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 04:40:53.025995 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 04:40:53.026000 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 30 04:40:53.026005 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 30 04:40:53.026011 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 04:40:53.026016 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 30 04:40:53.026022 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 30 04:40:53.026027 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Apr 30 04:40:53.026033 kernel: Freeing SMP alternatives memory: 32K Apr 30 04:40:53.026038 kernel: pid_max: default: 32768 minimum: 301 Apr 30 04:40:53.026043 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 04:40:53.026048 kernel: landlock: Up and running. Apr 30 04:40:53.026054 kernel: SELinux: Initializing. Apr 30 04:40:53.026059 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.026064 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.026070 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Apr 30 04:40:53.026075 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 04:40:53.026081 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 04:40:53.026087 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 04:40:53.026092 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Apr 30 04:40:53.026098 kernel: ... version: 4 Apr 30 04:40:53.026103 kernel: ... bit width: 48 Apr 30 04:40:53.026108 kernel: ... generic registers: 4 Apr 30 04:40:53.026113 kernel: ... value mask: 0000ffffffffffff Apr 30 04:40:53.026119 kernel: ... max period: 00007fffffffffff Apr 30 04:40:53.026124 kernel: ... fixed-purpose events: 3 Apr 30 04:40:53.026130 kernel: ... event mask: 000000070000000f Apr 30 04:40:53.026136 kernel: signal: max sigframe size: 2032 Apr 30 04:40:53.026141 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Apr 30 04:40:53.026146 kernel: rcu: Hierarchical SRCU implementation. Apr 30 04:40:53.026152 kernel: rcu: Max phase no-delay instances is 400. Apr 30 04:40:53.026157 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Apr 30 04:40:53.026162 kernel: smp: Bringing up secondary CPUs ... Apr 30 04:40:53.026168 kernel: smpboot: x86: Booting SMP configuration: Apr 30 04:40:53.026173 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Apr 30 04:40:53.026180 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 30 04:40:53.026185 kernel: smp: Brought up 1 node, 16 CPUs Apr 30 04:40:53.026190 kernel: smpboot: Max logical packages: 1 Apr 30 04:40:53.026196 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Apr 30 04:40:53.026201 kernel: devtmpfs: initialized Apr 30 04:40:53.026206 kernel: x86/mm: Memory block size: 128MB Apr 30 04:40:53.026212 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Apr 30 04:40:53.026217 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Apr 30 04:40:53.026222 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 04:40:53.026229 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 30 04:40:53.026234 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 04:40:53.026239 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 04:40:53.026245 kernel: audit: initializing netlink subsys (disabled) Apr 30 04:40:53.026250 kernel: audit: type=2000 audit(1745988047.039:1): state=initialized audit_enabled=0 res=1 Apr 30 04:40:53.026257 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 04:40:53.026262 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 04:40:53.026268 kernel: cpuidle: using governor menu Apr 30 04:40:53.026293 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 04:40:53.026299 kernel: dca service started, version 1.12.1 Apr 30 04:40:53.026319 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Apr 30 04:40:53.026324 kernel: PCI: Using configuration type 1 for base access Apr 30 04:40:53.026330 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Apr 30 04:40:53.026335 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 04:40:53.026340 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 04:40:53.026346 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 04:40:53.026351 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 04:40:53.026357 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 04:40:53.026362 kernel: ACPI: Added _OSI(Module Device) Apr 30 04:40:53.026368 kernel: ACPI: Added _OSI(Processor Device) Apr 30 04:40:53.026373 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 04:40:53.026378 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 04:40:53.026384 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Apr 30 04:40:53.026389 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026394 kernel: ACPI: SSDT 0xFFFF957280E5AC00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Apr 30 04:40:53.026400 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026405 kernel: ACPI: SSDT 0xFFFF957281E28000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Apr 30 04:40:53.026411 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026417 kernel: ACPI: SSDT 0xFFFF957280E05200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Apr 30 04:40:53.026422 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026427 kernel: ACPI: SSDT 0xFFFF957281E2B800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Apr 30 04:40:53.026432 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026438 kernel: ACPI: SSDT 0xFFFF957280E70000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Apr 30 04:40:53.026443 kernel: ACPI: Dynamic OEM Table Load: Apr 30 04:40:53.026448 kernel: ACPI: SSDT 0xFFFF957280E5D800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Apr 30 04:40:53.026454 kernel: ACPI: _OSC evaluated successfully for all CPUs Apr 30 04:40:53.026460 kernel: ACPI: Interpreter enabled Apr 30 04:40:53.026465 kernel: ACPI: PM: (supports S0 S5) Apr 30 04:40:53.026470 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 04:40:53.026476 kernel: HEST: Enabling Firmware First mode for corrected errors. Apr 30 04:40:53.026481 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Apr 30 04:40:53.026486 kernel: HEST: Table parsing has been initialized. Apr 30 04:40:53.026491 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Apr 30 04:40:53.026497 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 04:40:53.026502 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 04:40:53.026508 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Apr 30 04:40:53.026514 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Apr 30 04:40:53.026519 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Apr 30 04:40:53.026525 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Apr 30 04:40:53.026530 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Apr 30 04:40:53.026535 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Apr 30 04:40:53.026541 kernel: ACPI: \_TZ_.FN00: New power resource Apr 30 04:40:53.026546 kernel: ACPI: \_TZ_.FN01: New power resource Apr 30 04:40:53.026551 kernel: ACPI: \_TZ_.FN02: New power resource Apr 30 04:40:53.026557 kernel: ACPI: \_TZ_.FN03: New power resource Apr 30 04:40:53.026563 kernel: ACPI: \_TZ_.FN04: New power resource Apr 30 04:40:53.026568 kernel: ACPI: \PIN_: New power resource Apr 30 04:40:53.026573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Apr 30 04:40:53.026646 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 04:40:53.026700 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Apr 30 04:40:53.026747 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Apr 30 04:40:53.026755 kernel: PCI host bridge to bus 0000:00 Apr 30 04:40:53.026806 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 04:40:53.026850 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 04:40:53.026892 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 04:40:53.026934 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Apr 30 04:40:53.026975 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Apr 30 04:40:53.027017 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Apr 30 04:40:53.027077 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Apr 30 04:40:53.027133 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Apr 30 04:40:53.027182 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.027235 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Apr 30 04:40:53.027306 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Apr 30 04:40:53.027375 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Apr 30 04:40:53.027426 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Apr 30 04:40:53.027479 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Apr 30 04:40:53.027526 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Apr 30 04:40:53.027574 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Apr 30 04:40:53.027624 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Apr 30 04:40:53.027673 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Apr 30 04:40:53.027720 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Apr 30 04:40:53.027774 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Apr 30 04:40:53.027822 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 04:40:53.027877 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Apr 30 04:40:53.027925 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 04:40:53.027975 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Apr 30 04:40:53.028023 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Apr 30 04:40:53.028072 kernel: pci 0000:00:16.0: PME# supported from D3hot Apr 30 04:40:53.028123 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Apr 30 04:40:53.028179 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Apr 30 04:40:53.028229 kernel: pci 0000:00:16.1: PME# supported from D3hot Apr 30 04:40:53.028316 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Apr 30 04:40:53.028365 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Apr 30 04:40:53.028414 kernel: pci 0000:00:16.4: PME# supported from D3hot Apr 30 04:40:53.028465 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Apr 30 04:40:53.028513 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Apr 30 04:40:53.028560 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Apr 30 04:40:53.028606 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Apr 30 04:40:53.028654 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Apr 30 04:40:53.028701 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Apr 30 04:40:53.028748 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Apr 30 04:40:53.028798 kernel: pci 0000:00:17.0: PME# supported from D3hot Apr 30 04:40:53.028853 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Apr 30 04:40:53.028902 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.028957 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Apr 30 04:40:53.029009 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029062 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Apr 30 04:40:53.029112 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029163 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Apr 30 04:40:53.029212 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029287 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Apr 30 04:40:53.029359 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.029410 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Apr 30 04:40:53.029458 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 04:40:53.029510 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Apr 30 04:40:53.029564 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Apr 30 04:40:53.029616 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Apr 30 04:40:53.029663 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Apr 30 04:40:53.029717 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Apr 30 04:40:53.029766 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Apr 30 04:40:53.029821 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Apr 30 04:40:53.029871 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Apr 30 04:40:53.029920 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Apr 30 04:40:53.029971 kernel: pci 0000:01:00.0: PME# supported from D3cold Apr 30 04:40:53.030021 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 04:40:53.030070 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Apr 30 04:40:53.030124 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Apr 30 04:40:53.030173 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Apr 30 04:40:53.030222 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Apr 30 04:40:53.030294 kernel: pci 0000:01:00.1: PME# supported from D3cold Apr 30 04:40:53.030360 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 04:40:53.030409 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 04:40:53.030458 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 04:40:53.030507 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 04:40:53.030555 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 04:40:53.030603 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 04:40:53.030657 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Apr 30 04:40:53.030709 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Apr 30 04:40:53.030758 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 30 04:40:53.030806 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Apr 30 04:40:53.030855 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 30 04:40:53.030903 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.030953 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 04:40:53.031001 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 04:40:53.031049 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 04:40:53.031106 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 30 04:40:53.031155 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 30 04:40:53.031205 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 30 04:40:53.031253 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Apr 30 04:40:53.031342 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 30 04:40:53.031391 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 30 04:40:53.031440 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 04:40:53.031490 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 04:40:53.031539 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 04:40:53.031588 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 04:40:53.031644 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Apr 30 04:40:53.031695 kernel: pci 0000:06:00.0: enabling Extended Tags Apr 30 04:40:53.031744 kernel: pci 0000:06:00.0: supports D1 D2 Apr 30 04:40:53.031794 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 04:40:53.031844 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 30 04:40:53.031893 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.031940 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.031992 kernel: pci_bus 0000:07: extended config space not accessible Apr 30 04:40:53.032048 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Apr 30 04:40:53.032100 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Apr 30 04:40:53.032151 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 30 04:40:53.032201 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Apr 30 04:40:53.032257 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 04:40:53.032353 kernel: pci 0000:07:00.0: supports D1 D2 Apr 30 04:40:53.032404 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 04:40:53.032455 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 30 04:40:53.032503 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.032553 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.032561 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 30 04:40:53.032569 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 30 04:40:53.032575 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 30 04:40:53.032580 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 30 04:40:53.032586 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 30 04:40:53.032592 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 30 04:40:53.032598 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 30 04:40:53.032603 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 30 04:40:53.032609 kernel: iommu: Default domain type: Translated Apr 30 04:40:53.032614 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 04:40:53.032621 kernel: PCI: Using ACPI for IRQ routing Apr 30 04:40:53.032627 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 04:40:53.032632 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 30 04:40:53.032639 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Apr 30 04:40:53.032644 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Apr 30 04:40:53.032650 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Apr 30 04:40:53.032655 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 30 04:40:53.032661 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 30 04:40:53.032709 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Apr 30 04:40:53.032763 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Apr 30 04:40:53.032814 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 04:40:53.032823 kernel: vgaarb: loaded Apr 30 04:40:53.032829 kernel: clocksource: Switched to clocksource tsc-early Apr 30 04:40:53.032834 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 04:40:53.032840 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 04:40:53.032846 kernel: pnp: PnP ACPI init Apr 30 04:40:53.032894 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 30 04:40:53.032945 kernel: pnp 00:02: [dma 0 disabled] Apr 30 04:40:53.032992 kernel: pnp 00:03: [dma 0 disabled] Apr 30 04:40:53.033042 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 30 04:40:53.033086 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 30 04:40:53.033133 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 30 04:40:53.033179 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 30 04:40:53.033226 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Apr 30 04:40:53.033271 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 30 04:40:53.033361 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Apr 30 04:40:53.033406 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 30 04:40:53.033451 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Apr 30 04:40:53.033493 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 30 04:40:53.033538 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 30 04:40:53.033588 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 30 04:40:53.033632 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 30 04:40:53.033676 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 30 04:40:53.033718 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 30 04:40:53.033762 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 30 04:40:53.033804 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 30 04:40:53.033848 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 30 04:40:53.033896 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 30 04:40:53.033905 kernel: pnp: PnP ACPI: found 10 devices Apr 30 04:40:53.033911 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 04:40:53.033917 kernel: NET: Registered PF_INET protocol family Apr 30 04:40:53.033922 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 04:40:53.033928 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 30 04:40:53.033934 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 04:40:53.033940 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 04:40:53.033947 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 04:40:53.033953 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 30 04:40:53.033958 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.033964 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 04:40:53.033970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 04:40:53.033975 kernel: NET: Registered PF_XDP protocol family Apr 30 04:40:53.034024 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 30 04:40:53.034071 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 30 04:40:53.034120 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 30 04:40:53.034172 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034221 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034297 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034365 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 04:40:53.034414 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 04:40:53.034462 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 04:40:53.034510 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 04:40:53.034557 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 04:40:53.034608 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 04:40:53.034655 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 04:40:53.034703 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 04:40:53.034750 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 04:40:53.034801 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 
04:40:53.034849 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 04:40:53.034897 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 04:40:53.034945 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 30 04:40:53.034995 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.035044 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035092 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 30 04:40:53.035141 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 04:40:53.035188 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035235 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 30 04:40:53.035324 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 04:40:53.035367 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 04:40:53.035409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 04:40:53.035452 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 30 04:40:53.035493 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 30 04:40:53.035543 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Apr 30 04:40:53.035588 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 04:40:53.035641 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Apr 30 04:40:53.035684 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Apr 30 04:40:53.035733 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 30 04:40:53.035777 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Apr 30 04:40:53.035825 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Apr 30 04:40:53.035868 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035917 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 30 04:40:53.035964 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 30 04:40:53.035972 kernel: PCI: CLS 64 bytes, default 64 Apr 30 04:40:53.035978 kernel: DMAR: No ATSR found Apr 30 04:40:53.035984 kernel: DMAR: No SATC found Apr 30 04:40:53.035989 kernel: DMAR: dmar0: Using Queued invalidation Apr 30 04:40:53.036037 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 30 04:40:53.036086 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 30 04:40:53.036136 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 30 04:40:53.036186 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 30 04:40:53.036234 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 30 04:40:53.036328 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 30 04:40:53.036375 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 30 04:40:53.036423 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 30 04:40:53.036470 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 30 04:40:53.036518 kernel: pci 0000:00:16.1: Adding to iommu group 6 Apr 30 04:40:53.036566 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 30 04:40:53.036614 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 30 04:40:53.036661 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 30 04:40:53.036710 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 30 04:40:53.036759 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 30 04:40:53.036806 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 30 04:40:53.036855 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Apr 30 
04:40:53.036901 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 30 04:40:53.036950 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 30 04:40:53.037000 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 30 04:40:53.037047 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 30 04:40:53.037096 kernel: pci 0000:01:00.0: Adding to iommu group 1 Apr 30 04:40:53.037146 kernel: pci 0000:01:00.1: Adding to iommu group 1 Apr 30 04:40:53.037194 kernel: pci 0000:03:00.0: Adding to iommu group 15 Apr 30 04:40:53.037244 kernel: pci 0000:04:00.0: Adding to iommu group 16 Apr 30 04:40:53.037343 kernel: pci 0000:06:00.0: Adding to iommu group 17 Apr 30 04:40:53.037394 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 30 04:40:53.037404 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 30 04:40:53.037410 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 04:40:53.037416 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Apr 30 04:40:53.037422 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 30 04:40:53.037428 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 30 04:40:53.037434 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 30 04:40:53.037439 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Apr 30 04:40:53.037490 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 30 04:40:53.037501 kernel: Initialise system trusted keyrings Apr 30 04:40:53.037506 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 30 04:40:53.037512 kernel: Key type asymmetric registered Apr 30 04:40:53.037518 kernel: Asymmetric key parser 'x509' registered Apr 30 04:40:53.037523 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 04:40:53.037529 kernel: io scheduler mq-deadline registered Apr 30 04:40:53.037535 kernel: io scheduler kyber registered Apr 30 04:40:53.037540 kernel: io scheduler bfq registered Apr 30 04:40:53.037586 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 30 04:40:53.037637 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Apr 30 04:40:53.037684 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Apr 30 04:40:53.037732 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Apr 30 04:40:53.037779 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Apr 30 04:40:53.037827 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Apr 30 04:40:53.037883 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 30 04:40:53.037892 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Apr 30 04:40:53.037900 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Apr 30 04:40:53.037905 kernel: pstore: Using crash dump compression: deflate Apr 30 04:40:53.037911 kernel: pstore: Registered erst as persistent store backend Apr 30 04:40:53.037917 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 04:40:53.037923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 04:40:53.037929 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 04:40:53.037934 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 04:40:53.037940 kernel: hpet_acpi_add: no address or irqs in _CRS Apr 30 04:40:53.037988 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 30 04:40:53.037998 kernel: i8042: PNP: No PS/2 controller found. 
Apr 30 04:40:53.038041 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 30 04:40:53.038086 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 30 04:40:53.038131 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-04-30T04:40:51 UTC (1745988051) Apr 30 04:40:53.038174 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Apr 30 04:40:53.038182 kernel: intel_pstate: Intel P-state driver initializing Apr 30 04:40:53.038188 kernel: intel_pstate: Disabling energy efficiency optimization Apr 30 04:40:53.038195 kernel: intel_pstate: HWP enabled Apr 30 04:40:53.038201 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Apr 30 04:40:53.038207 kernel: vesafb: scrolling: redraw Apr 30 04:40:53.038212 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Apr 30 04:40:53.038218 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000021dabbe3, using 768k, total 768k Apr 30 04:40:53.038224 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 04:40:53.038230 kernel: fb0: VESA VGA frame buffer device Apr 30 04:40:53.038235 kernel: NET: Registered PF_INET6 protocol family Apr 30 04:40:53.038241 kernel: Segment Routing with IPv6 Apr 30 04:40:53.038248 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 04:40:53.038254 kernel: NET: Registered PF_PACKET protocol family Apr 30 04:40:53.038262 kernel: Key type dns_resolver registered Apr 30 04:40:53.038267 kernel: microcode: Current revision: 0x000000fc Apr 30 04:40:53.038299 kernel: microcode: Updated early from: 0x000000f4 Apr 30 04:40:53.038304 kernel: microcode: Microcode Update Driver: v2.2. Apr 30 04:40:53.038330 kernel: IPI shorthand broadcast: enabled Apr 30 04:40:53.038336 kernel: sched_clock: Marking stable (2487000672, 1378401079)->(4414490257, -549088506) Apr 30 04:40:53.038341 kernel: registered taskstats version 1 Apr 30 04:40:53.038347 kernel: Loading compiled-in X.509 certificates Apr 30 04:40:53.038354 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 04:40:53.038359 kernel: Key type .fscrypt registered Apr 30 04:40:53.038365 kernel: Key type fscrypt-provisioning registered Apr 30 04:40:53.038371 kernel: ima: Allocated hash algorithm: sha1 Apr 30 04:40:53.038376 kernel: ima: No architecture policies found Apr 30 04:40:53.038382 kernel: clk: Disabling unused clocks Apr 30 04:40:53.038387 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 04:40:53.038393 kernel: Write protecting the kernel read-only data: 36864k Apr 30 04:40:53.038400 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 04:40:53.038405 kernel: Run /init as init process Apr 30 04:40:53.038411 kernel: with arguments: Apr 30 04:40:53.038417 kernel: /init Apr 30 04:40:53.038422 kernel: with environment: Apr 30 04:40:53.038428 kernel: HOME=/ Apr 30 04:40:53.038433 kernel: TERM=linux Apr 30 04:40:53.038439 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 04:40:53.038446 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 04:40:53.038454 systemd[1]: Detected architecture x86-64. Apr 30 04:40:53.038460 systemd[1]: Running in initrd. Apr 30 04:40:53.038466 systemd[1]: No hostname configured, using default hostname. 
Apr 30 04:40:53.038472 systemd[1]: Hostname set to . Apr 30 04:40:53.038477 systemd[1]: Initializing machine ID from random generator. Apr 30 04:40:53.038483 systemd[1]: Queued start job for default target initrd.target. Apr 30 04:40:53.038489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 04:40:53.038496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 04:40:53.038502 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 04:40:53.038508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 04:40:53.038514 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 04:40:53.038520 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 04:40:53.038527 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 04:40:53.038533 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Apr 30 04:40:53.038539 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Apr 30 04:40:53.038545 kernel: clocksource: Switched to clocksource tsc Apr 30 04:40:53.038551 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 04:40:53.038557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 04:40:53.038563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 04:40:53.038569 systemd[1]: Reached target paths.target - Path Units. Apr 30 04:40:53.038575 systemd[1]: Reached target slices.target - Slice Units. Apr 30 04:40:53.038581 systemd[1]: Reached target swap.target - Swaps. Apr 30 04:40:53.038587 systemd[1]: Reached target timers.target - Timer Units. Apr 30 04:40:53.038593 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 04:40:53.038599 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 04:40:53.038605 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 04:40:53.038611 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 04:40:53.038617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 04:40:53.038623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 04:40:53.038629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 04:40:53.038635 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 04:40:53.038641 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 04:40:53.038647 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 04:40:53.038653 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 04:40:53.038659 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 04:40:53.038665 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 04:40:53.038681 systemd-journald[267]: Collecting audit messages is disabled. Apr 30 04:40:53.038696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 30 04:40:53.038703 systemd-journald[267]: Journal started Apr 30 04:40:53.038716 systemd-journald[267]: Runtime Journal (/run/log/journal/271bb88c2fdd4a608d2f541669887d36) is 8.0M, max 639.9M, 631.9M free. Apr 30 04:40:53.052081 systemd-modules-load[269]: Inserted module 'overlay' Apr 30 04:40:53.074267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 04:40:53.102850 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 04:40:53.166454 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 04:40:53.166468 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 04:40:53.166477 kernel: Bridge firewalling registered Apr 30 04:40:53.144988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 04:40:53.163753 systemd-modules-load[269]: Inserted module 'br_netfilter' Apr 30 04:40:53.178583 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 04:40:53.189712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 04:40:53.214952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:40:53.249411 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 04:40:53.273485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 04:40:53.277124 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 04:40:53.277825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 04:40:53.283180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 04:40:53.284063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 04:40:53.284796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 04:40:53.285752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 04:40:53.286497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 04:40:53.289628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 04:40:53.302507 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:40:53.304624 systemd-resolved[299]: Positive Trust Anchors: Apr 30 04:40:53.304629 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 04:40:53.304655 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 04:40:53.306391 systemd-resolved[299]: Defaulting to hostname 'linux'. Apr 30 04:40:53.335676 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Apr 30 04:40:53.352827 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 04:40:53.374555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 04:40:53.499899 dracut-cmdline[311]: dracut-dracut-053 Apr 30 04:40:53.508477 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 04:40:53.706288 kernel: SCSI subsystem initialized Apr 30 04:40:53.728260 kernel: Loading iSCSI transport class v2.0-870. Apr 30 04:40:53.751288 kernel: iscsi: registered transport (tcp) Apr 30 04:40:53.782069 kernel: iscsi: registered transport (qla4xxx) Apr 30 04:40:53.782086 kernel: QLogic iSCSI HBA Driver Apr 30 04:40:53.814828 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 04:40:53.827662 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 04:40:53.910183 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 04:40:53.910246 kernel: device-mapper: uevent: version 1.0.3 Apr 30 04:40:53.929818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 04:40:53.988292 kernel: raid6: avx2x4 gen() 53056 MB/s Apr 30 04:40:54.020331 kernel: raid6: avx2x2 gen() 53271 MB/s Apr 30 04:40:54.057042 kernel: raid6: avx2x1 gen() 44020 MB/s Apr 30 04:40:54.057062 kernel: raid6: using algorithm avx2x2 gen() 53271 MB/s Apr 30 04:40:54.105009 kernel: raid6: .... xor() 30344 MB/s, rmw enabled Apr 30 04:40:54.105026 kernel: raid6: using avx2x2 recovery algorithm Apr 30 04:40:54.145261 kernel: xor: automatically using best checksumming function avx Apr 30 04:40:54.258294 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 04:40:54.264449 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 04:40:54.287578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 04:40:54.294325 systemd-udevd[496]: Using default interface naming scheme 'v255'. Apr 30 04:40:54.298345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 04:40:54.335490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 04:40:54.381281 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Apr 30 04:40:54.399218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 04:40:54.410519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 04:40:54.505045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 04:40:54.530272 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 04:40:54.530310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 04:40:54.539219 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 04:40:54.566286 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 04:40:54.566312 kernel: PTP clock support registered Apr 30 04:40:54.579047 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 30 04:40:54.579153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:40:54.585262 kernel: libata version 3.00 loaded. Apr 30 04:40:54.607885 kernel: ACPI: bus type USB registered Apr 30 04:40:54.607904 kernel: usbcore: registered new interface driver usbfs Apr 30 04:40:54.617340 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 04:40:54.647350 kernel: usbcore: registered new interface driver hub Apr 30 04:40:54.647411 kernel: usbcore: registered new device driver usb Apr 30 04:40:54.667629 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 04:40:54.667668 kernel: AES CTR mode by8 optimization enabled Apr 30 04:40:54.677314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 04:40:54.677370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:40:54.695375 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 04:40:54.713787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 04:40:54.741486 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 04:40:55.888825 kernel: ahci 0000:00:17.0: version 3.0 Apr 30 04:40:55.888954 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 04:40:55.889055 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Apr 30 04:40:55.889234 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 30 04:40:55.889327 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Apr 30 04:40:55.889428 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 30 04:40:55.889534 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 04:40:55.889642 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 30 04:40:55.889720 kernel: scsi host0: ahci Apr 30 04:40:55.889785 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 04:40:55.889852 kernel: scsi host1: ahci Apr 30 04:40:55.889914 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 30 04:40:55.889984 kernel: scsi host2: ahci Apr 30 04:40:55.890050 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 30 04:40:55.890113 kernel: scsi host3: ahci Apr 30 04:40:55.890175 kernel: hub 1-0:1.0: USB hub found Apr 30 04:40:55.890254 kernel: scsi host4: ahci Apr 30 04:40:55.890326 kernel: hub 1-0:1.0: 16 ports detected Apr 30 04:40:55.890397 kernel: scsi host5: ahci Apr 30 04:40:55.890482 kernel: hub 2-0:1.0: USB hub found Apr 30 04:40:55.890599 kernel: scsi host6: ahci Apr 30 04:40:55.890670 kernel: hub 2-0:1.0: 10 ports detected Apr 30 04:40:55.890738 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Apr 30 04:40:55.890749 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 30 04:40:55.890757 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Apr 30 04:40:55.890765 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Apr 30 04:40:55.890772 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Apr 30 04:40:55.890779 kernel: igb 0000:03:00.0: added PHC on eth0 Apr 30 04:40:55.890848 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Apr 30 04:40:55.890856 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 04:40:55.890921 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Apr 30 04:40:55.890929 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:b6 Apr 30 04:40:55.890993 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Apr 30 04:40:55.891001 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Apr 30 04:40:55.891063 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Apr 30 04:40:55.891071 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 30 04:40:55.891132 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Apr 30 04:40:55.891196 kernel: igb 0000:04:00.0: added PHC on eth1 Apr 30 04:40:55.891266 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Apr 30 04:40:55.891330 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 04:40:55.891399 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 30 04:40:55.891516 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:b7 Apr 30 04:40:55.891635 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Apr 30 04:40:55.891717 kernel: hub 1-14:1.0: USB hub found Apr 30 04:40:55.891790 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 30 04:40:55.891851 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:55.891859 kernel: hub 1-14:1.0: 4 ports detected Apr 30 04:40:55.891927 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 04:40:55.891935 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 04:40:55.891996 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 04:40:55.892004 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Apr 30 04:40:56.026074 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026085 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 04:40:56.026162 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Apr 30 04:40:56.026175 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026183 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026190 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 30 04:40:56.026305 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 04:40:56.026314 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Apr 30 04:40:56.026321 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 04:40:56.026329 kernel: ata1.00: Features: NCQ-prio Apr 30 04:40:56.026336 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 04:40:56.026346 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Apr 30 04:40:56.026416 kernel: ata2.00: Features: NCQ-prio Apr 30 04:40:56.026424 kernel: ata1.00: configured for UDMA/133 Apr 30 04:40:56.026432 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Apr 
30 04:40:56.026496 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Apr 30 04:40:56.504293 kernel: ata2.00: configured for UDMA/133 Apr 30 04:40:56.504311 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 04:40:56.504348 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Apr 30 04:40:56.504518 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Apr 30 04:40:56.504612 kernel: usbcore: registered new interface driver usbhid Apr 30 04:40:56.504621 kernel: usbhid: USB HID core driver Apr 30 04:40:56.504629 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 30 04:40:56.504636 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Apr 30 04:40:56.504735 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.504750 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 04:40:56.504764 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 04:40:56.504869 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Apr 30 04:40:56.504970 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 04:40:56.505057 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Apr 30 04:40:56.505142 kernel: sd 1:0:0:0: [sda] Write Protect is off Apr 30 04:40:56.505229 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Apr 30 04:40:56.505311 kernel: sd 0:0:0:0: [sdb] Write Protect is off Apr 30 04:40:56.505382 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 30 04:40:56.505448 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 04:40:56.505530 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 04:40:56.505606 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Apr 30 04:40:56.505681 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 30 04:40:56.505773 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 30 04:40:56.505784 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 30 04:40:56.505859 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 04:40:56.505871 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Apr 30 04:40:56.505936 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 04:40:56.506013 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Apr 30 04:40:56.506074 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.506083 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 04:40:56.506091 kernel: GPT:9289727 != 937703087 Apr 30 04:40:56.506098 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 04:40:56.506105 kernel: GPT:9289727 != 937703087 Apr 30 04:40:56.506114 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 04:40:56.506121 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:56.506128 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Apr 30 04:40:56.506189 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Apr 30 04:40:55.609082 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 30 04:40:56.603800 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (553) Apr 30 04:40:56.603815 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Apr 30 04:40:56.603908 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (561) Apr 30 04:40:55.888824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 04:40:55.888907 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 04:40:55.947393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 04:40:56.012470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:40:56.043491 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 04:40:56.369164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 04:40:56.546259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Apr 30 04:40:56.607501 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Apr 30 04:40:56.611813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Apr 30 04:40:56.615650 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Apr 30 04:40:56.629500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Apr 30 04:40:56.652659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:40:56.696729 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 04:40:56.738382 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.738398 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:56.738459 disk-uuid[719]: Primary Header is updated. Apr 30 04:40:56.738459 disk-uuid[719]: Secondary Entries is updated. Apr 30 04:40:56.738459 disk-uuid[719]: Secondary Header is updated. Apr 30 04:40:56.791342 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.791355 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:56.791363 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:56.818302 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:57.797175 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 04:40:57.816926 disk-uuid[720]: The operation has completed successfully. Apr 30 04:40:57.826391 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 04:40:57.853560 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 04:40:57.853624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 04:40:57.892502 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 04:40:57.929382 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 04:40:57.929440 sh[738]: Success Apr 30 04:40:57.962711 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 04:40:57.989441 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 04:40:57.997657 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 04:40:58.055326 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 04:40:58.055347 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:40:58.076106 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 04:40:58.094246 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 04:40:58.111314 kernel: BTRFS info (device dm-0): using free space tree Apr 30 04:40:58.148306 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 04:40:58.149082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 04:40:58.157777 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 04:40:58.165606 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 04:40:58.188695 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 04:40:58.228273 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:40:58.228294 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:40:58.245665 kernel: BTRFS info (device sdb6): using free space tree Apr 30 04:40:58.280197 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 04:40:58.338517 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 04:40:58.338535 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 04:40:58.338543 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:40:58.327625 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 04:40:58.360486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 04:40:58.371041 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 04:40:58.427903 ignition[920]: Ignition 2.19.0 Apr 30 04:40:58.427908 ignition[920]: Stage: fetch-offline Apr 30 04:40:58.430099 unknown[920]: fetched base config from "system" Apr 30 04:40:58.427931 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 30 04:40:58.430103 unknown[920]: fetched user config from "system" Apr 30 04:40:58.427937 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:40:58.430739 systemd-networkd[922]: lo: Link UP Apr 30 04:40:58.427989 ignition[920]: parsed url from cmdline: "" Apr 30 04:40:58.430741 systemd-networkd[922]: lo: Gained carrier Apr 30 04:40:58.427991 ignition[920]: no config URL provided Apr 30 04:40:58.431015 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 04:40:58.427994 ignition[920]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 04:40:58.433036 systemd-networkd[922]: Enumeration completed Apr 30 04:40:58.428016 ignition[920]: parsing config with SHA512: 52390592a6561c3e7e03ff11e67fc20a2533d35e23a56bfdf2da69f22fcf0ab2057fb6db72041664619ba82d0e4b4fb516e2623dbc85564656b13a2cbac60527 Apr 30 04:40:58.433484 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 04:40:58.430328 ignition[920]: fetch-offline: fetch-offline passed Apr 30 04:40:58.433876 systemd-networkd[922]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 04:40:58.430331 ignition[920]: POST message to Packet Timeline Apr 30 04:40:58.459800 systemd[1]: Reached target network.target - Network. Apr 30 04:40:58.430333 ignition[920]: POST Status error: resource requires networking Apr 30 04:40:58.462767 systemd-networkd[922]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 04:40:58.430368 ignition[920]: Ignition finished successfully Apr 30 04:40:58.466487 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 04:40:58.490531 ignition[935]: Ignition 2.19.0 Apr 30 04:40:58.473580 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 04:40:58.677445 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Apr 30 04:40:58.490538 ignition[935]: Stage: kargs Apr 30 04:40:58.490925 systemd-networkd[922]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 04:40:58.490649 ignition[935]: no configs at "/usr/lib/ignition/base.d" Apr 30 04:40:58.669196 systemd-networkd[922]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 04:40:58.490655 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:40:58.491189 ignition[935]: kargs: kargs passed Apr 30 04:40:58.491192 ignition[935]: POST message to Packet Timeline Apr 30 04:40:58.491202 ignition[935]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:40:58.491723 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51940->[::1]:53: read: connection refused Apr 30 04:40:58.691831 ignition[935]: GET https://metadata.packet.net/metadata: attempt #2 Apr 30 04:40:58.692447 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56842->[::1]:53: read: connection refused Apr 30 04:40:58.859300 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 30 04:40:58.860636 systemd-networkd[922]: eno1: Link UP Apr 30 04:40:58.860773 systemd-networkd[922]: eno2: Link UP Apr 30 04:40:58.860900 systemd-networkd[922]: enp1s0f0np0: Link UP Apr 30 04:40:58.861049 systemd-networkd[922]: enp1s0f0np0: Gained carrier Apr 30 04:40:58.876519 systemd-networkd[922]: enp1s0f1np1: Link UP Apr 30 04:40:58.908506 systemd-networkd[922]: enp1s0f0np0: DHCPv4 address 147.75.90.169/31, gateway 147.75.90.168 acquired from 145.40.83.140 Apr 30 04:40:59.093271 ignition[935]: GET https://metadata.packet.net/metadata: attempt #3 Apr 30 04:40:59.094347 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56785->[::1]:53: read: connection refused Apr 30 04:40:59.690048 systemd-networkd[922]: enp1s0f1np1: Gained carrier Apr 30 04:40:59.894780 ignition[935]: GET https://metadata.packet.net/metadata: attempt #4 Apr 30 04:40:59.895988 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39603->[::1]:53: read: connection refused Apr 30 04:41:00.265863 systemd-networkd[922]: enp1s0f0np0: Gained IPv6LL Apr 30 04:41:00.969879 systemd-networkd[922]: enp1s0f1np1: Gained IPv6LL Apr 30 04:41:01.497386 ignition[935]: GET https://metadata.packet.net/metadata: attempt #5 Apr 30 04:41:01.498473 ignition[935]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on 
[::1]:53: read udp [::1]:53698->[::1]:53: read: connection refused Apr 30 04:41:04.700988 ignition[935]: GET https://metadata.packet.net/metadata: attempt #6 Apr 30 04:41:05.699461 ignition[935]: GET result: OK Apr 30 04:41:06.106492 ignition[935]: Ignition finished successfully Apr 30 04:41:06.110750 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 04:41:06.132535 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 04:41:06.142714 ignition[953]: Ignition 2.19.0 Apr 30 04:41:06.142721 ignition[953]: Stage: disks Apr 30 04:41:06.142941 ignition[953]: no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:06.142950 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:06.143650 ignition[953]: disks: disks passed Apr 30 04:41:06.143653 ignition[953]: POST message to Packet Timeline Apr 30 04:41:06.143665 ignition[953]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:07.112546 ignition[953]: GET result: OK Apr 30 04:41:07.449517 ignition[953]: Ignition finished successfully Apr 30 04:41:07.451106 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 04:41:07.468456 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 04:41:07.487597 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 04:41:07.509672 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 04:41:07.521828 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 04:41:07.549642 systemd[1]: Reached target basic.target - Basic System. Apr 30 04:41:07.583546 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 04:41:07.617201 systemd-fsck[970]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 04:41:07.628633 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 04:41:07.651492 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 04:41:07.748849 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 04:41:07.764485 kernel: EXT4-fs (sdb9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 04:41:07.757684 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 04:41:07.785585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 04:41:07.794029 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 04:41:07.815260 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (980) Apr 30 04:41:07.845459 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:41:07.845475 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:41:07.846014 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 04:41:07.908371 kernel: BTRFS info (device sdb6): using free space tree Apr 30 04:41:07.908382 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 04:41:07.908390 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 04:41:07.918536 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Apr 30 04:41:07.919576 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 30 04:41:07.919593 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 04:41:07.985486 coreos-metadata[982]: Apr 30 04:41:07.966 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 04:41:07.939394 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 04:41:08.014370 coreos-metadata[998]: Apr 30 04:41:07.966 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 04:41:07.975555 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 04:41:08.008526 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 04:41:08.054506 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 04:41:08.064350 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory Apr 30 04:41:08.074390 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 04:41:08.084398 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 04:41:08.091867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 04:41:08.126520 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 04:41:08.165469 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:41:08.155924 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 04:41:08.174048 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 04:41:08.189388 ignition[1104]: INFO : Ignition 2.19.0 Apr 30 04:41:08.189388 ignition[1104]: INFO : Stage: mount Apr 30 04:41:08.189388 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:08.189388 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:08.189388 ignition[1104]: INFO : mount: mount passed Apr 30 04:41:08.189388 ignition[1104]: INFO : POST message to Packet Timeline Apr 30 04:41:08.189388 ignition[1104]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:08.190358 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 04:41:08.837671 coreos-metadata[998]: Apr 30 04:41:08.837 INFO Fetch successful Apr 30 04:41:08.872145 systemd[1]: flatcar-static-network.service: Deactivated successfully. Apr 30 04:41:08.872215 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Apr 30 04:41:08.934866 coreos-metadata[982]: Apr 30 04:41:08.934 INFO Fetch successful Apr 30 04:41:08.967270 coreos-metadata[982]: Apr 30 04:41:08.967 INFO wrote hostname ci-4081.3.3-a-671b97f93d to /sysroot/etc/hostname Apr 30 04:41:08.968815 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 04:41:09.241203 ignition[1104]: INFO : GET result: OK Apr 30 04:41:09.593027 ignition[1104]: INFO : Ignition finished successfully Apr 30 04:41:09.595770 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 04:41:09.633776 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 04:41:09.650542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 04:41:09.691262 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1129) Apr 30 04:41:09.719568 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 04:41:09.719585 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 04:41:09.736307 kernel: BTRFS info (device sdb6): using free space tree Apr 30 04:41:09.772867 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 04:41:09.772883 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 04:41:09.785206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 04:41:09.822082 ignition[1146]: INFO : Ignition 2.19.0 Apr 30 04:41:09.822082 ignition[1146]: INFO : Stage: files Apr 30 04:41:09.836525 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:09.836525 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:09.836525 ignition[1146]: DEBUG : files: compiled without relabeling support, skipping Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 04:41:09.836525 ignition[1146]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 04:41:09.836525 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 04:41:09.826022 unknown[1146]: wrote ssh authorized keys file for user: core Apr 30 04:41:09.971521 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 04:41:10.245349 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 04:41:10.245349 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:10.278561 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 04:41:10.823931 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 04:41:11.053189 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 04:41:11.053189 ignition[1146]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 04:41:11.084500 ignition[1146]: INFO : files: files passed Apr 30 04:41:11.084500 ignition[1146]: INFO : POST message to Packet Timeline Apr 30 04:41:11.084500 ignition[1146]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:12.039659 ignition[1146]: INFO : GET result: OK Apr 30 04:41:12.398080 ignition[1146]: INFO : Ignition finished successfully Apr 30 04:41:12.401302 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 04:41:12.434527 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 04:41:12.434945 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 04:41:12.463758 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 04:41:12.463839 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 30 04:41:12.503723 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 04:41:12.503723 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 04:41:12.517636 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 04:41:12.505868 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 04:41:12.542754 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 04:41:12.582429 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 04:41:12.626150 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 04:41:12.626198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 04:41:12.644752 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 04:41:12.655530 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 04:41:12.682566 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 04:41:12.697491 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 04:41:12.749350 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 04:41:12.775514 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 04:41:12.791117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 04:41:12.798551 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 04:41:12.829672 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 04:41:12.848965 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 04:41:12.849385 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 04:41:12.877080 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 04:41:12.898832 systemd[1]: Stopped target basic.target - Basic System. Apr 30 04:41:12.916855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 04:41:12.935833 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 04:41:12.958987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 04:41:12.979875 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 04:41:12.999976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 04:41:13.020905 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 04:41:13.041892 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 04:41:13.061865 systemd[1]: Stopped target swap.target - Swaps. Apr 30 04:41:13.081866 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 04:41:13.082294 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 04:41:13.118748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 04:41:13.128870 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 04:41:13.149746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Apr 30 04:41:13.150193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 04:41:13.173764 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 04:41:13.174160 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 04:41:13.205835 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 04:41:13.206314 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 04:41:13.226086 systemd[1]: Stopped target paths.target - Path Units. Apr 30 04:41:13.244734 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 04:41:13.245171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 04:41:13.266993 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 04:41:13.285985 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 04:41:13.305948 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 04:41:13.306253 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 04:41:13.325910 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 04:41:13.326204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 04:41:13.348948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 04:41:13.470491 ignition[1211]: INFO : Ignition 2.19.0 Apr 30 04:41:13.470491 ignition[1211]: INFO : Stage: umount Apr 30 04:41:13.470491 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 04:41:13.470491 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 04:41:13.470491 ignition[1211]: INFO : umount: umount passed Apr 30 04:41:13.470491 ignition[1211]: INFO : POST message to Packet Timeline Apr 30 04:41:13.470491 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 04:41:13.349366 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 04:41:13.369960 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 04:41:13.370356 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 04:41:13.387936 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 04:41:13.388340 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 04:41:13.423526 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 04:41:13.438392 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 04:41:13.438609 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 04:41:13.470581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 04:41:13.478481 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 04:41:13.478831 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 04:41:13.486078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 04:41:13.486507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 04:41:13.535728 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 04:41:13.536102 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 04:41:13.536151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 04:41:13.545499 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 30 04:41:13.545553 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 04:41:14.268678 ignition[1211]: INFO : GET result: OK Apr 30 04:41:14.703641 ignition[1211]: INFO : Ignition finished successfully Apr 30 04:41:14.706195 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 04:41:14.706474 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 04:41:14.724570 systemd[1]: Stopped target network.target - Network. Apr 30 04:41:14.739513 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 04:41:14.739774 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 04:41:14.757786 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 04:41:14.757948 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 04:41:14.775748 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 04:41:14.775906 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 04:41:14.783918 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 04:41:14.784081 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 04:41:14.811759 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 04:41:14.811927 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 04:41:14.820305 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 04:41:14.829398 systemd-networkd[922]: enp1s0f1np1: DHCPv6 lease lost Apr 30 04:41:14.836494 systemd-networkd[922]: enp1s0f0np0: DHCPv6 lease lost Apr 30 04:41:14.846844 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 04:41:14.865427 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 04:41:14.865698 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 04:41:14.884712 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 04:41:14.885071 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 04:41:14.905078 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 04:41:14.905298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 04:41:14.935450 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 04:41:14.961406 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 04:41:14.961449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 04:41:14.980519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 04:41:14.980606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 04:41:14.998695 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 04:41:14.998857 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 04:41:15.018744 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 04:41:15.018910 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 04:41:15.038875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 04:41:15.060541 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 04:41:15.060908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 04:41:15.089774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Apr 30 04:41:15.089811 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 04:41:15.116371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 04:41:15.116400 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 04:41:15.136486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 04:41:15.136572 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 04:41:15.167446 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 04:41:15.167615 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 04:41:15.205439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 04:41:15.205598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 04:41:15.252454 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 04:41:15.275415 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 04:41:15.482428 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Apr 30 04:41:15.275562 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 04:41:15.297553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 04:41:15.297678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:41:15.319572 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 04:41:15.319803 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 04:41:15.349113 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 04:41:15.349397 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 04:41:15.362593 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 04:41:15.399701 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 04:41:15.422943 systemd[1]: Switching root. Apr 30 04:41:15.576409 systemd-journald[267]: Journal stopped Apr 30 04:41:18.172654 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 04:41:18.172669 kernel: SELinux: policy capability open_perms=1 Apr 30 04:41:18.172676 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 04:41:18.172683 kernel: SELinux: policy capability always_check_network=0 Apr 30 04:41:18.172688 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 04:41:18.172693 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 04:41:18.172699 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 04:41:18.172704 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 04:41:18.172709 kernel: audit: type=1403 audit(1745988075.785:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 04:41:18.172716 systemd[1]: Successfully loaded SELinux policy in 157.700ms. Apr 30 04:41:18.172724 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.892ms. Apr 30 04:41:18.172730 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 04:41:18.172736 systemd[1]: Detected architecture x86-64. Apr 30 04:41:18.172742 systemd[1]: Detected first boot. 
Apr 30 04:41:18.172748 systemd[1]: Hostname set to . Apr 30 04:41:18.172755 systemd[1]: Initializing machine ID from random generator. Apr 30 04:41:18.172763 zram_generator::config[1262]: No configuration found. Apr 30 04:41:18.172770 systemd[1]: Populated /etc with preset unit settings. Apr 30 04:41:18.172776 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 04:41:18.172782 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 04:41:18.172788 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 04:41:18.172795 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 04:41:18.172802 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 04:41:18.172808 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 04:41:18.172814 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 04:41:18.172821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 04:41:18.172827 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 04:41:18.172833 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 04:41:18.172839 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 04:41:18.172847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 04:41:18.172853 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 04:41:18.172860 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 04:41:18.172866 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 04:41:18.172872 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 04:41:18.172878 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 04:41:18.172885 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Apr 30 04:41:18.172891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 04:41:18.172898 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 04:41:18.172904 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 04:41:18.172911 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 04:41:18.172918 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 04:41:18.172925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 04:41:18.172932 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 04:41:18.172938 systemd[1]: Reached target slices.target - Slice Units. Apr 30 04:41:18.172945 systemd[1]: Reached target swap.target - Swaps. Apr 30 04:41:18.172952 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 04:41:18.172958 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 04:41:18.172965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 04:41:18.172971 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Apr 30 04:41:18.172977 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 04:41:18.172985 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 04:41:18.172992 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 04:41:18.172998 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 04:41:18.173005 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 04:41:18.173011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:18.173018 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 04:41:18.173024 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 04:41:18.173032 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 04:41:18.173040 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 04:41:18.173047 systemd[1]: Reached target machines.target - Containers. Apr 30 04:41:18.173053 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 04:41:18.173060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 04:41:18.173066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 04:41:18.173073 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 04:41:18.173079 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 04:41:18.173086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 04:41:18.173093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 04:41:18.173100 kernel: ACPI: bus type drm_connector registered Apr 30 04:41:18.173106 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 04:41:18.173112 kernel: fuse: init (API version 7.39) Apr 30 04:41:18.173118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 04:41:18.173125 kernel: loop: module loaded Apr 30 04:41:18.173131 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 04:41:18.173137 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 04:41:18.173145 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 04:41:18.173151 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 04:41:18.173158 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 04:41:18.173164 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 04:41:18.173178 systemd-journald[1365]: Collecting audit messages is disabled. Apr 30 04:41:18.173194 systemd-journald[1365]: Journal started Apr 30 04:41:18.173208 systemd-journald[1365]: Runtime Journal (/run/log/journal/12112a6641004ffeb19c916b5943eb1b) is 8.0M, max 639.9M, 631.9M free. Apr 30 04:41:16.298761 systemd[1]: Queued start job for default target multi-user.target. Apr 30 04:41:16.314632 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Apr 30 04:41:16.314889 systemd[1]: systemd-journald.service: Deactivated successfully. 
Apr 30 04:41:18.201291 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 04:41:18.237316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 04:41:18.271307 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 04:41:18.305316 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 04:41:18.340003 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 04:41:18.340040 systemd[1]: Stopped verity-setup.service. Apr 30 04:41:18.402305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:18.423442 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 04:41:18.432798 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 04:41:18.442511 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 04:41:18.452518 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 04:41:18.462515 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 04:41:18.472474 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 04:41:18.482505 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 04:41:18.492598 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 04:41:18.503649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 04:41:18.514811 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 04:41:18.514990 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 04:41:18.527149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 04:41:18.527712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 04:41:18.539235 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 04:41:18.539606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 04:41:18.551145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 04:41:18.551514 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 04:41:18.563130 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 04:41:18.563491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 04:41:18.575117 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 04:41:18.575476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 04:41:18.587116 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 04:41:18.598112 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 04:41:18.610099 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 04:41:18.622087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 04:41:18.641767 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 04:41:18.664471 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 04:41:18.675204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Apr 30 04:41:18.685376 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 04:41:18.685406 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 04:41:18.696794 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 04:41:18.725510 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 04:41:18.737083 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 04:41:18.746511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 04:41:18.755322 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 04:41:18.766461 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 04:41:18.777395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 04:41:18.778088 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 04:41:18.780877 systemd-journald[1365]: Time spent on flushing to /var/log/journal/12112a6641004ffeb19c916b5943eb1b is 14.784ms for 1368 entries. Apr 30 04:41:18.780877 systemd-journald[1365]: System Journal (/var/log/journal/12112a6641004ffeb19c916b5943eb1b) is 8.0M, max 195.6M, 187.6M free. Apr 30 04:41:18.820559 systemd-journald[1365]: Received client request to flush runtime journal. Apr 30 04:41:18.795401 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 04:41:18.796029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 04:41:18.831465 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 04:41:18.849080 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 04:41:18.857294 kernel: loop0: detected capacity change from 0 to 140768 Apr 30 04:41:18.882420 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 04:41:18.900324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 04:41:18.903262 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 04:41:18.914426 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 04:41:18.925463 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 04:41:18.936494 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 04:41:18.947472 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 04:41:18.964480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 04:41:18.969262 kernel: loop1: detected capacity change from 0 to 210664 Apr 30 04:41:18.978468 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 04:41:18.991523 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 04:41:19.025498 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 04:41:19.037284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
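The flush above is systemd-journal-flush.service handing the runtime journal in /run/log/journal over to persistent storage under /var/log/journal. For reference, the same flush can be requested by hand, and the size limits visible in these messages are tunable in journald.conf; the values below are illustrative, not read from this host:

    # ask journald to move /run/log/journal over to /var/log/journal now
    journalctl --flush

    # /etc/systemd/journald.conf (illustrative limits)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=64M
    SystemMaxUse=195M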
Apr 30 04:41:19.058947 udevadm[1403]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 04:41:19.059263 kernel: loop2: detected capacity change from 0 to 8 Apr 30 04:41:19.059843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 04:41:19.060165 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 04:41:19.070765 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. Apr 30 04:41:19.070776 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. Apr 30 04:41:19.073070 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 04:41:19.112917 ldconfig[1392]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 04:41:19.114647 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 04:41:19.116314 kernel: loop3: detected capacity change from 0 to 142488 Apr 30 04:41:19.195311 kernel: loop4: detected capacity change from 0 to 140768 Apr 30 04:41:19.201592 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 04:41:19.227316 kernel: loop5: detected capacity change from 0 to 210664 Apr 30 04:41:19.240520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 04:41:19.258302 kernel: loop6: detected capacity change from 0 to 8 Apr 30 04:41:19.266876 systemd-udevd[1424]: Using default interface naming scheme 'v255'. Apr 30 04:41:19.278322 kernel: loop7: detected capacity change from 0 to 142488 Apr 30 04:41:19.281959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 04:41:19.294350 (sd-merge)[1422]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Apr 30 04:41:19.294720 (sd-merge)[1422]: Merged extensions into '/usr'. Apr 30 04:41:19.306132 systemd[1]: Reloading requested from client PID 1399 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 04:41:19.306140 systemd[1]: Reloading... Apr 30 04:41:19.308267 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Apr 30 04:41:19.308319 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1487) Apr 30 04:41:19.308337 kernel: ACPI: button: Sleep Button [SLPB] Apr 30 04:41:19.352192 zram_generator::config[1533]: No configuration found. Apr 30 04:41:19.352291 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 04:41:19.414268 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 04:41:19.414324 kernel: ACPI: button: Power Button [PWRF] Apr 30 04:41:19.465652 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Apr 30 04:41:19.497008 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Apr 30 04:41:19.497168 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Apr 30 04:41:19.471489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 04:41:19.505298 kernel: iTCO_vendor_support: vendor-support=0 Apr 30 04:41:19.505326 kernel: IPMI message handler: version 39.2 Apr 30 04:41:19.528036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. 
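The (sd-merge) lines are systemd-sysext overlaying the listed extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet') onto /usr; images are typically picked up from /etc/extensions, /run/extensions and /var/lib/extensions. As a rough sketch of how this can be inspected or redone on a running host (commands only, output will differ):

    systemd-sysext status     # list known extension images and whether they are merged
    systemd-sysext refresh    # unmerge and re-merge after adding or removing an image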
Apr 30 04:41:19.539260 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Apr 30 04:41:19.539370 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Apr 30 04:41:19.555388 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Apr 30 04:41:19.555539 systemd[1]: Reloading finished in 249 ms. Apr 30 04:41:19.568263 kernel: ipmi device interface Apr 30 04:41:19.615553 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Apr 30 04:41:19.622388 kernel: ipmi_si: IPMI System Interface driver Apr 30 04:41:19.622402 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Apr 30 04:41:19.622473 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Apr 30 04:41:19.706981 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Apr 30 04:41:19.706995 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Apr 30 04:41:19.707004 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Apr 30 04:41:19.776847 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Apr 30 04:41:19.776931 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Apr 30 04:41:19.777003 kernel: ipmi_si: Adding ACPI-specified kcs state machine Apr 30 04:41:19.777015 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Apr 30 04:41:19.801227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 04:41:19.830464 kernel: intel_rapl_common: Found RAPL domain package Apr 30 04:41:19.830500 kernel: intel_rapl_common: Found RAPL domain core Apr 30 04:41:19.830521 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Apr 30 04:41:19.830647 kernel: intel_rapl_common: Found RAPL domain dram Apr 30 04:41:19.878265 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Apr 30 04:41:19.920389 systemd[1]: Starting ensure-sysext.service... Apr 30 04:41:19.927923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 04:41:19.939245 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 04:41:19.948914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 04:41:19.949474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 04:41:19.950752 systemd[1]: Reloading requested from client PID 1602 ('systemctl') (unit ensure-sysext.service)... Apr 30 04:41:19.950759 systemd[1]: Reloading... Apr 30 04:41:19.983264 zram_generator::config[1636]: No configuration found. Apr 30 04:41:20.004783 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 04:41:20.004996 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 04:41:20.005498 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 04:41:20.005666 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Apr 30 04:41:20.005703 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Apr 30 04:41:20.007282 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 30 04:41:20.007286 systemd-tmpfiles[1609]: Skipping /boot Apr 30 04:41:20.011373 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 04:41:20.011377 systemd-tmpfiles[1609]: Skipping /boot Apr 30 04:41:20.038040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 04:41:20.046262 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Apr 30 04:41:20.064263 kernel: ipmi_ssif: IPMI SSIF Interface driver Apr 30 04:41:20.094413 systemd[1]: Reloading finished in 143 ms. Apr 30 04:41:20.108796 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 04:41:20.126539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 04:41:20.137527 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 04:41:20.148488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 04:41:20.174460 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 04:41:20.185230 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 04:41:20.191155 augenrules[1719]: No rules Apr 30 04:41:20.197135 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 04:41:20.209051 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 04:41:20.215385 lvm[1724]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 04:41:20.221561 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 04:41:20.233063 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 04:41:20.246227 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 04:41:20.256943 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 04:41:20.266661 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 04:41:20.276699 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 04:41:20.288601 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 04:41:20.300539 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 04:41:20.302309 systemd-networkd[1607]: lo: Link UP Apr 30 04:41:20.302313 systemd-networkd[1607]: lo: Gained carrier Apr 30 04:41:20.304751 systemd-networkd[1607]: bond0: netdev ready Apr 30 04:41:20.305706 systemd-networkd[1607]: Enumeration completed Apr 30 04:41:20.308569 systemd-networkd[1607]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:20.network. Apr 30 04:41:20.312446 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 04:41:20.326953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 04:41:20.336375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:20.336520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
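The "Duplicate line" warnings above come from systemd-tmpfiles finding the same path declared in more than one tmpfiles.d fragment; the later declaration is simply ignored, so they are harmless. A tmpfiles.d entry is one whitespace-separated line (type, path, mode, user, group, age, argument), for example (an illustrative entry, not one of the files named above):

    # /etc/tmpfiles.d/example.conf
    d /run/example 0755 root root - -

    # apply without rebooting
    systemd-tmpfiles --create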
Apr 30 04:41:20.337327 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 04:41:20.348896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 04:41:20.350804 lvm[1742]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 04:41:20.366045 systemd-resolved[1726]: Positive Trust Anchors: Apr 30 04:41:20.366051 systemd-resolved[1726]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 04:41:20.366075 systemd-resolved[1726]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 04:41:20.368693 systemd-resolved[1726]: Using system hostname 'ci-4081.3.3-a-671b97f93d'. Apr 30 04:41:20.377748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 04:41:20.389906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 04:41:20.400336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 04:41:20.408785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 04:41:20.421075 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 04:41:20.430452 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 04:41:20.430557 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:20.431951 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 04:41:20.444844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 04:41:20.444942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 04:41:20.456914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 04:41:20.457039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 04:41:20.471124 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 04:41:20.471303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 04:41:20.482789 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 04:41:20.494906 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 04:41:20.517334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:20.517575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 30 04:41:20.523316 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Apr 30 04:41:20.556296 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Apr 30 04:41:20.556373 systemd-networkd[1607]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f6:21.network. Apr 30 04:41:20.558004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 04:41:20.567931 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 04:41:20.579982 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 04:41:20.589403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 04:41:20.589474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 04:41:20.589521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:20.590082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 04:41:20.590171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 04:41:20.602597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 04:41:20.602670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 04:41:20.613561 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 04:41:20.613630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 04:41:20.625593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:20.625716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 04:41:20.642707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 04:41:20.654391 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 04:41:20.662940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 04:41:20.675906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 04:41:20.685699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 04:41:20.686051 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 04:41:20.686332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 04:41:20.689313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 04:41:20.689656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 04:41:20.705033 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 04:41:20.705382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 04:41:20.724291 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 30 04:41:20.742538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 30 04:41:20.742890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 04:41:20.756883 systemd-networkd[1607]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Apr 30 04:41:20.757277 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Apr 30 04:41:20.759003 systemd-networkd[1607]: enp1s0f0np0: Link UP Apr 30 04:41:20.759571 systemd-networkd[1607]: enp1s0f0np0: Gained carrier Apr 30 04:41:20.779312 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Apr 30 04:41:20.779481 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 04:41:20.785493 systemd-networkd[1607]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f6:20.network. Apr 30 04:41:20.785628 systemd-networkd[1607]: enp1s0f1np1: Link UP Apr 30 04:41:20.785767 systemd-networkd[1607]: enp1s0f1np1: Gained carrier Apr 30 04:41:20.789654 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 04:41:20.789728 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 04:41:20.800202 systemd[1]: Finished ensure-sysext.service. Apr 30 04:41:20.800417 systemd-networkd[1607]: bond0: Link UP Apr 30 04:41:20.800569 systemd-networkd[1607]: bond0: Gained carrier Apr 30 04:41:20.809708 systemd[1]: Reached target network.target - Network. Apr 30 04:41:20.818393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 04:41:20.829387 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 04:41:20.829416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 04:41:20.845411 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 04:41:20.889049 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Apr 30 04:41:20.889071 kernel: bond0: active interface up! Apr 30 04:41:20.892585 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 04:41:20.910523 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 04:41:20.920386 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 04:41:20.931344 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 04:41:20.942331 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 04:41:20.953323 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 04:41:20.953338 systemd[1]: Reached target paths.target - Path Units. Apr 30 04:41:20.961324 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 04:41:20.970391 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 04:41:20.980375 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 04:41:20.991322 systemd[1]: Reached target timers.target - Timer Units. Apr 30 04:41:20.999787 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 04:41:21.018059 systemd[1]: Starting docker.socket - Docker Socket for the API... 
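The bond is brought up by plain systemd-networkd configuration: each physical port is matched by the 10-<MAC>.network files named above and enslaved to bond0, which is itself configured by 05-bond0.network. A minimal sketch of such a setup follows; the .netdev file name and the DHCP setting are assumptions, and 802.3ad mode is only inferred from the LACP warning in the kernel log:

    # /etc/systemd/network/05-bond0.netdev (assumed name)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # /etc/systemd/network/10-0c:42:a1:97:f6:20.network
    [Match]
    MACAddress=0c:42:a1:97:f6:20

    [Network]
    Bond=bond0

    # /etc/systemd/network/05-bond0.network
    [Match]
    Name=bond0

    [Network]
    DHCP=yes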
Apr 30 04:41:21.024332 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Apr 30 04:41:21.032877 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 04:41:21.042634 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 04:41:21.052411 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 04:41:21.062358 systemd[1]: Reached target basic.target - Basic System. Apr 30 04:41:21.070368 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 04:41:21.070398 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 04:41:21.080346 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 04:41:21.090021 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 04:41:21.099880 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 04:41:21.108925 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 04:41:21.112405 coreos-metadata[1775]: Apr 30 04:41:21.112 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 04:41:21.118916 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 04:41:21.119939 dbus-daemon[1776]: [system] SELinux support is enabled Apr 30 04:41:21.120701 jq[1779]: false Apr 30 04:41:21.128388 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 04:41:21.128979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 04:41:21.136602 extend-filesystems[1781]: Found loop4 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found loop5 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found loop6 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found loop7 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sda Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb1 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb2 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb3 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found usr Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb4 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb6 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb7 Apr 30 04:41:21.138435 extend-filesystems[1781]: Found sdb9 Apr 30 04:41:21.138435 extend-filesystems[1781]: Checking size of /dev/sdb9 Apr 30 04:41:21.327351 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Apr 30 04:41:21.327387 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1500) Apr 30 04:41:21.327437 extend-filesystems[1781]: Resized partition /dev/sdb9 Apr 30 04:41:21.138977 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 04:41:21.351499 extend-filesystems[1789]: resize2fs 1.47.1 (20-May-2024) Apr 30 04:41:21.185741 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 04:41:21.214926 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 04:41:21.223651 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 04:41:21.252156 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... 
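extend-filesystems.service is enumerating the disk and growing the root filesystem to fill /dev/sdb9; the kernel line shows ext4 resizing online from 553472 to 116605649 blocks, and the completion message appears further down. Conceptually this is an online grow of a mounted ext4 filesystem, roughly equivalent to the following (a sketch only; the exact tooling the service uses may differ):

    # enlarge the partition first if needed (growpart is from cloud-utils), then grow ext4 in place
    growpart /dev/sdb 9
    resize2fs /dev/sdb9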
Apr 30 04:41:21.266582 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 04:41:21.368758 update_engine[1806]: I20250430 04:41:21.321763 1806 main.cc:92] Flatcar Update Engine starting Apr 30 04:41:21.368758 update_engine[1806]: I20250430 04:41:21.322535 1806 update_check_scheduler.cc:74] Next update check in 6m30s Apr 30 04:41:21.266939 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 04:41:21.368968 jq[1807]: true Apr 30 04:41:21.288527 systemd-logind[1801]: Watching system buttons on /dev/input/event3 (Power Button) Apr 30 04:41:21.288537 systemd-logind[1801]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 30 04:41:21.288546 systemd-logind[1801]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Apr 30 04:41:21.288882 systemd-logind[1801]: New seat seat0. Apr 30 04:41:21.314395 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 04:41:21.343574 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 04:41:21.361574 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 04:41:21.383457 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 04:41:21.383546 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 04:41:21.383719 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 04:41:21.383805 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 04:41:21.393738 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 04:41:21.393826 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 04:41:21.407206 (ntainerd)[1811]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 04:41:21.408647 jq[1810]: true Apr 30 04:41:21.410524 dbus-daemon[1776]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 04:41:21.413302 tar[1809]: linux-amd64/helm Apr 30 04:41:21.419812 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Apr 30 04:41:21.419927 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Apr 30 04:41:21.420021 systemd[1]: Started update-engine.service - Update Engine. Apr 30 04:41:21.430777 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 04:41:21.430936 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 04:41:21.441424 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 04:41:21.441565 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 04:41:21.468570 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 04:41:21.468694 bash[1838]: Updated "/home/core/.ssh/authorized_keys" Apr 30 04:41:21.481695 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
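update-engine schedules its next check about six and a half minutes out; on Flatcar the update group and the reboot behaviour (applied by locksmithd, which starts just below with strategy "reboot") are normally driven by /etc/flatcar/update.conf. An illustrative file, not read from this host:

    # /etc/flatcar/update.conf
    GROUP=stable
    REBOOT_STRATEGY=reboot    # alternatives include etcd-lock and off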
Apr 30 04:41:21.493801 locksmithd[1840]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 04:41:21.494305 systemd[1]: Starting sshkeys.service... Apr 30 04:41:21.507931 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 04:41:21.519206 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 04:41:21.540105 sshd_keygen[1804]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 04:41:21.541445 coreos-metadata[1847]: Apr 30 04:41:21.541 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 04:41:21.552996 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 04:41:21.563742 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 04:41:21.576531 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 04:41:21.576665 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 04:41:21.589428 containerd[1811]: time="2025-04-30T04:41:21.589339299Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 04:41:21.599544 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 04:41:21.602168 containerd[1811]: time="2025-04-30T04:41:21.602149947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 04:41:21.602832 containerd[1811]: time="2025-04-30T04:41:21.602816574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 04:41:21.602858 containerd[1811]: time="2025-04-30T04:41:21.602833118Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 04:41:21.602858 containerd[1811]: time="2025-04-30T04:41:21.602842635Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 04:41:21.602938 containerd[1811]: time="2025-04-30T04:41:21.602929837Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 04:41:21.602956 containerd[1811]: time="2025-04-30T04:41:21.602941494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 04:41:21.602982 containerd[1811]: time="2025-04-30T04:41:21.602974285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 04:41:21.602999 containerd[1811]: time="2025-04-30T04:41:21.602982742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603084 containerd[1811]: time="2025-04-30T04:41:21.603075382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603100 containerd[1811]: time="2025-04-30T04:41:21.603084898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603100 containerd[1811]: time="2025-04-30T04:41:21.603093150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603137 containerd[1811]: time="2025-04-30T04:41:21.603099002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603156 containerd[1811]: time="2025-04-30T04:41:21.603147643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603273 containerd[1811]: time="2025-04-30T04:41:21.603265028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603336 containerd[1811]: time="2025-04-30T04:41:21.603327390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 04:41:21.603363 containerd[1811]: time="2025-04-30T04:41:21.603336375Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 04:41:21.603389 containerd[1811]: time="2025-04-30T04:41:21.603382197Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 04:41:21.603415 containerd[1811]: time="2025-04-30T04:41:21.603408821Z" level=info msg="metadata content store policy set" policy=shared Apr 30 04:41:21.609687 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 04:41:21.613428 containerd[1811]: time="2025-04-30T04:41:21.613414346Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 04:41:21.613453 containerd[1811]: time="2025-04-30T04:41:21.613440932Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 04:41:21.613484 containerd[1811]: time="2025-04-30T04:41:21.613453137Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 04:41:21.613484 containerd[1811]: time="2025-04-30T04:41:21.613462633Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 04:41:21.613484 containerd[1811]: time="2025-04-30T04:41:21.613470862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 04:41:21.613568 containerd[1811]: time="2025-04-30T04:41:21.613543804Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 04:41:21.613715 containerd[1811]: time="2025-04-30T04:41:21.613676599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 04:41:21.613761 containerd[1811]: time="2025-04-30T04:41:21.613752009Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 04:41:21.613791 containerd[1811]: time="2025-04-30T04:41:21.613763011Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 04:41:21.613791 containerd[1811]: time="2025-04-30T04:41:21.613770952Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 30 04:41:21.613791 containerd[1811]: time="2025-04-30T04:41:21.613778889Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613791 containerd[1811]: time="2025-04-30T04:41:21.613786254Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613793581Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613805620Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613813613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613821154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613828081Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613834112Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613845431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613857 containerd[1811]: time="2025-04-30T04:41:21.613853042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613862362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613869846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613877128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613884337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613890791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613897513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613904248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613912079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613920906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613927984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613934404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613942655Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613954046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613961279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.613979 containerd[1811]: time="2025-04-30T04:41:21.613967636Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.613991061Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.614000189Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.614006349Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.614012918Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.614018392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.614027494Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.614033695Z" level=info msg="NRI interface is disabled by configuration." Apr 30 04:41:21.614195 containerd[1811]: time="2025-04-30T04:41:21.614039452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 04:41:21.614329 containerd[1811]: time="2025-04-30T04:41:21.614194234Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 04:41:21.614329 containerd[1811]: time="2025-04-30T04:41:21.614229159Z" level=info msg="Connect containerd service" Apr 30 04:41:21.614329 containerd[1811]: time="2025-04-30T04:41:21.614246721Z" level=info msg="using legacy CRI server" Apr 30 04:41:21.614329 containerd[1811]: time="2025-04-30T04:41:21.614250837Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 04:41:21.614329 containerd[1811]: time="2025-04-30T04:41:21.614303159Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 04:41:21.614615 containerd[1811]: time="2025-04-30T04:41:21.614603923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 04:41:21.614741 
containerd[1811]: time="2025-04-30T04:41:21.614721551Z" level=info msg="Start subscribing containerd event" Apr 30 04:41:21.614767 containerd[1811]: time="2025-04-30T04:41:21.614757368Z" level=info msg="Start recovering state" Apr 30 04:41:21.614767 containerd[1811]: time="2025-04-30T04:41:21.614761293Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 04:41:21.614796 containerd[1811]: time="2025-04-30T04:41:21.614790995Z" level=info msg="Start event monitor" Apr 30 04:41:21.614811 containerd[1811]: time="2025-04-30T04:41:21.614800227Z" level=info msg="Start snapshots syncer" Apr 30 04:41:21.614834 containerd[1811]: time="2025-04-30T04:41:21.614807001Z" level=info msg="Start cni network conf syncer for default" Apr 30 04:41:21.614834 containerd[1811]: time="2025-04-30T04:41:21.614816968Z" level=info msg="Start streaming server" Apr 30 04:41:21.614834 containerd[1811]: time="2025-04-30T04:41:21.614799530Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 04:41:21.614875 containerd[1811]: time="2025-04-30T04:41:21.614862237Z" level=info msg="containerd successfully booted in 0.025998s" Apr 30 04:41:21.620566 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 04:41:21.648512 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 04:41:21.664475 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Apr 30 04:41:21.679490 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 04:41:21.688262 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Apr 30 04:41:21.712277 extend-filesystems[1789]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Apr 30 04:41:21.712277 extend-filesystems[1789]: old_desc_blocks = 1, new_desc_blocks = 56 Apr 30 04:41:21.712277 extend-filesystems[1789]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Apr 30 04:41:21.733401 extend-filesystems[1781]: Resized filesystem in /dev/sdb9 Apr 30 04:41:21.760347 tar[1809]: linux-amd64/LICENSE Apr 30 04:41:21.760347 tar[1809]: linux-amd64/README.md Apr 30 04:41:21.712760 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 04:41:21.712855 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 04:41:21.768286 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 04:41:22.537321 systemd-networkd[1607]: bond0: Gained IPv6LL Apr 30 04:41:22.539118 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 04:41:22.551811 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 04:41:22.572465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:22.582980 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 04:41:22.601749 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 04:41:23.235838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
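The long "Start cri plugin with config" dump above is containerd echoing its effective CRI configuration: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup=true, and registry.k8s.io/pause:3.8 as the sandbox image (the "failed to load cni" error just reflects an empty /etc/cni/net.d until a CNI plugin is installed). Expressed as a config.toml fragment, those values correspond roughly to the sketch below, typically kept at /etc/containerd/config.toml; this is not the file actually shipped on this image:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true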
Apr 30 04:41:23.246909 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 04:41:23.396950 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Apr 30 04:41:23.397086 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Apr 30 04:41:23.503302 kernel: mlx5_core 0000:01:00.0: lag map: port 1:2 port 2:2 Apr 30 04:41:23.522326 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Apr 30 04:41:23.734612 kubelet[1910]: E0430 04:41:23.734533 1910 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 04:41:23.736085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 04:41:23.736161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 04:41:24.151540 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 04:41:24.174566 systemd[1]: Started sshd@0-147.75.90.169:22-139.178.68.195:40338.service - OpenSSH per-connection server daemon (139.178.68.195:40338). Apr 30 04:41:24.216393 sshd[1931]: Accepted publickey for core from 139.178.68.195 port 40338 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:24.217411 sshd[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:24.223129 systemd-logind[1801]: New session 1 of user core. Apr 30 04:41:24.224036 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 04:41:24.253696 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 04:41:24.266287 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 04:41:24.287590 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 04:41:24.299088 (systemd)[1935]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 04:41:24.381086 systemd[1935]: Queued start job for default target default.target. Apr 30 04:41:24.391913 systemd[1935]: Created slice app.slice - User Application Slice. Apr 30 04:41:24.391926 systemd[1935]: Reached target paths.target - Paths. Apr 30 04:41:24.391934 systemd[1935]: Reached target timers.target - Timers. Apr 30 04:41:24.392571 systemd[1935]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 04:41:24.397843 systemd[1935]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 04:41:24.397872 systemd[1935]: Reached target sockets.target - Sockets. Apr 30 04:41:24.397881 systemd[1935]: Reached target basic.target - Basic System. Apr 30 04:41:24.397902 systemd[1935]: Reached target default.target - Main User Target. Apr 30 04:41:24.397917 systemd[1935]: Startup finished in 93ms. Apr 30 04:41:24.398014 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 04:41:24.410280 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 04:41:24.475504 systemd[1]: Started sshd@1-147.75.90.169:22-139.178.68.195:40354.service - OpenSSH per-connection server daemon (139.178.68.195:40354). 
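The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written when the node is bootstrapped (for example by kubeadm init or kubeadm join), after which the scheduled service restarts succeed. A minimal KubeletConfiguration of the kind expected at that path looks like this (illustrative content only):

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd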
Apr 30 04:41:24.513128 sshd[1946]: Accepted publickey for core from 139.178.68.195 port 40354 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:24.513798 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:24.516170 systemd-logind[1801]: New session 2 of user core. Apr 30 04:41:24.523423 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 04:41:24.583443 sshd[1946]: pam_unix(sshd:session): session closed for user core Apr 30 04:41:24.599537 systemd[1]: sshd@1-147.75.90.169:22-139.178.68.195:40354.service: Deactivated successfully. Apr 30 04:41:24.600216 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 04:41:24.600923 systemd-logind[1801]: Session 2 logged out. Waiting for processes to exit. Apr 30 04:41:24.601535 systemd[1]: Started sshd@2-147.75.90.169:22-139.178.68.195:40360.service - OpenSSH per-connection server daemon (139.178.68.195:40360). Apr 30 04:41:24.614235 systemd-logind[1801]: Removed session 2. Apr 30 04:41:24.643268 sshd[1953]: Accepted publickey for core from 139.178.68.195 port 40360 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:24.644621 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:24.649806 systemd-logind[1801]: New session 3 of user core. Apr 30 04:41:24.672930 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 04:41:24.744106 sshd[1953]: pam_unix(sshd:session): session closed for user core Apr 30 04:41:24.745491 systemd[1]: sshd@2-147.75.90.169:22-139.178.68.195:40360.service: Deactivated successfully. Apr 30 04:41:24.746327 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 04:41:24.747027 systemd-logind[1801]: Session 3 logged out. Waiting for processes to exit. Apr 30 04:41:24.747704 systemd-logind[1801]: Removed session 3. Apr 30 04:41:25.804681 systemd[1]: Started sshd@3-147.75.90.169:22-218.92.0.158:44619.service - OpenSSH per-connection server daemon (218.92.0.158:44619). Apr 30 04:41:26.297731 systemd-timesyncd[1770]: Contacted time server 104.152.220.5:123 (0.flatcar.pool.ntp.org). Apr 30 04:41:26.297875 systemd-timesyncd[1770]: Initial clock synchronization to Wed 2025-04-30 04:41:26.430965 UTC. Apr 30 04:41:26.754061 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 04:41:26.757985 login[1888]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 04:41:26.766526 systemd-logind[1801]: New session 5 of user core. Apr 30 04:41:26.783850 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 04:41:26.790582 systemd-logind[1801]: New session 4 of user core. Apr 30 04:41:26.794328 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 04:41:26.807100 sshd[1962]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:41:27.127888 coreos-metadata[1775]: Apr 30 04:41:27.127 INFO Fetch successful Apr 30 04:41:27.141790 coreos-metadata[1847]: Apr 30 04:41:27.141 INFO Fetch successful Apr 30 04:41:27.215505 unknown[1847]: wrote ssh authorized keys file for user: core Apr 30 04:41:27.223612 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 04:41:27.224722 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
Apr 30 04:41:27.230805 update-ssh-keys[1990]: Updated "/home/core/.ssh/authorized_keys" Apr 30 04:41:27.231338 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 04:41:27.232170 systemd[1]: Finished sshkeys.service. Apr 30 04:41:27.554044 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Apr 30 04:41:27.555464 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 04:41:27.555983 systemd[1]: Startup finished in 2.680s (kernel) + 23.767s (initrd) + 11.926s (userspace) = 38.374s. Apr 30 04:41:28.381474 sshd[1960]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:41:28.660101 sshd[2000]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:41:30.511916 sshd[1960]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:41:30.789488 sshd[2001]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:41:33.246594 sshd[1960]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:41:33.391527 sshd[1960]: Received disconnect from 218.92.0.158 port 44619:11: [preauth] Apr 30 04:41:33.391527 sshd[1960]: Disconnected from authenticating user root 218.92.0.158 port 44619 [preauth] Apr 30 04:41:33.395193 systemd[1]: sshd@3-147.75.90.169:22-218.92.0.158:44619.service: Deactivated successfully. Apr 30 04:41:33.941119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 04:41:33.953713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:34.158851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 04:41:34.160961 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 04:41:34.188388 kubelet[2012]: E0430 04:41:34.188350 2012 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 04:41:34.190758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 04:41:34.190843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 04:41:34.856555 systemd[1]: Started sshd@4-147.75.90.169:22-139.178.68.195:60870.service - OpenSSH per-connection server daemon (139.178.68.195:60870). Apr 30 04:41:34.884160 sshd[2030]: Accepted publickey for core from 139.178.68.195 port 60870 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:34.884784 sshd[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:34.887226 systemd-logind[1801]: New session 6 of user core. Apr 30 04:41:34.897512 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 04:41:34.947929 sshd[2030]: pam_unix(sshd:session): session closed for user core Apr 30 04:41:34.963902 systemd[1]: sshd@4-147.75.90.169:22-139.178.68.195:60870.service: Deactivated successfully. Apr 30 04:41:34.964668 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 04:41:34.965387 systemd-logind[1801]: Session 6 logged out. Waiting for processes to exit. 
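The roughly ten-second gap between the first kubelet failure and the "Scheduled restart job, restart counter is at 1" message above is consistent with the restart policy kubeadm ships in its kubelet unit. A sketch of the relevant [Service] settings (the conventional kubeadm values, not read from this host):

    [Service]
    Restart=always          # restart regardless of exit status
    RestartSec=10           # wait 10 s between attempts
    StartLimitInterval=0    # never stop retrying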
Apr 30 04:41:34.966056 systemd[1]: Started sshd@5-147.75.90.169:22-139.178.68.195:59160.service - OpenSSH per-connection server daemon (139.178.68.195:59160). Apr 30 04:41:34.966658 systemd-logind[1801]: Removed session 6. Apr 30 04:41:35.015413 sshd[2037]: Accepted publickey for core from 139.178.68.195 port 59160 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:35.016418 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:35.020031 systemd-logind[1801]: New session 7 of user core. Apr 30 04:41:35.034492 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 04:41:35.087353 sshd[2037]: pam_unix(sshd:session): session closed for user core Apr 30 04:41:35.097833 systemd[1]: sshd@5-147.75.90.169:22-139.178.68.195:59160.service: Deactivated successfully. Apr 30 04:41:35.098551 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 04:41:35.099258 systemd-logind[1801]: Session 7 logged out. Waiting for processes to exit. Apr 30 04:41:35.099911 systemd[1]: Started sshd@6-147.75.90.169:22-139.178.68.195:59170.service - OpenSSH per-connection server daemon (139.178.68.195:59170). Apr 30 04:41:35.100482 systemd-logind[1801]: Removed session 7. Apr 30 04:41:35.140184 sshd[2044]: Accepted publickey for core from 139.178.68.195 port 59170 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:35.140975 sshd[2044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:35.144034 systemd-logind[1801]: New session 8 of user core. Apr 30 04:41:35.160563 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 04:41:35.215264 sshd[2044]: pam_unix(sshd:session): session closed for user core Apr 30 04:41:35.221768 systemd[1]: sshd@6-147.75.90.169:22-139.178.68.195:59170.service: Deactivated successfully. Apr 30 04:41:35.222522 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 04:41:35.223195 systemd-logind[1801]: Session 8 logged out. Waiting for processes to exit. Apr 30 04:41:35.223877 systemd[1]: Started sshd@7-147.75.90.169:22-139.178.68.195:59186.service - OpenSSH per-connection server daemon (139.178.68.195:59186). Apr 30 04:41:35.224343 systemd-logind[1801]: Removed session 8. Apr 30 04:41:35.272426 sshd[2051]: Accepted publickey for core from 139.178.68.195 port 59186 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:35.273406 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:35.276920 systemd-logind[1801]: New session 9 of user core. Apr 30 04:41:35.290525 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 04:41:35.356346 sudo[2054]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 04:41:35.356492 sudo[2054]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 04:41:35.377416 sudo[2054]: pam_unix(sudo:session): session closed for user root Apr 30 04:41:35.378209 sshd[2051]: pam_unix(sshd:session): session closed for user core Apr 30 04:41:35.395032 systemd[1]: sshd@7-147.75.90.169:22-139.178.68.195:59186.service: Deactivated successfully. Apr 30 04:41:35.395884 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 04:41:35.396717 systemd-logind[1801]: Session 9 logged out. Waiting for processes to exit. Apr 30 04:41:35.397433 systemd[1]: Started sshd@8-147.75.90.169:22-139.178.68.195:59192.service - OpenSSH per-connection server daemon (139.178.68.195:59192). 
Apr 30 04:41:35.398091 systemd-logind[1801]: Removed session 9. Apr 30 04:41:35.439621 sshd[2059]: Accepted publickey for core from 139.178.68.195 port 59192 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:35.440586 sshd[2059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:35.444042 systemd-logind[1801]: New session 10 of user core. Apr 30 04:41:35.457487 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 04:41:35.521077 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 04:41:35.521919 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 04:41:35.530692 sudo[2063]: pam_unix(sudo:session): session closed for user root Apr 30 04:41:35.544794 sudo[2062]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 04:41:35.545633 sudo[2062]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 04:41:35.571750 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 04:41:35.573021 auditctl[2066]: No rules Apr 30 04:41:35.573217 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 04:41:35.573350 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 04:41:35.574778 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 04:41:35.593195 augenrules[2084]: No rules Apr 30 04:41:35.593744 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 04:41:35.594560 sudo[2062]: pam_unix(sudo:session): session closed for user root Apr 30 04:41:35.595843 sshd[2059]: pam_unix(sshd:session): session closed for user core Apr 30 04:41:35.614692 systemd[1]: sshd@8-147.75.90.169:22-139.178.68.195:59192.service: Deactivated successfully. Apr 30 04:41:35.616741 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 04:41:35.618787 systemd-logind[1801]: Session 10 logged out. Waiting for processes to exit. Apr 30 04:41:35.637012 systemd[1]: Started sshd@9-147.75.90.169:22-139.178.68.195:59198.service - OpenSSH per-connection server daemon (139.178.68.195:59198). Apr 30 04:41:35.639605 systemd-logind[1801]: Removed session 10. Apr 30 04:41:35.726281 sshd[2092]: Accepted publickey for core from 139.178.68.195 port 59198 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:41:35.727600 sshd[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:41:35.731759 systemd-logind[1801]: New session 11 of user core. Apr 30 04:41:35.751635 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 04:41:35.816100 sudo[2095]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 04:41:35.816990 sudo[2095]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 04:41:36.103641 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 04:41:36.103698 (dockerd)[2119]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 04:41:36.354703 dockerd[2119]: time="2025-04-30T04:41:36.354611698Z" level=info msg="Starting up" Apr 30 04:41:36.428007 dockerd[2119]: time="2025-04-30T04:41:36.427959292Z" level=info msg="Loading containers: start." 
Apr 30 04:41:36.499299 kernel: Initializing XFRM netlink socket Apr 30 04:41:36.563905 systemd-networkd[1607]: docker0: Link UP Apr 30 04:41:36.579309 dockerd[2119]: time="2025-04-30T04:41:36.579213820Z" level=info msg="Loading containers: done." Apr 30 04:41:36.589212 dockerd[2119]: time="2025-04-30T04:41:36.589162913Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 04:41:36.589212 dockerd[2119]: time="2025-04-30T04:41:36.589211338Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 04:41:36.589311 dockerd[2119]: time="2025-04-30T04:41:36.589272305Z" level=info msg="Daemon has completed initialization" Apr 30 04:41:36.589462 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3719063263-merged.mount: Deactivated successfully. Apr 30 04:41:36.604279 dockerd[2119]: time="2025-04-30T04:41:36.604212918Z" level=info msg="API listen on /run/docker.sock" Apr 30 04:41:36.604253 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 04:41:37.503225 containerd[1811]: time="2025-04-30T04:41:37.503203870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 04:41:38.276767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039615386.mount: Deactivated successfully. Apr 30 04:41:39.861021 containerd[1811]: time="2025-04-30T04:41:39.860965998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:39.861233 containerd[1811]: time="2025-04-30T04:41:39.861129509Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 04:41:39.861555 containerd[1811]: time="2025-04-30T04:41:39.861529778Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:39.863338 containerd[1811]: time="2025-04-30T04:41:39.863302496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:39.864029 containerd[1811]: time="2025-04-30T04:41:39.863977588Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.360748064s" Apr 30 04:41:39.864029 containerd[1811]: time="2025-04-30T04:41:39.864001207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 04:41:39.875316 containerd[1811]: time="2025-04-30T04:41:39.875266215Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 04:41:41.909043 containerd[1811]: time="2025-04-30T04:41:41.909015738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 
04:41:41.909318 containerd[1811]: time="2025-04-30T04:41:41.909248562Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 04:41:41.909731 containerd[1811]: time="2025-04-30T04:41:41.909714533Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:41.911730 containerd[1811]: time="2025-04-30T04:41:41.911713006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:41.912211 containerd[1811]: time="2025-04-30T04:41:41.912197177Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.036911735s" Apr 30 04:41:41.912239 containerd[1811]: time="2025-04-30T04:41:41.912214399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 04:41:41.923365 containerd[1811]: time="2025-04-30T04:41:41.923315222Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 04:41:43.259705 containerd[1811]: time="2025-04-30T04:41:43.259650792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:43.259914 containerd[1811]: time="2025-04-30T04:41:43.259877597Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 04:41:43.260234 containerd[1811]: time="2025-04-30T04:41:43.260195419Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:43.261809 containerd[1811]: time="2025-04-30T04:41:43.261769142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:43.262457 containerd[1811]: time="2025-04-30T04:41:43.262415483Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.339080635s" Apr 30 04:41:43.262457 containerd[1811]: time="2025-04-30T04:41:43.262431482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 04:41:43.273729 containerd[1811]: time="2025-04-30T04:41:43.273710236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 04:41:44.036699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255319442.mount: Deactivated 
successfully. Apr 30 04:41:44.336972 containerd[1811]: time="2025-04-30T04:41:44.336911438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:44.337163 containerd[1811]: time="2025-04-30T04:41:44.337104697Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 04:41:44.337401 containerd[1811]: time="2025-04-30T04:41:44.337360823Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:44.338750 containerd[1811]: time="2025-04-30T04:41:44.338709173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:44.339022 containerd[1811]: time="2025-04-30T04:41:44.338981768Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.065251589s" Apr 30 04:41:44.339022 containerd[1811]: time="2025-04-30T04:41:44.338996767Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 04:41:44.349661 containerd[1811]: time="2025-04-30T04:41:44.349611737Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 04:41:44.436923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 04:41:44.455484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:44.684444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 04:41:44.695188 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 04:41:44.727883 kubelet[2430]: E0430 04:41:44.727859 2430 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 04:41:44.729217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 04:41:44.729339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 04:41:44.861197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253760605.mount: Deactivated successfully. 
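The recurring "Referenced but unset environment variable" warnings for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS are harmless: the unit references variables that kubeadm and the administrator may fill in later. In the standard kubeadm layout they come from environment files sourced by a drop-in, roughly as below (paths are the conventional kubeadm ones and may differ on Flatcar; the --node-ip value is only an example using this host's address):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # KUBELET_KUBEADM_ARGS, written by kubeadm
    EnvironmentFile=-/etc/default/kubelet                 # optional KUBELET_EXTRA_ARGS

    # /etc/default/kubelet
    KUBELET_EXTRA_ARGS=--node-ip=147.75.90.169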
Apr 30 04:41:45.409623 containerd[1811]: time="2025-04-30T04:41:45.409566604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:45.409833 containerd[1811]: time="2025-04-30T04:41:45.409754849Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 04:41:45.410271 containerd[1811]: time="2025-04-30T04:41:45.410233322Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:45.428324 containerd[1811]: time="2025-04-30T04:41:45.428252148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:45.428977 containerd[1811]: time="2025-04-30T04:41:45.428929700Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.079295813s" Apr 30 04:41:45.428977 containerd[1811]: time="2025-04-30T04:41:45.428950043Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 04:41:45.440476 containerd[1811]: time="2025-04-30T04:41:45.440456190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 04:41:45.909202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937612550.mount: Deactivated successfully. 
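The image pulls above go through containerd's CRI plugin on the same socket the kubelet uses. If one of them needed to be reproduced by hand for debugging, crictl or ctr could do it, assuming those tools are present on the host:

    crictl pull registry.k8s.io/pause:3.9
    crictl images | grep pause
    # or, talking to containerd directly in the k8s.io namespace:
    ctr -n k8s.io images pull registry.k8s.io/pause:3.9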
Apr 30 04:41:45.910463 containerd[1811]: time="2025-04-30T04:41:45.910444973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:45.910743 containerd[1811]: time="2025-04-30T04:41:45.910701332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 04:41:45.911264 containerd[1811]: time="2025-04-30T04:41:45.911225931Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:45.912300 containerd[1811]: time="2025-04-30T04:41:45.912263396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:45.912824 containerd[1811]: time="2025-04-30T04:41:45.912783838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 472.305383ms" Apr 30 04:41:45.912824 containerd[1811]: time="2025-04-30T04:41:45.912798927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 04:41:45.924312 containerd[1811]: time="2025-04-30T04:41:45.924292462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 04:41:46.493276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050295658.mount: Deactivated successfully. Apr 30 04:41:48.366884 containerd[1811]: time="2025-04-30T04:41:48.366830219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:48.367085 containerd[1811]: time="2025-04-30T04:41:48.367041126Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 04:41:48.367481 containerd[1811]: time="2025-04-30T04:41:48.367438590Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:48.369306 containerd[1811]: time="2025-04-30T04:41:48.369284878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:41:48.369906 containerd[1811]: time="2025-04-30T04:41:48.369864895Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.445552638s" Apr 30 04:41:48.369906 containerd[1811]: time="2025-04-30T04:41:48.369882208Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 04:41:50.013282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 04:41:50.035572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:50.044996 systemd[1]: Reloading requested from client PID 2720 ('systemctl') (unit session-11.scope)... Apr 30 04:41:50.045003 systemd[1]: Reloading... Apr 30 04:41:50.079314 zram_generator::config[2759]: No configuration found. Apr 30 04:41:50.148761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 04:41:50.209640 systemd[1]: Reloading finished in 164 ms. Apr 30 04:41:50.243618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 04:41:50.244883 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:50.245981 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 04:41:50.246081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 04:41:50.246935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:50.457896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 04:41:50.460317 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 04:41:50.482657 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 04:41:50.482657 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 04:41:50.482657 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 04:41:50.483563 kubelet[2828]: I0430 04:41:50.483517 2828 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 04:41:50.805390 kubelet[2828]: I0430 04:41:50.805337 2828 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 04:41:50.805390 kubelet[2828]: I0430 04:41:50.805354 2828 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 04:41:50.805589 kubelet[2828]: I0430 04:41:50.805538 2828 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 04:41:50.816099 kubelet[2828]: I0430 04:41:50.816057 2828 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 04:41:50.816867 kubelet[2828]: E0430 04:41:50.816860 2828 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.90.169:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.833577 kubelet[2828]: I0430 04:41:50.833538 2828 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 04:41:50.834736 kubelet[2828]: I0430 04:41:50.834688 2828 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 04:41:50.834831 kubelet[2828]: I0430 04:41:50.834708 2828 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-671b97f93d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 04:41:50.835390 kubelet[2828]: I0430 04:41:50.835353 2828 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 04:41:50.835390 kubelet[2828]: I0430 04:41:50.835362 2828 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 04:41:50.836253 kubelet[2828]: I0430 04:41:50.836213 2828 state_mem.go:36] "Initialized new in-memory state store" Apr 30 04:41:50.836915 kubelet[2828]: I0430 04:41:50.836875 2828 kubelet.go:400] "Attempting to sync node with API server" Apr 30 04:41:50.836915 kubelet[2828]: I0430 04:41:50.836883 2828 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 04:41:50.836915 kubelet[2828]: I0430 04:41:50.836895 2828 kubelet.go:312] "Adding apiserver pod source" Apr 30 04:41:50.836915 kubelet[2828]: I0430 04:41:50.836905 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 04:41:50.839677 kubelet[2828]: W0430 04:41:50.839624 2828 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.839677 kubelet[2828]: W0430 04:41:50.839632 2828 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-671b97f93d&limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.839677 kubelet[2828]: E0430 04:41:50.839660 2828 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get "https://147.75.90.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-671b97f93d&limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.839677 kubelet[2828]: E0430 04:41:50.839652 2828 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.90.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.840561 kubelet[2828]: I0430 04:41:50.840494 2828 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 04:41:50.841708 kubelet[2828]: I0430 04:41:50.841659 2828 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 04:41:50.841708 kubelet[2828]: W0430 04:41:50.841686 2828 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 04:41:50.841978 kubelet[2828]: I0430 04:41:50.841951 2828 server.go:1264] "Started kubelet" Apr 30 04:41:50.842085 kubelet[2828]: I0430 04:41:50.842025 2828 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 04:41:50.842137 kubelet[2828]: I0430 04:41:50.842109 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 04:41:50.842340 kubelet[2828]: I0430 04:41:50.842308 2828 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 04:41:50.842876 kubelet[2828]: I0430 04:41:50.842832 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 04:41:50.842914 kubelet[2828]: I0430 04:41:50.842882 2828 server.go:455] "Adding debug handlers to kubelet server" Apr 30 04:41:50.842914 kubelet[2828]: I0430 04:41:50.842882 2828 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 04:41:50.842914 kubelet[2828]: I0430 04:41:50.842909 2828 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 04:41:50.843000 kubelet[2828]: I0430 04:41:50.842939 2828 reconciler.go:26] "Reconciler: start to sync state" Apr 30 04:41:50.843086 kubelet[2828]: W0430 04:41:50.843066 2828 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.843109 kubelet[2828]: E0430 04:41:50.843091 2828 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.843174 kubelet[2828]: E0430 04:41:50.843152 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-671b97f93d?timeout=10s\": dial tcp 147.75.90.169:6443: connect: connection refused" interval="200ms" Apr 30 04:41:50.843429 kubelet[2828]: I0430 04:41:50.843416 2828 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 04:41:50.843846 kubelet[2828]: I0430 
04:41:50.843838 2828 factory.go:221] Registration of the containerd container factory successfully Apr 30 04:41:50.843846 kubelet[2828]: I0430 04:41:50.843847 2828 factory.go:221] Registration of the systemd container factory successfully Apr 30 04:41:50.861565 kubelet[2828]: E0430 04:41:50.861550 2828 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-671b97f93d\" not found" Apr 30 04:41:50.864946 kubelet[2828]: E0430 04:41:50.864929 2828 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 04:41:50.865802 kubelet[2828]: E0430 04:41:50.865740 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.169:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.169:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-671b97f93d.183afeefa8ca06b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-671b97f93d,UID:ci-4081.3.3-a-671b97f93d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-671b97f93d,},FirstTimestamp:2025-04-30 04:41:50.841939639 +0000 UTC m=+0.379754592,LastTimestamp:2025-04-30 04:41:50.841939639 +0000 UTC m=+0.379754592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-671b97f93d,}" Apr 30 04:41:50.870561 kubelet[2828]: I0430 04:41:50.870517 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 04:41:50.871135 kubelet[2828]: I0430 04:41:50.871128 2828 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 04:41:50.871162 kubelet[2828]: I0430 04:41:50.871145 2828 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 04:41:50.871162 kubelet[2828]: I0430 04:41:50.871155 2828 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 04:41:50.871192 kubelet[2828]: E0430 04:41:50.871177 2828 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 04:41:50.871579 kubelet[2828]: W0430 04:41:50.871560 2828 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.871611 kubelet[2828]: E0430 04:41:50.871581 2828 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:50.879634 kubelet[2828]: I0430 04:41:50.879583 2828 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 04:41:50.879634 kubelet[2828]: I0430 04:41:50.879592 2828 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 04:41:50.879634 kubelet[2828]: I0430 04:41:50.879602 2828 state_mem.go:36] "Initialized new in-memory state store" Apr 30 04:41:50.880656 kubelet[2828]: I0430 04:41:50.880619 2828 policy_none.go:49] "None policy: Start" Apr 30 04:41:50.880895 kubelet[2828]: I0430 04:41:50.880858 2828 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 04:41:50.880895 kubelet[2828]: I0430 04:41:50.880876 2828 state_mem.go:35] "Initializing new in-memory state store" Apr 30 04:41:50.883580 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 04:41:50.905158 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 04:41:50.907151 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 04:41:50.919043 kubelet[2828]: I0430 04:41:50.919004 2828 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 04:41:50.919194 kubelet[2828]: I0430 04:41:50.919139 2828 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 04:41:50.919273 kubelet[2828]: I0430 04:41:50.919236 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 04:41:50.919961 kubelet[2828]: E0430 04:41:50.919938 2828 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-671b97f93d\" not found" Apr 30 04:41:50.966608 kubelet[2828]: I0430 04:41:50.966495 2828 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:50.967309 kubelet[2828]: E0430 04:41:50.967202 2828 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.169:6443/api/v1/nodes\": dial tcp 147.75.90.169:6443: connect: connection refused" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:50.971510 kubelet[2828]: I0430 04:41:50.971441 2828 topology_manager.go:215] "Topology Admit Handler" podUID="f77a8c6492eb4d97a5bccc57aff05e1f" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:50.975115 kubelet[2828]: I0430 04:41:50.975022 2828 topology_manager.go:215] "Topology Admit Handler" podUID="698541c5b3f7b5c06a0fa0946e12e4d7" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:50.978819 kubelet[2828]: I0430 04:41:50.978727 2828 topology_manager.go:215] "Topology Admit Handler" podUID="51fbcde30d1692784c0a6a14c12f8aed" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:50.992339 systemd[1]: Created slice kubepods-burstable-podf77a8c6492eb4d97a5bccc57aff05e1f.slice - libcontainer container kubepods-burstable-podf77a8c6492eb4d97a5bccc57aff05e1f.slice. Apr 30 04:41:51.028937 systemd[1]: Created slice kubepods-burstable-pod698541c5b3f7b5c06a0fa0946e12e4d7.slice - libcontainer container kubepods-burstable-pod698541c5b3f7b5c06a0fa0946e12e4d7.slice. Apr 30 04:41:51.044807 kubelet[2828]: E0430 04:41:51.044697 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-671b97f93d?timeout=10s\": dial tcp 147.75.90.169:6443: connect: connection refused" interval="400ms" Apr 30 04:41:51.059055 systemd[1]: Created slice kubepods-burstable-pod51fbcde30d1692784c0a6a14c12f8aed.slice - libcontainer container kubepods-burstable-pod51fbcde30d1692784c0a6a14c12f8aed.slice. 
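The three "Topology Admit Handler" entries above correspond to the control-plane static pods the kubelet found under /etc/kubernetes/manifests (the staticPodPath it logged earlier). A static pod manifest is an ordinary Pod object on disk; a heavily abridged sketch of the kube-apiserver one, with the image version and advertise address taken from this log and everything else illustrative:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (abridged, illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.12
        command: ["kube-apiserver", "--advertise-address=147.75.90.169"]   # flags abridged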
Apr 30 04:41:51.144227 kubelet[2828]: I0430 04:41:51.144112 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.144227 kubelet[2828]: I0430 04:41:51.144215 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f77a8c6492eb4d97a5bccc57aff05e1f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" (UID: \"f77a8c6492eb4d97a5bccc57aff05e1f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.144611 kubelet[2828]: I0430 04:41:51.144320 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.144611 kubelet[2828]: I0430 04:41:51.144377 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.144611 kubelet[2828]: I0430 04:41:51.144434 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.144611 kubelet[2828]: I0430 04:41:51.144485 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.144611 kubelet[2828]: I0430 04:41:51.144538 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51fbcde30d1692784c0a6a14c12f8aed-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-671b97f93d\" (UID: \"51fbcde30d1692784c0a6a14c12f8aed\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.145064 kubelet[2828]: I0430 04:41:51.144584 2828 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f77a8c6492eb4d97a5bccc57aff05e1f-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" (UID: \"f77a8c6492eb4d97a5bccc57aff05e1f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.145064 kubelet[2828]: I0430 04:41:51.144629 2828 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f77a8c6492eb4d97a5bccc57aff05e1f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" (UID: \"f77a8c6492eb4d97a5bccc57aff05e1f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.171749 kubelet[2828]: I0430 04:41:51.171654 2828 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.172512 kubelet[2828]: E0430 04:41:51.172399 2828 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.169:6443/api/v1/nodes\": dial tcp 147.75.90.169:6443: connect: connection refused" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.323568 containerd[1811]: time="2025-04-30T04:41:51.323329987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-671b97f93d,Uid:f77a8c6492eb4d97a5bccc57aff05e1f,Namespace:kube-system,Attempt:0,}" Apr 30 04:41:51.353611 containerd[1811]: time="2025-04-30T04:41:51.353550980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-671b97f93d,Uid:698541c5b3f7b5c06a0fa0946e12e4d7,Namespace:kube-system,Attempt:0,}" Apr 30 04:41:51.364962 containerd[1811]: time="2025-04-30T04:41:51.364906761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-671b97f93d,Uid:51fbcde30d1692784c0a6a14c12f8aed,Namespace:kube-system,Attempt:0,}" Apr 30 04:41:51.446638 kubelet[2828]: E0430 04:41:51.446516 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-671b97f93d?timeout=10s\": dial tcp 147.75.90.169:6443: connect: connection refused" interval="800ms" Apr 30 04:41:51.577384 kubelet[2828]: I0430 04:41:51.577166 2828 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.578089 kubelet[2828]: E0430 04:41:51.577843 2828 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.169:6443/api/v1/nodes\": dial tcp 147.75.90.169:6443: connect: connection refused" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:51.693976 kubelet[2828]: W0430 04:41:51.693819 2828 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:51.693976 kubelet[2828]: E0430 04:41:51.693954 2828 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.90.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.169:6443: connect: connection refused Apr 30 04:41:51.784916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002396067.mount: Deactivated successfully. 
Apr 30 04:41:51.786102 containerd[1811]: time="2025-04-30T04:41:51.786086005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 04:41:51.786306 containerd[1811]: time="2025-04-30T04:41:51.786255557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 04:41:51.787156 containerd[1811]: time="2025-04-30T04:41:51.787129433Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 04:41:51.787623 containerd[1811]: time="2025-04-30T04:41:51.787586362Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 04:41:51.787736 containerd[1811]: time="2025-04-30T04:41:51.787683986Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 04:41:51.788729 containerd[1811]: time="2025-04-30T04:41:51.788690062Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 04:41:51.788914 containerd[1811]: time="2025-04-30T04:41:51.788860258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 04:41:51.790717 containerd[1811]: time="2025-04-30T04:41:51.790675109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 04:41:51.791224 containerd[1811]: time="2025-04-30T04:41:51.791182209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.707736ms" Apr 30 04:41:51.791985 containerd[1811]: time="2025-04-30T04:41:51.791943366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 438.336751ms" Apr 30 04:41:51.794045 containerd[1811]: time="2025-04-30T04:41:51.794033773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 429.10252ms" Apr 30 04:41:51.888629 containerd[1811]: time="2025-04-30T04:41:51.888529381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:41:51.888715 containerd[1811]: time="2025-04-30T04:41:51.888619330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:41:51.888715 containerd[1811]: time="2025-04-30T04:41:51.888669565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:41:51.888715 containerd[1811]: time="2025-04-30T04:41:51.888685489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:41:51.888715 containerd[1811]: time="2025-04-30T04:41:51.888669887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:41:51.888715 containerd[1811]: time="2025-04-30T04:41:51.888704367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:41:51.888843 containerd[1811]: time="2025-04-30T04:41:51.888721547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:41:51.888843 containerd[1811]: time="2025-04-30T04:41:51.888759782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:41:51.888843 containerd[1811]: time="2025-04-30T04:41:51.888788993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:41:51.888943 containerd[1811]: time="2025-04-30T04:41:51.888739640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:41:51.888962 containerd[1811]: time="2025-04-30T04:41:51.888941144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:41:51.889010 containerd[1811]: time="2025-04-30T04:41:51.888990811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:41:51.909535 systemd[1]: Started cri-containerd-16945e4552f428770269a712451e54a87f220f9ca6d7c29f5b58aa3ba53f4d95.scope - libcontainer container 16945e4552f428770269a712451e54a87f220f9ca6d7c29f5b58aa3ba53f4d95. Apr 30 04:41:51.910462 systemd[1]: Started cri-containerd-5a6829ab45fc3e4540b2b0014d667dc5d3058f9e9b414614e14031f2be03d9dd.scope - libcontainer container 5a6829ab45fc3e4540b2b0014d667dc5d3058f9e9b414614e14031f2be03d9dd. Apr 30 04:41:51.911386 systemd[1]: Started cri-containerd-978a4c75ceae925b84b0ee330dffec197c88862a58292463188d5e5d7e02f601.scope - libcontainer container 978a4c75ceae925b84b0ee330dffec197c88862a58292463188d5e5d7e02f601. 
Apr 30 04:41:51.939163 containerd[1811]: time="2025-04-30T04:41:51.939137352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-671b97f93d,Uid:51fbcde30d1692784c0a6a14c12f8aed,Namespace:kube-system,Attempt:0,} returns sandbox id \"16945e4552f428770269a712451e54a87f220f9ca6d7c29f5b58aa3ba53f4d95\"" Apr 30 04:41:51.940842 containerd[1811]: time="2025-04-30T04:41:51.940811201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-671b97f93d,Uid:698541c5b3f7b5c06a0fa0946e12e4d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a6829ab45fc3e4540b2b0014d667dc5d3058f9e9b414614e14031f2be03d9dd\"" Apr 30 04:41:51.942499 containerd[1811]: time="2025-04-30T04:41:51.942478795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-671b97f93d,Uid:f77a8c6492eb4d97a5bccc57aff05e1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"978a4c75ceae925b84b0ee330dffec197c88862a58292463188d5e5d7e02f601\"" Apr 30 04:41:51.942593 containerd[1811]: time="2025-04-30T04:41:51.942578351Z" level=info msg="CreateContainer within sandbox \"16945e4552f428770269a712451e54a87f220f9ca6d7c29f5b58aa3ba53f4d95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 04:41:51.942911 containerd[1811]: time="2025-04-30T04:41:51.942896332Z" level=info msg="CreateContainer within sandbox \"5a6829ab45fc3e4540b2b0014d667dc5d3058f9e9b414614e14031f2be03d9dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 04:41:51.943764 containerd[1811]: time="2025-04-30T04:41:51.943750942Z" level=info msg="CreateContainer within sandbox \"978a4c75ceae925b84b0ee330dffec197c88862a58292463188d5e5d7e02f601\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 04:41:51.948753 containerd[1811]: time="2025-04-30T04:41:51.948740034Z" level=info msg="CreateContainer within sandbox \"16945e4552f428770269a712451e54a87f220f9ca6d7c29f5b58aa3ba53f4d95\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"acd728951349a23624175d6ce9e318c13f54e6d65566a7cfc8a8d3ffb166892a\"" Apr 30 04:41:51.948960 containerd[1811]: time="2025-04-30T04:41:51.948950647Z" level=info msg="StartContainer for \"acd728951349a23624175d6ce9e318c13f54e6d65566a7cfc8a8d3ffb166892a\"" Apr 30 04:41:51.950073 containerd[1811]: time="2025-04-30T04:41:51.950061106Z" level=info msg="CreateContainer within sandbox \"5a6829ab45fc3e4540b2b0014d667dc5d3058f9e9b414614e14031f2be03d9dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e6fb9737e0bcf46de4d12b66c51b31850fac7637c9b844277890b389bfeab0ae\"" Apr 30 04:41:51.950233 containerd[1811]: time="2025-04-30T04:41:51.950222231Z" level=info msg="StartContainer for \"e6fb9737e0bcf46de4d12b66c51b31850fac7637c9b844277890b389bfeab0ae\"" Apr 30 04:41:51.950937 containerd[1811]: time="2025-04-30T04:41:51.950924724Z" level=info msg="CreateContainer within sandbox \"978a4c75ceae925b84b0ee330dffec197c88862a58292463188d5e5d7e02f601\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"84d3ea453d7827f1fff53980826fb008683e4cfc6b2d7080f47318abfac7310a\"" Apr 30 04:41:51.951092 containerd[1811]: time="2025-04-30T04:41:51.951082720Z" level=info msg="StartContainer for \"84d3ea453d7827f1fff53980826fb008683e4cfc6b2d7080f47318abfac7310a\"" Apr 30 04:41:51.972607 systemd[1]: Started cri-containerd-acd728951349a23624175d6ce9e318c13f54e6d65566a7cfc8a8d3ffb166892a.scope - libcontainer container 
acd728951349a23624175d6ce9e318c13f54e6d65566a7cfc8a8d3ffb166892a. Apr 30 04:41:51.973386 systemd[1]: Started cri-containerd-e6fb9737e0bcf46de4d12b66c51b31850fac7637c9b844277890b389bfeab0ae.scope - libcontainer container e6fb9737e0bcf46de4d12b66c51b31850fac7637c9b844277890b389bfeab0ae. Apr 30 04:41:51.974938 systemd[1]: Started cri-containerd-84d3ea453d7827f1fff53980826fb008683e4cfc6b2d7080f47318abfac7310a.scope - libcontainer container 84d3ea453d7827f1fff53980826fb008683e4cfc6b2d7080f47318abfac7310a. Apr 30 04:41:51.996328 containerd[1811]: time="2025-04-30T04:41:51.996304961Z" level=info msg="StartContainer for \"acd728951349a23624175d6ce9e318c13f54e6d65566a7cfc8a8d3ffb166892a\" returns successfully" Apr 30 04:41:51.996416 containerd[1811]: time="2025-04-30T04:41:51.996365461Z" level=info msg="StartContainer for \"e6fb9737e0bcf46de4d12b66c51b31850fac7637c9b844277890b389bfeab0ae\" returns successfully" Apr 30 04:41:51.997325 containerd[1811]: time="2025-04-30T04:41:51.997310479Z" level=info msg="StartContainer for \"84d3ea453d7827f1fff53980826fb008683e4cfc6b2d7080f47318abfac7310a\" returns successfully" Apr 30 04:41:52.379215 kubelet[2828]: I0430 04:41:52.379196 2828 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:52.477503 kubelet[2828]: E0430 04:41:52.477324 2828 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-671b97f93d\" not found" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:52.576452 kubelet[2828]: I0430 04:41:52.576433 2828 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:52.838502 kubelet[2828]: I0430 04:41:52.838447 2828 apiserver.go:52] "Watching apiserver" Apr 30 04:41:52.843146 kubelet[2828]: I0430 04:41:52.843086 2828 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 04:41:52.898768 kubelet[2828]: E0430 04:41:52.898708 2828 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-a-671b97f93d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:52.898768 kubelet[2828]: E0430 04:41:52.898711 2828 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:52.899069 kubelet[2828]: E0430 04:41:52.898818 2828 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:53.886824 kubelet[2828]: W0430 04:41:53.886762 2828 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 04:41:54.898806 systemd[1]: Reloading requested from client PID 3146 ('systemctl') (unit session-11.scope)... Apr 30 04:41:54.898813 systemd[1]: Reloading... Apr 30 04:41:54.945391 zram_generator::config[3185]: No configuration found. Apr 30 04:41:55.009853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 04:41:55.077928 systemd[1]: Reloading finished in 178 ms. Apr 30 04:41:55.122635 kubelet[2828]: I0430 04:41:55.122567 2828 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 04:41:55.122580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:55.135870 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 04:41:55.135997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 04:41:55.161750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 04:41:55.369196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 04:41:55.371777 (kubelet)[3250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 04:41:55.394950 kubelet[3250]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 04:41:55.394950 kubelet[3250]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 04:41:55.394950 kubelet[3250]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 04:41:55.395159 kubelet[3250]: I0430 04:41:55.394978 3250 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 04:41:55.397567 kubelet[3250]: I0430 04:41:55.397518 3250 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 04:41:55.397567 kubelet[3250]: I0430 04:41:55.397528 3250 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 04:41:55.397816 kubelet[3250]: I0430 04:41:55.397782 3250 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 04:41:55.399526 kubelet[3250]: I0430 04:41:55.399489 3250 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 04:41:55.400236 kubelet[3250]: I0430 04:41:55.400207 3250 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 04:41:55.409258 kubelet[3250]: I0430 04:41:55.409242 3250 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 04:41:55.409400 kubelet[3250]: I0430 04:41:55.409357 3250 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 04:41:55.409496 kubelet[3250]: I0430 04:41:55.409372 3250 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-671b97f93d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 04:41:55.409496 kubelet[3250]: I0430 04:41:55.409476 3250 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 04:41:55.409496 kubelet[3250]: I0430 04:41:55.409482 3250 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 04:41:55.409597 kubelet[3250]: I0430 04:41:55.409508 3250 state_mem.go:36] "Initialized new in-memory state store" Apr 30 04:41:55.409597 kubelet[3250]: I0430 04:41:55.409558 3250 kubelet.go:400] "Attempting to sync node with API server" Apr 30 04:41:55.409597 kubelet[3250]: I0430 04:41:55.409565 3250 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 04:41:55.409597 kubelet[3250]: I0430 04:41:55.409577 3250 kubelet.go:312] "Adding apiserver pod source" Apr 30 04:41:55.409597 kubelet[3250]: I0430 04:41:55.409590 3250 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 04:41:55.409898 kubelet[3250]: I0430 04:41:55.409885 3250 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 04:41:55.410025 kubelet[3250]: I0430 04:41:55.410016 3250 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 04:41:55.410310 kubelet[3250]: I0430 04:41:55.410301 3250 server.go:1264] "Started kubelet" Apr 30 04:41:55.410357 kubelet[3250]: I0430 04:41:55.410308 3250 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 04:41:55.410393 kubelet[3250]: I0430 04:41:55.410346 3250 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 04:41:55.410762 kubelet[3250]: I0430 
04:41:55.410739 3250 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 04:41:55.412207 kubelet[3250]: I0430 04:41:55.412158 3250 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 04:41:55.412207 kubelet[3250]: E0430 04:41:55.412186 3250 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 04:41:55.412316 kubelet[3250]: I0430 04:41:55.412216 3250 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 04:41:55.412316 kubelet[3250]: E0430 04:41:55.412219 3250 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-671b97f93d\" not found" Apr 30 04:41:55.412316 kubelet[3250]: I0430 04:41:55.412239 3250 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 04:41:55.412400 kubelet[3250]: I0430 04:41:55.412356 3250 reconciler.go:26] "Reconciler: start to sync state" Apr 30 04:41:55.412400 kubelet[3250]: I0430 04:41:55.412382 3250 server.go:455] "Adding debug handlers to kubelet server" Apr 30 04:41:55.412723 kubelet[3250]: I0430 04:41:55.412707 3250 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 04:41:55.413292 kubelet[3250]: I0430 04:41:55.413283 3250 factory.go:221] Registration of the containerd container factory successfully Apr 30 04:41:55.413292 kubelet[3250]: I0430 04:41:55.413292 3250 factory.go:221] Registration of the systemd container factory successfully Apr 30 04:41:55.417354 kubelet[3250]: I0430 04:41:55.417327 3250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 04:41:55.417900 kubelet[3250]: I0430 04:41:55.417868 3250 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 04:41:55.417900 kubelet[3250]: I0430 04:41:55.417885 3250 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 04:41:55.417900 kubelet[3250]: I0430 04:41:55.417894 3250 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 04:41:55.417973 kubelet[3250]: E0430 04:41:55.417917 3250 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 04:41:55.427163 kubelet[3250]: I0430 04:41:55.427151 3250 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 04:41:55.427163 kubelet[3250]: I0430 04:41:55.427159 3250 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 04:41:55.427163 kubelet[3250]: I0430 04:41:55.427169 3250 state_mem.go:36] "Initialized new in-memory state store" Apr 30 04:41:55.427276 kubelet[3250]: I0430 04:41:55.427264 3250 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 04:41:55.427295 kubelet[3250]: I0430 04:41:55.427274 3250 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 04:41:55.427295 kubelet[3250]: I0430 04:41:55.427288 3250 policy_none.go:49] "None policy: Start" Apr 30 04:41:55.427658 kubelet[3250]: I0430 04:41:55.427623 3250 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 04:41:55.427658 kubelet[3250]: I0430 04:41:55.427633 3250 state_mem.go:35] "Initializing new in-memory state store" Apr 30 04:41:55.427712 kubelet[3250]: I0430 04:41:55.427701 3250 state_mem.go:75] "Updated machine memory state" Apr 30 04:41:55.429790 kubelet[3250]: I0430 04:41:55.429754 3250 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 04:41:55.429897 kubelet[3250]: I0430 04:41:55.429841 3250 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 04:41:55.429897 kubelet[3250]: I0430 04:41:55.429898 3250 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 04:41:55.518190 kubelet[3250]: I0430 04:41:55.518094 3250 topology_manager.go:215] "Topology Admit Handler" podUID="f77a8c6492eb4d97a5bccc57aff05e1f" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.518460 kubelet[3250]: I0430 04:41:55.518328 3250 topology_manager.go:215] "Topology Admit Handler" podUID="698541c5b3f7b5c06a0fa0946e12e4d7" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.518460 kubelet[3250]: I0430 04:41:55.518401 3250 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.518829 kubelet[3250]: I0430 04:41:55.518484 3250 topology_manager.go:215] "Topology Admit Handler" podUID="51fbcde30d1692784c0a6a14c12f8aed" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.524890 kubelet[3250]: W0430 04:41:55.524825 3250 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 04:41:55.525480 kubelet[3250]: W0430 04:41:55.525438 3250 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 04:41:55.526333 kubelet[3250]: W0430 04:41:55.526253 3250 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label 
is recommended: [must not contain dots] Apr 30 04:41:55.526501 kubelet[3250]: E0430 04:41:55.526399 3250 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.528694 kubelet[3250]: I0430 04:41:55.528605 3250 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.528886 kubelet[3250]: I0430 04:41:55.528771 3250 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.714552 kubelet[3250]: I0430 04:41:55.714464 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f77a8c6492eb4d97a5bccc57aff05e1f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" (UID: \"f77a8c6492eb4d97a5bccc57aff05e1f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.714888 kubelet[3250]: I0430 04:41:55.714580 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.714888 kubelet[3250]: I0430 04:41:55.714692 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.714888 kubelet[3250]: I0430 04:41:55.714780 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.715460 kubelet[3250]: I0430 04:41:55.714875 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.715460 kubelet[3250]: I0430 04:41:55.714979 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51fbcde30d1692784c0a6a14c12f8aed-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-671b97f93d\" (UID: \"51fbcde30d1692784c0a6a14c12f8aed\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.715460 kubelet[3250]: I0430 04:41:55.715097 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f77a8c6492eb4d97a5bccc57aff05e1f-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" (UID: 
\"f77a8c6492eb4d97a5bccc57aff05e1f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.715460 kubelet[3250]: I0430 04:41:55.715187 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f77a8c6492eb4d97a5bccc57aff05e1f-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" (UID: \"f77a8c6492eb4d97a5bccc57aff05e1f\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:55.715460 kubelet[3250]: I0430 04:41:55.715291 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/698541c5b3f7b5c06a0fa0946e12e4d7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" (UID: \"698541c5b3f7b5c06a0fa0946e12e4d7\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:56.410199 kubelet[3250]: I0430 04:41:56.410104 3250 apiserver.go:52] "Watching apiserver" Apr 30 04:41:56.432472 kubelet[3250]: W0430 04:41:56.432405 3250 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 04:41:56.432472 kubelet[3250]: W0430 04:41:56.432464 3250 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 04:41:56.432945 kubelet[3250]: W0430 04:41:56.432495 3250 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 04:41:56.432945 kubelet[3250]: E0430 04:41:56.432570 3250 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.3-a-671b97f93d\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:56.432945 kubelet[3250]: E0430 04:41:56.432615 3250 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.3-a-671b97f93d\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:56.432945 kubelet[3250]: E0430 04:41:56.432622 3250 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.3-a-671b97f93d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" Apr 30 04:41:56.466468 kubelet[3250]: I0430 04:41:56.466404 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-671b97f93d" podStartSLOduration=1.466384803 podStartE2EDuration="1.466384803s" podCreationTimestamp="2025-04-30 04:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 04:41:56.466374925 +0000 UTC m=+1.092443268" watchObservedRunningTime="2025-04-30 04:41:56.466384803 +0000 UTC m=+1.092453137" Apr 30 04:41:56.479734 kubelet[3250]: I0430 04:41:56.479683 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-671b97f93d" podStartSLOduration=1.479660693 podStartE2EDuration="1.479660693s" podCreationTimestamp="2025-04-30 04:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 04:41:56.473531655 +0000 UTC m=+1.099599993" 
watchObservedRunningTime="2025-04-30 04:41:56.479660693 +0000 UTC m=+1.105729031" Apr 30 04:41:56.479881 kubelet[3250]: I0430 04:41:56.479844 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-671b97f93d" podStartSLOduration=3.479834649 podStartE2EDuration="3.479834649s" podCreationTimestamp="2025-04-30 04:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 04:41:56.479796293 +0000 UTC m=+1.105864631" watchObservedRunningTime="2025-04-30 04:41:56.479834649 +0000 UTC m=+1.105902978" Apr 30 04:41:56.513667 kubelet[3250]: I0430 04:41:56.513567 3250 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 04:42:00.204982 sudo[2095]: pam_unix(sudo:session): session closed for user root Apr 30 04:42:00.205827 sshd[2092]: pam_unix(sshd:session): session closed for user core Apr 30 04:42:00.207353 systemd[1]: sshd@9-147.75.90.169:22-139.178.68.195:59198.service: Deactivated successfully. Apr 30 04:42:00.208179 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 04:42:00.208265 systemd[1]: session-11.scope: Consumed 3.222s CPU time, 205.0M memory peak, 0B memory swap peak. Apr 30 04:42:00.208950 systemd-logind[1801]: Session 11 logged out. Waiting for processes to exit. Apr 30 04:42:00.209644 systemd-logind[1801]: Removed session 11. Apr 30 04:42:06.187526 update_engine[1806]: I20250430 04:42:06.187389 1806 update_attempter.cc:509] Updating boot flags... Apr 30 04:42:06.228268 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3421) Apr 30 04:42:06.254265 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3425) Apr 30 04:42:10.895176 kubelet[3250]: I0430 04:42:10.895148 3250 topology_manager.go:215] "Topology Admit Handler" podUID="80e57e82-6b2a-4e2a-9e74-b801cae12f73" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-gtccp" Apr 30 04:42:10.898174 systemd[1]: Created slice kubepods-besteffort-pod80e57e82_6b2a_4e2a_9e74_b801cae12f73.slice - libcontainer container kubepods-besteffort-pod80e57e82_6b2a_4e2a_9e74_b801cae12f73.slice. Apr 30 04:42:11.015141 kubelet[3250]: I0430 04:42:11.015011 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80e57e82-6b2a-4e2a-9e74-b801cae12f73-var-lib-calico\") pod \"tigera-operator-797db67f8-gtccp\" (UID: \"80e57e82-6b2a-4e2a-9e74-b801cae12f73\") " pod="tigera-operator/tigera-operator-797db67f8-gtccp" Apr 30 04:42:11.015474 kubelet[3250]: I0430 04:42:11.015162 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7dv\" (UniqueName: \"kubernetes.io/projected/80e57e82-6b2a-4e2a-9e74-b801cae12f73-kube-api-access-fd7dv\") pod \"tigera-operator-797db67f8-gtccp\" (UID: \"80e57e82-6b2a-4e2a-9e74-b801cae12f73\") " pod="tigera-operator/tigera-operator-797db67f8-gtccp" Apr 30 04:42:11.025645 kubelet[3250]: I0430 04:42:11.025548 3250 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 04:42:11.026486 containerd[1811]: time="2025-04-30T04:42:11.026373865Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 04:42:11.027344 kubelet[3250]: I0430 04:42:11.026898 3250 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 04:42:11.048091 kubelet[3250]: I0430 04:42:11.048015 3250 topology_manager.go:215] "Topology Admit Handler" podUID="5c96fbd6-4b09-4812-8bce-1b4e454df1bb" podNamespace="kube-system" podName="kube-proxy-47hcd" Apr 30 04:42:11.063516 systemd[1]: Created slice kubepods-besteffort-pod5c96fbd6_4b09_4812_8bce_1b4e454df1bb.slice - libcontainer container kubepods-besteffort-pod5c96fbd6_4b09_4812_8bce_1b4e454df1bb.slice. Apr 30 04:42:11.212165 containerd[1811]: time="2025-04-30T04:42:11.212037255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gtccp,Uid:80e57e82-6b2a-4e2a-9e74-b801cae12f73,Namespace:tigera-operator,Attempt:0,}" Apr 30 04:42:11.217295 kubelet[3250]: I0430 04:42:11.217276 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c96fbd6-4b09-4812-8bce-1b4e454df1bb-kube-proxy\") pod \"kube-proxy-47hcd\" (UID: \"5c96fbd6-4b09-4812-8bce-1b4e454df1bb\") " pod="kube-system/kube-proxy-47hcd" Apr 30 04:42:11.217338 kubelet[3250]: I0430 04:42:11.217300 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrtp\" (UniqueName: \"kubernetes.io/projected/5c96fbd6-4b09-4812-8bce-1b4e454df1bb-kube-api-access-7rrtp\") pod \"kube-proxy-47hcd\" (UID: \"5c96fbd6-4b09-4812-8bce-1b4e454df1bb\") " pod="kube-system/kube-proxy-47hcd" Apr 30 04:42:11.217338 kubelet[3250]: I0430 04:42:11.217312 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c96fbd6-4b09-4812-8bce-1b4e454df1bb-xtables-lock\") pod \"kube-proxy-47hcd\" (UID: \"5c96fbd6-4b09-4812-8bce-1b4e454df1bb\") " pod="kube-system/kube-proxy-47hcd" Apr 30 04:42:11.217338 kubelet[3250]: I0430 04:42:11.217322 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c96fbd6-4b09-4812-8bce-1b4e454df1bb-lib-modules\") pod \"kube-proxy-47hcd\" (UID: \"5c96fbd6-4b09-4812-8bce-1b4e454df1bb\") " pod="kube-system/kube-proxy-47hcd" Apr 30 04:42:11.223926 containerd[1811]: time="2025-04-30T04:42:11.223886398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:11.223926 containerd[1811]: time="2025-04-30T04:42:11.223915422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:11.223926 containerd[1811]: time="2025-04-30T04:42:11.223922370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:11.224045 containerd[1811]: time="2025-04-30T04:42:11.223963052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:11.249540 systemd[1]: Started cri-containerd-10cdc963ebf7bee3752ea673b38a4172ebec7c076d19bc2ae3d91a95c2d91942.scope - libcontainer container 10cdc963ebf7bee3752ea673b38a4172ebec7c076d19bc2ae3d91a95c2d91942. 
Apr 30 04:42:11.282564 containerd[1811]: time="2025-04-30T04:42:11.282502418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gtccp,Uid:80e57e82-6b2a-4e2a-9e74-b801cae12f73,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10cdc963ebf7bee3752ea673b38a4172ebec7c076d19bc2ae3d91a95c2d91942\"" Apr 30 04:42:11.283748 containerd[1811]: time="2025-04-30T04:42:11.283728295Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 04:42:11.370157 containerd[1811]: time="2025-04-30T04:42:11.370043494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47hcd,Uid:5c96fbd6-4b09-4812-8bce-1b4e454df1bb,Namespace:kube-system,Attempt:0,}" Apr 30 04:42:11.380233 containerd[1811]: time="2025-04-30T04:42:11.380185673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:11.380478 containerd[1811]: time="2025-04-30T04:42:11.380442869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:11.380522 containerd[1811]: time="2025-04-30T04:42:11.380480307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:11.380557 containerd[1811]: time="2025-04-30T04:42:11.380542869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:11.408558 systemd[1]: Started cri-containerd-d84fe87c1c8e2464d9773f6264c65811b0ee1c72415179768d2accb31a4dcd29.scope - libcontainer container d84fe87c1c8e2464d9773f6264c65811b0ee1c72415179768d2accb31a4dcd29. Apr 30 04:42:11.425672 containerd[1811]: time="2025-04-30T04:42:11.425605965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47hcd,Uid:5c96fbd6-4b09-4812-8bce-1b4e454df1bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d84fe87c1c8e2464d9773f6264c65811b0ee1c72415179768d2accb31a4dcd29\"" Apr 30 04:42:11.428077 containerd[1811]: time="2025-04-30T04:42:11.428042985Z" level=info msg="CreateContainer within sandbox \"d84fe87c1c8e2464d9773f6264c65811b0ee1c72415179768d2accb31a4dcd29\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 04:42:11.436126 containerd[1811]: time="2025-04-30T04:42:11.436075277Z" level=info msg="CreateContainer within sandbox \"d84fe87c1c8e2464d9773f6264c65811b0ee1c72415179768d2accb31a4dcd29\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2942fc8c19bb813c835e8a93c90a89cecf204f4713734faff6a7e03e62a3f8bd\"" Apr 30 04:42:11.436392 containerd[1811]: time="2025-04-30T04:42:11.436378274Z" level=info msg="StartContainer for \"2942fc8c19bb813c835e8a93c90a89cecf204f4713734faff6a7e03e62a3f8bd\"" Apr 30 04:42:11.460396 systemd[1]: Started cri-containerd-2942fc8c19bb813c835e8a93c90a89cecf204f4713734faff6a7e03e62a3f8bd.scope - libcontainer container 2942fc8c19bb813c835e8a93c90a89cecf204f4713734faff6a7e03e62a3f8bd. 
Apr 30 04:42:11.477307 containerd[1811]: time="2025-04-30T04:42:11.477227990Z" level=info msg="StartContainer for \"2942fc8c19bb813c835e8a93c90a89cecf204f4713734faff6a7e03e62a3f8bd\" returns successfully" Apr 30 04:42:12.464658 kubelet[3250]: I0430 04:42:12.464592 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-47hcd" podStartSLOduration=1.4645812089999999 podStartE2EDuration="1.464581209s" podCreationTimestamp="2025-04-30 04:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 04:42:12.464479663 +0000 UTC m=+17.090547982" watchObservedRunningTime="2025-04-30 04:42:12.464581209 +0000 UTC m=+17.090649523" Apr 30 04:42:15.104203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388901341.mount: Deactivated successfully. Apr 30 04:42:15.716905 containerd[1811]: time="2025-04-30T04:42:15.716851187Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:15.717126 containerd[1811]: time="2025-04-30T04:42:15.717037210Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 04:42:15.717451 containerd[1811]: time="2025-04-30T04:42:15.717410720Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:15.718471 containerd[1811]: time="2025-04-30T04:42:15.718429069Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:15.719275 containerd[1811]: time="2025-04-30T04:42:15.719230260Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 4.435480613s" Apr 30 04:42:15.719275 containerd[1811]: time="2025-04-30T04:42:15.719247024Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 04:42:15.720241 containerd[1811]: time="2025-04-30T04:42:15.720201720Z" level=info msg="CreateContainer within sandbox \"10cdc963ebf7bee3752ea673b38a4172ebec7c076d19bc2ae3d91a95c2d91942\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 04:42:15.724206 containerd[1811]: time="2025-04-30T04:42:15.724159043Z" level=info msg="CreateContainer within sandbox \"10cdc963ebf7bee3752ea673b38a4172ebec7c076d19bc2ae3d91a95c2d91942\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bc0a98351e4958291f3df2d86b98b8a1986c4d344946e488983e23811a1b5a5e\"" Apr 30 04:42:15.724443 containerd[1811]: time="2025-04-30T04:42:15.724430205Z" level=info msg="StartContainer for \"bc0a98351e4958291f3df2d86b98b8a1986c4d344946e488983e23811a1b5a5e\"" Apr 30 04:42:15.753559 systemd[1]: Started cri-containerd-bc0a98351e4958291f3df2d86b98b8a1986c4d344946e488983e23811a1b5a5e.scope - libcontainer container bc0a98351e4958291f3df2d86b98b8a1986c4d344946e488983e23811a1b5a5e. 
Apr 30 04:42:15.763807 containerd[1811]: time="2025-04-30T04:42:15.763786615Z" level=info msg="StartContainer for \"bc0a98351e4958291f3df2d86b98b8a1986c4d344946e488983e23811a1b5a5e\" returns successfully" Apr 30 04:42:16.486171 kubelet[3250]: I0430 04:42:16.486068 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-gtccp" podStartSLOduration=2.049855848 podStartE2EDuration="6.486034749s" podCreationTimestamp="2025-04-30 04:42:10 +0000 UTC" firstStartedPulling="2025-04-30 04:42:11.283456948 +0000 UTC m=+15.909525279" lastFinishedPulling="2025-04-30 04:42:15.719635864 +0000 UTC m=+20.345704180" observedRunningTime="2025-04-30 04:42:16.485891516 +0000 UTC m=+21.111959900" watchObservedRunningTime="2025-04-30 04:42:16.486034749 +0000 UTC m=+21.112103133" Apr 30 04:42:18.494004 kubelet[3250]: I0430 04:42:18.493919 3250 topology_manager.go:215] "Topology Admit Handler" podUID="87a8b860-46c8-4af8-9e38-83b41de381dd" podNamespace="calico-system" podName="calico-typha-56486489b4-57p6l" Apr 30 04:42:18.509777 systemd[1]: Created slice kubepods-besteffort-pod87a8b860_46c8_4af8_9e38_83b41de381dd.slice - libcontainer container kubepods-besteffort-pod87a8b860_46c8_4af8_9e38_83b41de381dd.slice. Apr 30 04:42:18.539311 kubelet[3250]: I0430 04:42:18.539279 3250 topology_manager.go:215] "Topology Admit Handler" podUID="8737f38a-3e26-4bc2-96bd-27f9ab827574" podNamespace="calico-system" podName="calico-node-kw2n8" Apr 30 04:42:18.542968 systemd[1]: Created slice kubepods-besteffort-pod8737f38a_3e26_4bc2_96bd_27f9ab827574.slice - libcontainer container kubepods-besteffort-pod8737f38a_3e26_4bc2_96bd_27f9ab827574.slice. Apr 30 04:42:18.566788 kubelet[3250]: I0430 04:42:18.566766 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a8b860-46c8-4af8-9e38-83b41de381dd-tigera-ca-bundle\") pod \"calico-typha-56486489b4-57p6l\" (UID: \"87a8b860-46c8-4af8-9e38-83b41de381dd\") " pod="calico-system/calico-typha-56486489b4-57p6l" Apr 30 04:42:18.566788 kubelet[3250]: I0430 04:42:18.566790 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/87a8b860-46c8-4af8-9e38-83b41de381dd-typha-certs\") pod \"calico-typha-56486489b4-57p6l\" (UID: \"87a8b860-46c8-4af8-9e38-83b41de381dd\") " pod="calico-system/calico-typha-56486489b4-57p6l" Apr 30 04:42:18.566890 kubelet[3250]: I0430 04:42:18.566801 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8737f38a-3e26-4bc2-96bd-27f9ab827574-node-certs\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566890 kubelet[3250]: I0430 04:42:18.566810 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-cni-log-dir\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566890 kubelet[3250]: I0430 04:42:18.566819 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqld\" (UniqueName: \"kubernetes.io/projected/8737f38a-3e26-4bc2-96bd-27f9ab827574-kube-api-access-wgqld\") 
pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566890 kubelet[3250]: I0430 04:42:18.566828 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-cni-bin-dir\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566890 kubelet[3250]: I0430 04:42:18.566836 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-flexvol-driver-host\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566978 kubelet[3250]: I0430 04:42:18.566845 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-lib-modules\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566978 kubelet[3250]: I0430 04:42:18.566853 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-xtables-lock\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566978 kubelet[3250]: I0430 04:42:18.566862 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-var-run-calico\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.566978 kubelet[3250]: I0430 04:42:18.566878 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhq2z\" (UniqueName: \"kubernetes.io/projected/87a8b860-46c8-4af8-9e38-83b41de381dd-kube-api-access-hhq2z\") pod \"calico-typha-56486489b4-57p6l\" (UID: \"87a8b860-46c8-4af8-9e38-83b41de381dd\") " pod="calico-system/calico-typha-56486489b4-57p6l" Apr 30 04:42:18.566978 kubelet[3250]: I0430 04:42:18.566888 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-cni-net-dir\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.567058 kubelet[3250]: I0430 04:42:18.566897 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-policysync\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.567058 kubelet[3250]: I0430 04:42:18.566906 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8737f38a-3e26-4bc2-96bd-27f9ab827574-var-lib-calico\") pod \"calico-node-kw2n8\" (UID: 
\"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.567058 kubelet[3250]: I0430 04:42:18.566915 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8737f38a-3e26-4bc2-96bd-27f9ab827574-tigera-ca-bundle\") pod \"calico-node-kw2n8\" (UID: \"8737f38a-3e26-4bc2-96bd-27f9ab827574\") " pod="calico-system/calico-node-kw2n8" Apr 30 04:42:18.667511 kubelet[3250]: I0430 04:42:18.667423 3250 topology_manager.go:215] "Topology Admit Handler" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" podNamespace="calico-system" podName="csi-node-driver-d89lc" Apr 30 04:42:18.668569 kubelet[3250]: E0430 04:42:18.668482 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d89lc" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" Apr 30 04:42:18.672373 kubelet[3250]: E0430 04:42:18.672113 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.672373 kubelet[3250]: W0430 04:42:18.672196 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.673173 kubelet[3250]: E0430 04:42:18.673071 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.673926 kubelet[3250]: E0430 04:42:18.673862 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.673926 kubelet[3250]: W0430 04:42:18.673916 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.674451 kubelet[3250]: E0430 04:42:18.673975 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.677588 kubelet[3250]: E0430 04:42:18.677529 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.677588 kubelet[3250]: W0430 04:42:18.677587 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.678072 kubelet[3250]: E0430 04:42:18.677660 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.678427 kubelet[3250]: E0430 04:42:18.678371 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.678625 kubelet[3250]: W0430 04:42:18.678420 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.678625 kubelet[3250]: E0430 04:42:18.678475 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.694666 kubelet[3250]: E0430 04:42:18.694607 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.694666 kubelet[3250]: W0430 04:42:18.694665 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.695072 kubelet[3250]: E0430 04:42:18.694737 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.695395 kubelet[3250]: E0430 04:42:18.695357 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.695525 kubelet[3250]: W0430 04:42:18.695398 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.695525 kubelet[3250]: E0430 04:42:18.695444 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.767217 kubelet[3250]: E0430 04:42:18.767007 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.767217 kubelet[3250]: W0430 04:42:18.767053 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.767217 kubelet[3250]: E0430 04:42:18.767095 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.767865 kubelet[3250]: E0430 04:42:18.767728 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.767865 kubelet[3250]: W0430 04:42:18.767766 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.767865 kubelet[3250]: E0430 04:42:18.767807 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.768440 kubelet[3250]: E0430 04:42:18.768357 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.768440 kubelet[3250]: W0430 04:42:18.768388 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.768440 kubelet[3250]: E0430 04:42:18.768418 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.769070 kubelet[3250]: E0430 04:42:18.768986 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.769070 kubelet[3250]: W0430 04:42:18.769024 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.769070 kubelet[3250]: E0430 04:42:18.769059 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.769775 kubelet[3250]: E0430 04:42:18.769694 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.769775 kubelet[3250]: W0430 04:42:18.769731 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.769775 kubelet[3250]: E0430 04:42:18.769765 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.770565 kubelet[3250]: E0430 04:42:18.770490 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.770565 kubelet[3250]: W0430 04:42:18.770519 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.770565 kubelet[3250]: E0430 04:42:18.770548 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.771155 kubelet[3250]: E0430 04:42:18.771066 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.771155 kubelet[3250]: W0430 04:42:18.771111 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.771740 kubelet[3250]: E0430 04:42:18.771161 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.771944 kubelet[3250]: E0430 04:42:18.771782 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.771944 kubelet[3250]: W0430 04:42:18.771819 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.771944 kubelet[3250]: E0430 04:42:18.771852 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.772528 kubelet[3250]: E0430 04:42:18.772451 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.772528 kubelet[3250]: W0430 04:42:18.772487 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.772528 kubelet[3250]: E0430 04:42:18.772520 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.773086 kubelet[3250]: E0430 04:42:18.773010 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.773086 kubelet[3250]: W0430 04:42:18.773039 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.773086 kubelet[3250]: E0430 04:42:18.773068 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.773631 kubelet[3250]: E0430 04:42:18.773550 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.773631 kubelet[3250]: W0430 04:42:18.773578 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.773631 kubelet[3250]: E0430 04:42:18.773605 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.774148 kubelet[3250]: E0430 04:42:18.774071 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.774148 kubelet[3250]: W0430 04:42:18.774098 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.774148 kubelet[3250]: E0430 04:42:18.774125 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.774721 kubelet[3250]: E0430 04:42:18.774683 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.774852 kubelet[3250]: W0430 04:42:18.774725 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.774852 kubelet[3250]: E0430 04:42:18.774767 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.775431 kubelet[3250]: E0430 04:42:18.775344 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.775431 kubelet[3250]: W0430 04:42:18.775387 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.775431 kubelet[3250]: E0430 04:42:18.775432 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.776009 kubelet[3250]: E0430 04:42:18.775966 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.776009 kubelet[3250]: W0430 04:42:18.776002 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.776442 kubelet[3250]: E0430 04:42:18.776032 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.776632 kubelet[3250]: E0430 04:42:18.776599 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.776632 kubelet[3250]: W0430 04:42:18.776628 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.776831 kubelet[3250]: E0430 04:42:18.776657 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.777208 kubelet[3250]: E0430 04:42:18.777178 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.777358 kubelet[3250]: W0430 04:42:18.777206 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.777358 kubelet[3250]: E0430 04:42:18.777233 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.777821 kubelet[3250]: E0430 04:42:18.777743 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.777821 kubelet[3250]: W0430 04:42:18.777773 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.777821 kubelet[3250]: E0430 04:42:18.777800 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.778361 kubelet[3250]: E0430 04:42:18.778252 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.778361 kubelet[3250]: W0430 04:42:18.778315 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.778361 kubelet[3250]: E0430 04:42:18.778342 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.778897 kubelet[3250]: E0430 04:42:18.778810 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.778897 kubelet[3250]: W0430 04:42:18.778838 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.778897 kubelet[3250]: E0430 04:42:18.778864 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.779908 kubelet[3250]: E0430 04:42:18.779827 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.779908 kubelet[3250]: W0430 04:42:18.779869 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.780254 kubelet[3250]: E0430 04:42:18.779916 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.780254 kubelet[3250]: I0430 04:42:18.780011 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0582c164-a1e7-4b75-a502-4ea70094f195-kubelet-dir\") pod \"csi-node-driver-d89lc\" (UID: \"0582c164-a1e7-4b75-a502-4ea70094f195\") " pod="calico-system/csi-node-driver-d89lc" Apr 30 04:42:18.780665 kubelet[3250]: E0430 04:42:18.780611 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.780665 kubelet[3250]: W0430 04:42:18.780649 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.781007 kubelet[3250]: E0430 04:42:18.780689 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.781007 kubelet[3250]: I0430 04:42:18.780749 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0582c164-a1e7-4b75-a502-4ea70094f195-varrun\") pod \"csi-node-driver-d89lc\" (UID: \"0582c164-a1e7-4b75-a502-4ea70094f195\") " pod="calico-system/csi-node-driver-d89lc" Apr 30 04:42:18.781528 kubelet[3250]: E0430 04:42:18.781445 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.781528 kubelet[3250]: W0430 04:42:18.781485 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.781528 kubelet[3250]: E0430 04:42:18.781531 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.782186 kubelet[3250]: E0430 04:42:18.782105 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.782186 kubelet[3250]: W0430 04:42:18.782143 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.782186 kubelet[3250]: E0430 04:42:18.782187 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.782883 kubelet[3250]: E0430 04:42:18.782799 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.782883 kubelet[3250]: W0430 04:42:18.782837 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.782883 kubelet[3250]: E0430 04:42:18.782879 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.783237 kubelet[3250]: I0430 04:42:18.782936 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0582c164-a1e7-4b75-a502-4ea70094f195-registration-dir\") pod \"csi-node-driver-d89lc\" (UID: \"0582c164-a1e7-4b75-a502-4ea70094f195\") " pod="calico-system/csi-node-driver-d89lc" Apr 30 04:42:18.783793 kubelet[3250]: E0430 04:42:18.783694 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.783793 kubelet[3250]: W0430 04:42:18.783738 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.783793 kubelet[3250]: E0430 04:42:18.783782 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.784430 kubelet[3250]: E0430 04:42:18.784337 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.784430 kubelet[3250]: W0430 04:42:18.784370 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.784430 kubelet[3250]: E0430 04:42:18.784419 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.785061 kubelet[3250]: E0430 04:42:18.785011 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.785061 kubelet[3250]: W0430 04:42:18.785042 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.785437 kubelet[3250]: E0430 04:42:18.785081 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.785437 kubelet[3250]: I0430 04:42:18.785139 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0582c164-a1e7-4b75-a502-4ea70094f195-socket-dir\") pod \"csi-node-driver-d89lc\" (UID: \"0582c164-a1e7-4b75-a502-4ea70094f195\") " pod="calico-system/csi-node-driver-d89lc" Apr 30 04:42:18.785858 kubelet[3250]: E0430 04:42:18.785756 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.785858 kubelet[3250]: W0430 04:42:18.785799 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.785858 kubelet[3250]: E0430 04:42:18.785845 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.786403 kubelet[3250]: E0430 04:42:18.786318 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.786403 kubelet[3250]: W0430 04:42:18.786346 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.786403 kubelet[3250]: E0430 04:42:18.786381 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.786972 kubelet[3250]: E0430 04:42:18.786884 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.786972 kubelet[3250]: W0430 04:42:18.786922 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.786972 kubelet[3250]: E0430 04:42:18.786966 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.787434 kubelet[3250]: I0430 04:42:18.787024 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btlzw\" (UniqueName: \"kubernetes.io/projected/0582c164-a1e7-4b75-a502-4ea70094f195-kube-api-access-btlzw\") pod \"csi-node-driver-d89lc\" (UID: \"0582c164-a1e7-4b75-a502-4ea70094f195\") " pod="calico-system/csi-node-driver-d89lc" Apr 30 04:42:18.787704 kubelet[3250]: E0430 04:42:18.787617 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.787704 kubelet[3250]: W0430 04:42:18.787660 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.787704 kubelet[3250]: E0430 04:42:18.787704 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.788291 kubelet[3250]: E0430 04:42:18.788224 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.788424 kubelet[3250]: W0430 04:42:18.788285 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.788424 kubelet[3250]: E0430 04:42:18.788346 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.789068 kubelet[3250]: E0430 04:42:18.789029 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.789167 kubelet[3250]: W0430 04:42:18.789075 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.789167 kubelet[3250]: E0430 04:42:18.789127 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.789873 kubelet[3250]: E0430 04:42:18.789794 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.789873 kubelet[3250]: W0430 04:42:18.789832 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.789873 kubelet[3250]: E0430 04:42:18.789866 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.815787 containerd[1811]: time="2025-04-30T04:42:18.815703984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56486489b4-57p6l,Uid:87a8b860-46c8-4af8-9e38-83b41de381dd,Namespace:calico-system,Attempt:0,}" Apr 30 04:42:18.827997 containerd[1811]: time="2025-04-30T04:42:18.827920389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:18.827997 containerd[1811]: time="2025-04-30T04:42:18.827953776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:18.827997 containerd[1811]: time="2025-04-30T04:42:18.827964645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:18.828106 containerd[1811]: time="2025-04-30T04:42:18.828013085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:18.845111 containerd[1811]: time="2025-04-30T04:42:18.845090206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kw2n8,Uid:8737f38a-3e26-4bc2-96bd-27f9ab827574,Namespace:calico-system,Attempt:0,}" Apr 30 04:42:18.847416 systemd[1]: Started cri-containerd-1879561de82780c7f54827990c2569e1316e4c331488ba4bcdd6891446af57f6.scope - libcontainer container 1879561de82780c7f54827990c2569e1316e4c331488ba4bcdd6891446af57f6. Apr 30 04:42:18.854169 containerd[1811]: time="2025-04-30T04:42:18.853938064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:18.854169 containerd[1811]: time="2025-04-30T04:42:18.854155883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:18.854169 containerd[1811]: time="2025-04-30T04:42:18.854163852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:18.854284 containerd[1811]: time="2025-04-30T04:42:18.854202207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:18.860001 systemd[1]: Started cri-containerd-b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091.scope - libcontainer container b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091. Apr 30 04:42:18.869717 containerd[1811]: time="2025-04-30T04:42:18.869693223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kw2n8,Uid:8737f38a-3e26-4bc2-96bd-27f9ab827574,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091\"" Apr 30 04:42:18.869995 containerd[1811]: time="2025-04-30T04:42:18.869979786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56486489b4-57p6l,Uid:87a8b860-46c8-4af8-9e38-83b41de381dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"1879561de82780c7f54827990c2569e1316e4c331488ba4bcdd6891446af57f6\"" Apr 30 04:42:18.870425 containerd[1811]: time="2025-04-30T04:42:18.870411401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 04:42:18.888285 kubelet[3250]: E0430 04:42:18.888227 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.888285 kubelet[3250]: W0430 04:42:18.888239 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.888285 kubelet[3250]: E0430 04:42:18.888250 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.888394 kubelet[3250]: E0430 04:42:18.888380 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.888394 kubelet[3250]: W0430 04:42:18.888389 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.888437 kubelet[3250]: E0430 04:42:18.888397 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.888538 kubelet[3250]: E0430 04:42:18.888503 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.888538 kubelet[3250]: W0430 04:42:18.888510 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.888538 kubelet[3250]: E0430 04:42:18.888517 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.888651 kubelet[3250]: E0430 04:42:18.888612 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.888651 kubelet[3250]: W0430 04:42:18.888618 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.888651 kubelet[3250]: E0430 04:42:18.888626 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.888723 kubelet[3250]: E0430 04:42:18.888709 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.888723 kubelet[3250]: W0430 04:42:18.888714 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.888723 kubelet[3250]: E0430 04:42:18.888721 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.888838 kubelet[3250]: E0430 04:42:18.888802 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.888838 kubelet[3250]: W0430 04:42:18.888807 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.888838 kubelet[3250]: E0430 04:42:18.888815 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.888933 kubelet[3250]: E0430 04:42:18.888926 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.888933 kubelet[3250]: W0430 04:42:18.888930 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.888968 kubelet[3250]: E0430 04:42:18.888936 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889081 kubelet[3250]: E0430 04:42:18.889042 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889081 kubelet[3250]: W0430 04:42:18.889047 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889081 kubelet[3250]: E0430 04:42:18.889053 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.889156 kubelet[3250]: E0430 04:42:18.889150 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889156 kubelet[3250]: W0430 04:42:18.889156 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889196 kubelet[3250]: E0430 04:42:18.889174 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889252 kubelet[3250]: E0430 04:42:18.889247 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889282 kubelet[3250]: W0430 04:42:18.889252 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889282 kubelet[3250]: E0430 04:42:18.889264 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889353 kubelet[3250]: E0430 04:42:18.889348 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889371 kubelet[3250]: W0430 04:42:18.889353 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889371 kubelet[3250]: E0430 04:42:18.889359 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889467 kubelet[3250]: E0430 04:42:18.889462 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889484 kubelet[3250]: W0430 04:42:18.889467 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889484 kubelet[3250]: E0430 04:42:18.889474 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889566 kubelet[3250]: E0430 04:42:18.889561 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889584 kubelet[3250]: W0430 04:42:18.889566 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889584 kubelet[3250]: E0430 04:42:18.889578 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.889642 kubelet[3250]: E0430 04:42:18.889637 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889662 kubelet[3250]: W0430 04:42:18.889642 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889662 kubelet[3250]: E0430 04:42:18.889653 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889717 kubelet[3250]: E0430 04:42:18.889712 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889734 kubelet[3250]: W0430 04:42:18.889717 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889734 kubelet[3250]: E0430 04:42:18.889727 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889793 kubelet[3250]: E0430 04:42:18.889788 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889810 kubelet[3250]: W0430 04:42:18.889793 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889810 kubelet[3250]: E0430 04:42:18.889798 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889870 kubelet[3250]: E0430 04:42:18.889865 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889887 kubelet[3250]: W0430 04:42:18.889870 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889887 kubelet[3250]: E0430 04:42:18.889876 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.889953 kubelet[3250]: E0430 04:42:18.889948 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.889970 kubelet[3250]: W0430 04:42:18.889953 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.889970 kubelet[3250]: E0430 04:42:18.889959 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.890043 kubelet[3250]: E0430 04:42:18.890038 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.890060 kubelet[3250]: W0430 04:42:18.890043 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.890060 kubelet[3250]: E0430 04:42:18.890048 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.890182 kubelet[3250]: E0430 04:42:18.890177 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.890200 kubelet[3250]: W0430 04:42:18.890182 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.890200 kubelet[3250]: E0430 04:42:18.890188 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.890330 kubelet[3250]: E0430 04:42:18.890324 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.890330 kubelet[3250]: W0430 04:42:18.890329 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.890369 kubelet[3250]: E0430 04:42:18.890335 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.890412 kubelet[3250]: E0430 04:42:18.890407 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.890431 kubelet[3250]: W0430 04:42:18.890413 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.890431 kubelet[3250]: E0430 04:42:18.890419 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.890501 kubelet[3250]: E0430 04:42:18.890496 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.890520 kubelet[3250]: W0430 04:42:18.890501 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.890520 kubelet[3250]: E0430 04:42:18.890508 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:18.890638 kubelet[3250]: E0430 04:42:18.890633 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.890656 kubelet[3250]: W0430 04:42:18.890639 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.890656 kubelet[3250]: E0430 04:42:18.890645 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.890732 kubelet[3250]: E0430 04:42:18.890727 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.890750 kubelet[3250]: W0430 04:42:18.890732 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.890750 kubelet[3250]: E0430 04:42:18.890737 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 04:42:18.895691 kubelet[3250]: E0430 04:42:18.895651 3250 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 04:42:18.895691 kubelet[3250]: W0430 04:42:18.895659 3250 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 04:42:18.895691 kubelet[3250]: E0430 04:42:18.895666 3250 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 04:42:20.418928 kubelet[3250]: E0430 04:42:20.418803 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d89lc" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" Apr 30 04:42:20.533761 containerd[1811]: time="2025-04-30T04:42:20.533735365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:20.533999 containerd[1811]: time="2025-04-30T04:42:20.533979653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 04:42:20.534387 containerd[1811]: time="2025-04-30T04:42:20.534335157Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:20.535342 containerd[1811]: time="2025-04-30T04:42:20.535302520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:20.535710 containerd[1811]: time="2025-04-30T04:42:20.535699502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.665269539s" Apr 30 04:42:20.535737 containerd[1811]: time="2025-04-30T04:42:20.535714183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 04:42:20.536659 containerd[1811]: time="2025-04-30T04:42:20.536639048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 04:42:20.537435 containerd[1811]: time="2025-04-30T04:42:20.537421751Z" level=info msg="CreateContainer within sandbox \"b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 04:42:20.542542 containerd[1811]: time="2025-04-30T04:42:20.542495601Z" level=info msg="CreateContainer within sandbox \"b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9\"" Apr 30 04:42:20.542734 containerd[1811]: time="2025-04-30T04:42:20.542700686Z" level=info msg="StartContainer for \"fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9\"" Apr 30 04:42:20.569795 systemd[1]: Started cri-containerd-fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9.scope - libcontainer container fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9. 
Apr 30 04:42:20.621578 containerd[1811]: time="2025-04-30T04:42:20.621534867Z" level=info msg="StartContainer for \"fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9\" returns successfully" Apr 30 04:42:20.633348 systemd[1]: cri-containerd-fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9.scope: Deactivated successfully. Apr 30 04:42:20.678675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9-rootfs.mount: Deactivated successfully. Apr 30 04:42:20.869847 containerd[1811]: time="2025-04-30T04:42:20.869813064Z" level=info msg="shim disconnected" id=fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9 namespace=k8s.io Apr 30 04:42:20.869847 containerd[1811]: time="2025-04-30T04:42:20.869844338Z" level=warning msg="cleaning up after shim disconnected" id=fe892c9388c8765a0c0fedd61ca50ac4173baf796d3d8242ffb8dec858a233e9 namespace=k8s.io Apr 30 04:42:20.869847 containerd[1811]: time="2025-04-30T04:42:20.869850140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 04:42:22.418479 kubelet[3250]: E0430 04:42:22.418346 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d89lc" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" Apr 30 04:42:22.789429 containerd[1811]: time="2025-04-30T04:42:22.789381579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:22.789627 containerd[1811]: time="2025-04-30T04:42:22.789609022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 04:42:22.789966 containerd[1811]: time="2025-04-30T04:42:22.789924411Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:22.790839 containerd[1811]: time="2025-04-30T04:42:22.790798693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:22.791281 containerd[1811]: time="2025-04-30T04:42:22.791236100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.254514493s" Apr 30 04:42:22.791281 containerd[1811]: time="2025-04-30T04:42:22.791252383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 04:42:22.791692 containerd[1811]: time="2025-04-30T04:42:22.791679128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 04:42:22.794771 containerd[1811]: time="2025-04-30T04:42:22.794715761Z" level=info msg="CreateContainer within sandbox \"1879561de82780c7f54827990c2569e1316e4c331488ba4bcdd6891446af57f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 04:42:22.799444 
containerd[1811]: time="2025-04-30T04:42:22.799397817Z" level=info msg="CreateContainer within sandbox \"1879561de82780c7f54827990c2569e1316e4c331488ba4bcdd6891446af57f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a36f436e3a59e19faa142c7e3a62276f8fc38728534978b0d7b6f697e1f8febb\"" Apr 30 04:42:22.799680 containerd[1811]: time="2025-04-30T04:42:22.799639183Z" level=info msg="StartContainer for \"a36f436e3a59e19faa142c7e3a62276f8fc38728534978b0d7b6f697e1f8febb\"" Apr 30 04:42:22.826426 systemd[1]: Started cri-containerd-a36f436e3a59e19faa142c7e3a62276f8fc38728534978b0d7b6f697e1f8febb.scope - libcontainer container a36f436e3a59e19faa142c7e3a62276f8fc38728534978b0d7b6f697e1f8febb. Apr 30 04:42:22.850239 containerd[1811]: time="2025-04-30T04:42:22.850217465Z" level=info msg="StartContainer for \"a36f436e3a59e19faa142c7e3a62276f8fc38728534978b0d7b6f697e1f8febb\" returns successfully" Apr 30 04:42:23.519824 kubelet[3250]: I0430 04:42:23.519688 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56486489b4-57p6l" podStartSLOduration=1.598462872 podStartE2EDuration="5.519650804s" podCreationTimestamp="2025-04-30 04:42:18 +0000 UTC" firstStartedPulling="2025-04-30 04:42:18.870389495 +0000 UTC m=+23.496457810" lastFinishedPulling="2025-04-30 04:42:22.791577427 +0000 UTC m=+27.417645742" observedRunningTime="2025-04-30 04:42:23.518985999 +0000 UTC m=+28.145054383" watchObservedRunningTime="2025-04-30 04:42:23.519650804 +0000 UTC m=+28.145719173" Apr 30 04:42:24.418600 kubelet[3250]: E0430 04:42:24.418452 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d89lc" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" Apr 30 04:42:24.499531 kubelet[3250]: I0430 04:42:24.499477 3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 04:42:26.418832 kubelet[3250]: E0430 04:42:26.418798 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d89lc" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" Apr 30 04:42:26.743988 containerd[1811]: time="2025-04-30T04:42:26.743913189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:26.744177 containerd[1811]: time="2025-04-30T04:42:26.744142192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 04:42:26.744418 containerd[1811]: time="2025-04-30T04:42:26.744378654Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:26.745513 containerd[1811]: time="2025-04-30T04:42:26.745473019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:26.746276 containerd[1811]: time="2025-04-30T04:42:26.746230709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id 
\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.954535491s" Apr 30 04:42:26.746276 containerd[1811]: time="2025-04-30T04:42:26.746244907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 04:42:26.747261 containerd[1811]: time="2025-04-30T04:42:26.747240983Z" level=info msg="CreateContainer within sandbox \"b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 04:42:26.751842 containerd[1811]: time="2025-04-30T04:42:26.751827228Z" level=info msg="CreateContainer within sandbox \"b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e\"" Apr 30 04:42:26.752113 containerd[1811]: time="2025-04-30T04:42:26.752067716Z" level=info msg="StartContainer for \"c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e\"" Apr 30 04:42:26.776565 systemd[1]: Started cri-containerd-c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e.scope - libcontainer container c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e. Apr 30 04:42:26.788967 containerd[1811]: time="2025-04-30T04:42:26.788943726Z" level=info msg="StartContainer for \"c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e\" returns successfully" Apr 30 04:42:27.316643 systemd[1]: cri-containerd-c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e.scope: Deactivated successfully. Apr 30 04:42:27.328004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e-rootfs.mount: Deactivated successfully. 
Apr 30 04:42:27.352434 kubelet[3250]: I0430 04:42:27.352345 3250 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 04:42:27.388210 kubelet[3250]: I0430 04:42:27.388098 3250 topology_manager.go:215] "Topology Admit Handler" podUID="4d70bcf9-c697-4f69-b9f6-124c322e75ea" podNamespace="kube-system" podName="coredns-7db6d8ff4d-79mvm" Apr 30 04:42:27.389813 kubelet[3250]: I0430 04:42:27.389719 3250 topology_manager.go:215] "Topology Admit Handler" podUID="1babe518-7873-4e58-95dd-06aadbc220aa" podNamespace="calico-system" podName="calico-kube-controllers-75dd59b6b7-kxd8k" Apr 30 04:42:27.390846 kubelet[3250]: I0430 04:42:27.390783 3250 topology_manager.go:215] "Topology Admit Handler" podUID="9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tx9ch" Apr 30 04:42:27.391970 kubelet[3250]: I0430 04:42:27.391888 3250 topology_manager.go:215] "Topology Admit Handler" podUID="9f73889e-3589-43d5-a030-03e0590e642a" podNamespace="calico-apiserver" podName="calico-apiserver-5868569d6c-fwfcx" Apr 30 04:42:27.393113 kubelet[3250]: I0430 04:42:27.392995 3250 topology_manager.go:215] "Topology Admit Handler" podUID="eb875b09-c72b-4971-9fb3-a7ce0430d1b5" podNamespace="calico-apiserver" podName="calico-apiserver-5868569d6c-plw6m" Apr 30 04:42:27.407034 systemd[1]: Created slice kubepods-burstable-pod4d70bcf9_c697_4f69_b9f6_124c322e75ea.slice - libcontainer container kubepods-burstable-pod4d70bcf9_c697_4f69_b9f6_124c322e75ea.slice. Apr 30 04:42:27.420009 systemd[1]: Created slice kubepods-besteffort-pod1babe518_7873_4e58_95dd_06aadbc220aa.slice - libcontainer container kubepods-besteffort-pod1babe518_7873_4e58_95dd_06aadbc220aa.slice. Apr 30 04:42:27.429036 systemd[1]: Created slice kubepods-burstable-pod9d4fdb4c_763d_4e1c_b8ab_5cdb6a665269.slice - libcontainer container kubepods-burstable-pod9d4fdb4c_763d_4e1c_b8ab_5cdb6a665269.slice. Apr 30 04:42:27.435587 systemd[1]: Created slice kubepods-besteffort-pod9f73889e_3589_43d5_a030_03e0590e642a.slice - libcontainer container kubepods-besteffort-pod9f73889e_3589_43d5_a030_03e0590e642a.slice. Apr 30 04:42:27.439837 systemd[1]: Created slice kubepods-besteffort-podeb875b09_c72b_4971_9fb3_a7ce0430d1b5.slice - libcontainer container kubepods-besteffort-podeb875b09_c72b_4971_9fb3_a7ce0430d1b5.slice. 
Apr 30 04:42:27.453919 kubelet[3250]: I0430 04:42:27.453896 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sm4k\" (UniqueName: \"kubernetes.io/projected/eb875b09-c72b-4971-9fb3-a7ce0430d1b5-kube-api-access-7sm4k\") pod \"calico-apiserver-5868569d6c-plw6m\" (UID: \"eb875b09-c72b-4971-9fb3-a7ce0430d1b5\") " pod="calico-apiserver/calico-apiserver-5868569d6c-plw6m" Apr 30 04:42:27.453919 kubelet[3250]: I0430 04:42:27.453922 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n7rk\" (UniqueName: \"kubernetes.io/projected/4d70bcf9-c697-4f69-b9f6-124c322e75ea-kube-api-access-9n7rk\") pod \"coredns-7db6d8ff4d-79mvm\" (UID: \"4d70bcf9-c697-4f69-b9f6-124c322e75ea\") " pod="kube-system/coredns-7db6d8ff4d-79mvm" Apr 30 04:42:27.454249 kubelet[3250]: I0430 04:42:27.453937 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69p9x\" (UniqueName: \"kubernetes.io/projected/9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269-kube-api-access-69p9x\") pod \"coredns-7db6d8ff4d-tx9ch\" (UID: \"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269\") " pod="kube-system/coredns-7db6d8ff4d-tx9ch" Apr 30 04:42:27.454249 kubelet[3250]: I0430 04:42:27.453950 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr47q\" (UniqueName: \"kubernetes.io/projected/1babe518-7873-4e58-95dd-06aadbc220aa-kube-api-access-lr47q\") pod \"calico-kube-controllers-75dd59b6b7-kxd8k\" (UID: \"1babe518-7873-4e58-95dd-06aadbc220aa\") " pod="calico-system/calico-kube-controllers-75dd59b6b7-kxd8k" Apr 30 04:42:27.454249 kubelet[3250]: I0430 04:42:27.453963 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269-config-volume\") pod \"coredns-7db6d8ff4d-tx9ch\" (UID: \"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269\") " pod="kube-system/coredns-7db6d8ff4d-tx9ch" Apr 30 04:42:27.454249 kubelet[3250]: I0430 04:42:27.453980 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f73889e-3589-43d5-a030-03e0590e642a-calico-apiserver-certs\") pod \"calico-apiserver-5868569d6c-fwfcx\" (UID: \"9f73889e-3589-43d5-a030-03e0590e642a\") " pod="calico-apiserver/calico-apiserver-5868569d6c-fwfcx" Apr 30 04:42:27.454249 kubelet[3250]: I0430 04:42:27.454009 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb875b09-c72b-4971-9fb3-a7ce0430d1b5-calico-apiserver-certs\") pod \"calico-apiserver-5868569d6c-plw6m\" (UID: \"eb875b09-c72b-4971-9fb3-a7ce0430d1b5\") " pod="calico-apiserver/calico-apiserver-5868569d6c-plw6m" Apr 30 04:42:27.454375 kubelet[3250]: I0430 04:42:27.454075 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szdxb\" (UniqueName: \"kubernetes.io/projected/9f73889e-3589-43d5-a030-03e0590e642a-kube-api-access-szdxb\") pod \"calico-apiserver-5868569d6c-fwfcx\" (UID: \"9f73889e-3589-43d5-a030-03e0590e642a\") " pod="calico-apiserver/calico-apiserver-5868569d6c-fwfcx" Apr 30 04:42:27.454375 kubelet[3250]: I0430 04:42:27.454101 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d70bcf9-c697-4f69-b9f6-124c322e75ea-config-volume\") pod \"coredns-7db6d8ff4d-79mvm\" (UID: \"4d70bcf9-c697-4f69-b9f6-124c322e75ea\") " pod="kube-system/coredns-7db6d8ff4d-79mvm" Apr 30 04:42:27.454375 kubelet[3250]: I0430 04:42:27.454124 3250 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1babe518-7873-4e58-95dd-06aadbc220aa-tigera-ca-bundle\") pod \"calico-kube-controllers-75dd59b6b7-kxd8k\" (UID: \"1babe518-7873-4e58-95dd-06aadbc220aa\") " pod="calico-system/calico-kube-controllers-75dd59b6b7-kxd8k" Apr 30 04:42:27.714627 containerd[1811]: time="2025-04-30T04:42:27.714536810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-79mvm,Uid:4d70bcf9-c697-4f69-b9f6-124c322e75ea,Namespace:kube-system,Attempt:0,}" Apr 30 04:42:27.724799 containerd[1811]: time="2025-04-30T04:42:27.724718239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dd59b6b7-kxd8k,Uid:1babe518-7873-4e58-95dd-06aadbc220aa,Namespace:calico-system,Attempt:0,}" Apr 30 04:42:27.733183 containerd[1811]: time="2025-04-30T04:42:27.733108637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tx9ch,Uid:9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269,Namespace:kube-system,Attempt:0,}" Apr 30 04:42:27.739405 containerd[1811]: time="2025-04-30T04:42:27.739323305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-fwfcx,Uid:9f73889e-3589-43d5-a030-03e0590e642a,Namespace:calico-apiserver,Attempt:0,}" Apr 30 04:42:27.742635 containerd[1811]: time="2025-04-30T04:42:27.742537998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-plw6m,Uid:eb875b09-c72b-4971-9fb3-a7ce0430d1b5,Namespace:calico-apiserver,Attempt:0,}" Apr 30 04:42:27.999754 containerd[1811]: time="2025-04-30T04:42:27.999673330Z" level=info msg="shim disconnected" id=c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e namespace=k8s.io Apr 30 04:42:27.999754 containerd[1811]: time="2025-04-30T04:42:27.999702086Z" level=warning msg="cleaning up after shim disconnected" id=c2ee8211edff4fe536dc1402024e7f125fa8e8a7cb9832dcf37fdee344c3538e namespace=k8s.io Apr 30 04:42:27.999754 containerd[1811]: time="2025-04-30T04:42:27.999708350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 04:42:28.043049 containerd[1811]: time="2025-04-30T04:42:28.043015265Z" level=error msg="Failed to destroy network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043049 containerd[1811]: time="2025-04-30T04:42:28.043045458Z" level=error msg="Failed to destroy network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043245 containerd[1811]: time="2025-04-30T04:42:28.043232512Z" level=error msg="encountered an error cleaning up failed sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043289 containerd[1811]: time="2025-04-30T04:42:28.043243161Z" level=error msg="encountered an error cleaning up failed sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043289 containerd[1811]: time="2025-04-30T04:42:28.043269894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-plw6m,Uid:eb875b09-c72b-4971-9fb3-a7ce0430d1b5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043364 containerd[1811]: time="2025-04-30T04:42:28.043276459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-fwfcx,Uid:9f73889e-3589-43d5-a030-03e0590e642a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043461 kubelet[3250]: E0430 04:42:28.043434 3250 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043528 kubelet[3250]: E0430 04:42:28.043491 3250 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5868569d6c-fwfcx" Apr 30 04:42:28.043528 kubelet[3250]: E0430 04:42:28.043513 3250 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5868569d6c-fwfcx" Apr 30 04:42:28.043528 kubelet[3250]: E0430 04:42:28.043434 3250 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.043623 kubelet[3250]: E0430 04:42:28.043541 3250 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5868569d6c-plw6m" Apr 30 04:42:28.043623 kubelet[3250]: E0430 04:42:28.043556 3250 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5868569d6c-plw6m" Apr 30 04:42:28.043623 kubelet[3250]: E0430 04:42:28.043557 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5868569d6c-fwfcx_calico-apiserver(9f73889e-3589-43d5-a030-03e0590e642a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5868569d6c-fwfcx_calico-apiserver(9f73889e-3589-43d5-a030-03e0590e642a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5868569d6c-fwfcx" podUID="9f73889e-3589-43d5-a030-03e0590e642a" Apr 30 04:42:28.043706 kubelet[3250]: E0430 04:42:28.043584 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5868569d6c-plw6m_calico-apiserver(eb875b09-c72b-4971-9fb3-a7ce0430d1b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5868569d6c-plw6m_calico-apiserver(eb875b09-c72b-4971-9fb3-a7ce0430d1b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5868569d6c-plw6m" podUID="eb875b09-c72b-4971-9fb3-a7ce0430d1b5" Apr 30 04:42:28.044711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb-shm.mount: Deactivated successfully. Apr 30 04:42:28.044807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314-shm.mount: Deactivated successfully. 
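Every RunPodSandbox and destroy-network failure above reports the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes once it is running and has mounted /var/lib/calico/. Until that file exists, each pod stays in the CreatePodSandboxError retry loop seen here. Below is a minimal Go sketch of that readiness check, for illustration only; the names calicoReady and nodenameFile are mine, and the real plugin does considerably more.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is written by the calico/node container once it is up; the
    // CNI plugin checks for it before any sandbox network add or delete.
    const nodenameFile = "/var/lib/calico/nodename"

    // calicoReady mirrors the failing check: when the file is missing, os.Stat
    // yields exactly the "stat /var/lib/calico/nodename: no such file or
    // directory" text that appears in the errors above.
    func calicoReady() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        name, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(name)), nil
    }

    func main() {
        if name, err := calicoReady(); err != nil {
            fmt.Println("not ready:", err)
        } else {
            fmt.Println("calico/node ready on", name)
        }
    }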
Apr 30 04:42:28.050067 containerd[1811]: time="2025-04-30T04:42:28.050041154Z" level=error msg="Failed to destroy network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.050239 containerd[1811]: time="2025-04-30T04:42:28.050226991Z" level=error msg="encountered an error cleaning up failed sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.050300 containerd[1811]: time="2025-04-30T04:42:28.050261368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tx9ch,Uid:9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.050431 kubelet[3250]: E0430 04:42:28.050381 3250 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.050431 kubelet[3250]: E0430 04:42:28.050418 3250 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tx9ch" Apr 30 04:42:28.050431 kubelet[3250]: E0430 04:42:28.050431 3250 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tx9ch" Apr 30 04:42:28.050516 kubelet[3250]: E0430 04:42:28.050458 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tx9ch_kube-system(9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tx9ch_kube-system(9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-tx9ch" podUID="9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269" Apr 30 04:42:28.051219 containerd[1811]: time="2025-04-30T04:42:28.051201559Z" level=error msg="Failed to destroy network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051342 containerd[1811]: time="2025-04-30T04:42:28.051297539Z" level=error msg="Failed to destroy network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051380 containerd[1811]: time="2025-04-30T04:42:28.051367974Z" level=error msg="encountered an error cleaning up failed sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051404 containerd[1811]: time="2025-04-30T04:42:28.051391126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dd59b6b7-kxd8k,Uid:1babe518-7873-4e58-95dd-06aadbc220aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051437 containerd[1811]: time="2025-04-30T04:42:28.051425081Z" level=error msg="encountered an error cleaning up failed sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051456 containerd[1811]: time="2025-04-30T04:42:28.051445536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-79mvm,Uid:4d70bcf9-c697-4f69-b9f6-124c322e75ea,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051538 kubelet[3250]: E0430 04:42:28.051484 3250 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051538 kubelet[3250]: E0430 04:42:28.051493 3250 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.051538 kubelet[3250]: E0430 04:42:28.051506 3250 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75dd59b6b7-kxd8k" Apr 30 04:42:28.051538 kubelet[3250]: E0430 04:42:28.051512 3250 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-79mvm" Apr 30 04:42:28.051628 kubelet[3250]: E0430 04:42:28.051517 3250 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75dd59b6b7-kxd8k" Apr 30 04:42:28.051628 kubelet[3250]: E0430 04:42:28.051521 3250 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-79mvm" Apr 30 04:42:28.051628 kubelet[3250]: E0430 04:42:28.051534 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75dd59b6b7-kxd8k_calico-system(1babe518-7873-4e58-95dd-06aadbc220aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75dd59b6b7-kxd8k_calico-system(1babe518-7873-4e58-95dd-06aadbc220aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75dd59b6b7-kxd8k" podUID="1babe518-7873-4e58-95dd-06aadbc220aa" Apr 30 04:42:28.051699 kubelet[3250]: E0430 04:42:28.051542 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-79mvm_kube-system(4d70bcf9-c697-4f69-b9f6-124c322e75ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-79mvm_kube-system(4d70bcf9-c697-4f69-b9f6-124c322e75ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-79mvm" podUID="4d70bcf9-c697-4f69-b9f6-124c322e75ea" Apr 30 04:42:28.433827 systemd[1]: Created slice kubepods-besteffort-pod0582c164_a1e7_4b75_a502_4ea70094f195.slice - libcontainer container kubepods-besteffort-pod0582c164_a1e7_4b75_a502_4ea70094f195.slice. Apr 30 04:42:28.439799 containerd[1811]: time="2025-04-30T04:42:28.439686982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d89lc,Uid:0582c164-a1e7-4b75-a502-4ea70094f195,Namespace:calico-system,Attempt:0,}" Apr 30 04:42:28.469484 containerd[1811]: time="2025-04-30T04:42:28.469431461Z" level=error msg="Failed to destroy network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.469638 containerd[1811]: time="2025-04-30T04:42:28.469597792Z" level=error msg="encountered an error cleaning up failed sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.469638 containerd[1811]: time="2025-04-30T04:42:28.469627093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d89lc,Uid:0582c164-a1e7-4b75-a502-4ea70094f195,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.469809 kubelet[3250]: E0430 04:42:28.469760 3250 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.469809 kubelet[3250]: E0430 04:42:28.469794 3250 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d89lc" Apr 30 04:42:28.469809 kubelet[3250]: E0430 04:42:28.469806 3250 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-d89lc" Apr 30 04:42:28.470023 kubelet[3250]: E0430 04:42:28.469832 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d89lc_calico-system(0582c164-a1e7-4b75-a502-4ea70094f195)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d89lc_calico-system(0582c164-a1e7-4b75-a502-4ea70094f195)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d89lc" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" Apr 30 04:42:28.510215 kubelet[3250]: I0430 04:42:28.510167 3250 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:28.510668 containerd[1811]: time="2025-04-30T04:42:28.510641296Z" level=info msg="StopPodSandbox for \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\"" Apr 30 04:42:28.510831 containerd[1811]: time="2025-04-30T04:42:28.510811048Z" level=info msg="Ensure that sandbox d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f in task-service has been cleanup successfully" Apr 30 04:42:28.511924 kubelet[3250]: I0430 04:42:28.511905 3250 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:28.512007 containerd[1811]: time="2025-04-30T04:42:28.511937934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 04:42:28.512247 containerd[1811]: time="2025-04-30T04:42:28.512228342Z" level=info msg="StopPodSandbox for \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\"" Apr 30 04:42:28.512358 containerd[1811]: time="2025-04-30T04:42:28.512347639Z" level=info msg="Ensure that sandbox 3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314 in task-service has been cleanup successfully" Apr 30 04:42:28.512435 kubelet[3250]: I0430 04:42:28.512425 3250 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:28.512686 containerd[1811]: time="2025-04-30T04:42:28.512666106Z" level=info msg="StopPodSandbox for \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\"" Apr 30 04:42:28.512796 containerd[1811]: time="2025-04-30T04:42:28.512782327Z" level=info msg="Ensure that sandbox 897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e in task-service has been cleanup successfully" Apr 30 04:42:28.512885 kubelet[3250]: I0430 04:42:28.512875 3250 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:28.513129 containerd[1811]: time="2025-04-30T04:42:28.513117499Z" level=info msg="StopPodSandbox for \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\"" Apr 30 04:42:28.513225 containerd[1811]: time="2025-04-30T04:42:28.513214821Z" level=info msg="Ensure that sandbox 095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b in task-service has been cleanup successfully" Apr 30 04:42:28.513400 kubelet[3250]: I0430 
04:42:28.513389 3250 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:28.513725 containerd[1811]: time="2025-04-30T04:42:28.513706692Z" level=info msg="StopPodSandbox for \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\"" Apr 30 04:42:28.513854 containerd[1811]: time="2025-04-30T04:42:28.513838433Z" level=info msg="Ensure that sandbox c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109 in task-service has been cleanup successfully" Apr 30 04:42:28.514077 kubelet[3250]: I0430 04:42:28.514066 3250 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:28.514463 containerd[1811]: time="2025-04-30T04:42:28.514443523Z" level=info msg="StopPodSandbox for \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\"" Apr 30 04:42:28.514576 containerd[1811]: time="2025-04-30T04:42:28.514565647Z" level=info msg="Ensure that sandbox f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb in task-service has been cleanup successfully" Apr 30 04:42:28.528618 containerd[1811]: time="2025-04-30T04:42:28.528582944Z" level=error msg="StopPodSandbox for \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\" failed" error="failed to destroy network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.528770 kubelet[3250]: E0430 04:42:28.528744 3250 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:28.528828 kubelet[3250]: E0430 04:42:28.528788 3250 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f"} Apr 30 04:42:28.528854 kubelet[3250]: E0430 04:42:28.528847 3250 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d70bcf9-c697-4f69-b9f6-124c322e75ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 04:42:28.528902 kubelet[3250]: E0430 04:42:28.528863 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d70bcf9-c697-4f69-b9f6-124c322e75ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-79mvm" podUID="4d70bcf9-c697-4f69-b9f6-124c322e75ea" Apr 30 04:42:28.528941 containerd[1811]: time="2025-04-30T04:42:28.528863713Z" level=error msg="StopPodSandbox for \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\" failed" error="failed to destroy network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.528966 kubelet[3250]: E0430 04:42:28.528934 3250 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:28.528966 kubelet[3250]: E0430 04:42:28.528951 3250 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e"} Apr 30 04:42:28.529010 kubelet[3250]: E0430 04:42:28.528965 3250 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0582c164-a1e7-4b75-a502-4ea70094f195\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 04:42:28.529010 kubelet[3250]: E0430 04:42:28.528975 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0582c164-a1e7-4b75-a502-4ea70094f195\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d89lc" podUID="0582c164-a1e7-4b75-a502-4ea70094f195" Apr 30 04:42:28.529486 containerd[1811]: time="2025-04-30T04:42:28.529438975Z" level=error msg="StopPodSandbox for \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\" failed" error="failed to destroy network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.529595 kubelet[3250]: E0430 04:42:28.529580 3250 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:28.529657 kubelet[3250]: E0430 04:42:28.529598 3250 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b"} Apr 30 04:42:28.529657 kubelet[3250]: E0430 04:42:28.529624 3250 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 04:42:28.529657 kubelet[3250]: E0430 04:42:28.529641 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tx9ch" podUID="9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269" Apr 30 04:42:28.529817 containerd[1811]: time="2025-04-30T04:42:28.529801537Z" level=error msg="StopPodSandbox for \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\" failed" error="failed to destroy network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.529887 kubelet[3250]: E0430 04:42:28.529875 3250 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:28.529910 kubelet[3250]: E0430 04:42:28.529890 3250 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314"} Apr 30 04:42:28.529910 kubelet[3250]: E0430 04:42:28.529906 3250 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb875b09-c72b-4971-9fb3-a7ce0430d1b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 04:42:28.529956 kubelet[3250]: E0430 04:42:28.529916 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb875b09-c72b-4971-9fb3-a7ce0430d1b5\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5868569d6c-plw6m" podUID="eb875b09-c72b-4971-9fb3-a7ce0430d1b5" Apr 30 04:42:28.530911 containerd[1811]: time="2025-04-30T04:42:28.530896695Z" level=error msg="StopPodSandbox for \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\" failed" error="failed to destroy network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.530973 kubelet[3250]: E0430 04:42:28.530961 3250 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:28.530997 kubelet[3250]: E0430 04:42:28.530977 3250 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109"} Apr 30 04:42:28.530997 kubelet[3250]: E0430 04:42:28.530992 3250 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1babe518-7873-4e58-95dd-06aadbc220aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 04:42:28.531043 kubelet[3250]: E0430 04:42:28.531002 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1babe518-7873-4e58-95dd-06aadbc220aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75dd59b6b7-kxd8k" podUID="1babe518-7873-4e58-95dd-06aadbc220aa" Apr 30 04:42:28.531173 containerd[1811]: time="2025-04-30T04:42:28.531160758Z" level=error msg="StopPodSandbox for \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\" failed" error="failed to destroy network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 04:42:28.531223 kubelet[3250]: E0430 04:42:28.531213 3250 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:28.531246 kubelet[3250]: E0430 04:42:28.531225 3250 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb"} Apr 30 04:42:28.531246 kubelet[3250]: E0430 04:42:28.531237 3250 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f73889e-3589-43d5-a030-03e0590e642a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 04:42:28.531302 kubelet[3250]: E0430 04:42:28.531246 3250 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f73889e-3589-43d5-a030-03e0590e642a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5868569d6c-fwfcx" podUID="9f73889e-3589-43d5-a030-03e0590e642a" Apr 30 04:42:28.758749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b-shm.mount: Deactivated successfully. Apr 30 04:42:28.758987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109-shm.mount: Deactivated successfully. Apr 30 04:42:28.759175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f-shm.mount: Deactivated successfully. Apr 30 04:42:33.887062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876770814.mount: Deactivated successfully. 
Apr 30 04:42:33.907630 containerd[1811]: time="2025-04-30T04:42:33.907610899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:33.907861 containerd[1811]: time="2025-04-30T04:42:33.907847562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 04:42:33.908086 containerd[1811]: time="2025-04-30T04:42:33.908076553Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:33.909065 containerd[1811]: time="2025-04-30T04:42:33.909052382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:33.909417 containerd[1811]: time="2025-04-30T04:42:33.909405031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 5.397447046s" Apr 30 04:42:33.909446 containerd[1811]: time="2025-04-30T04:42:33.909420098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 04:42:33.912966 containerd[1811]: time="2025-04-30T04:42:33.912925713Z" level=info msg="CreateContainer within sandbox \"b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 04:42:33.918150 containerd[1811]: time="2025-04-30T04:42:33.918136529Z" level=info msg="CreateContainer within sandbox \"b5824985d5e65b050ba189f7187432076bd4d3827f76a8d8cb53babc13646091\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1075be772901d7d91f987c89754d50a3eff8709c4daceec63ffb64f09494f988\"" Apr 30 04:42:33.918461 containerd[1811]: time="2025-04-30T04:42:33.918449618Z" level=info msg="StartContainer for \"1075be772901d7d91f987c89754d50a3eff8709c4daceec63ffb64f09494f988\"" Apr 30 04:42:33.939441 systemd[1]: Started cri-containerd-1075be772901d7d91f987c89754d50a3eff8709c4daceec63ffb64f09494f988.scope - libcontainer container 1075be772901d7d91f987c89754d50a3eff8709c4daceec63ffb64f09494f988. Apr 30 04:42:33.954588 containerd[1811]: time="2025-04-30T04:42:33.954563510Z" level=info msg="StartContainer for \"1075be772901d7d91f987c89754d50a3eff8709c4daceec63ffb64f09494f988\" returns successfully" Apr 30 04:42:34.041183 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 04:42:34.041243 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
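For scale, the pull that unblocks the failing sandboxes above: per the entries just logged, ghcr.io/flatcar/calico/node:v3.29.3 is about 144068610 bytes and completed in 5.397447046 s, i.e. roughly 144068610 / 5.4 ≈ 26.7 MB/s, after which the calico-node container starts and the wireguard module loads.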
Apr 30 04:42:34.549717 kubelet[3250]: I0430 04:42:34.549674 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kw2n8" podStartSLOduration=1.5101637079999999 podStartE2EDuration="16.549652941s" podCreationTimestamp="2025-04-30 04:42:18 +0000 UTC" firstStartedPulling="2025-04-30 04:42:18.870252584 +0000 UTC m=+23.496320897" lastFinishedPulling="2025-04-30 04:42:33.909741815 +0000 UTC m=+38.535810130" observedRunningTime="2025-04-30 04:42:34.549341375 +0000 UTC m=+39.175409693" watchObservedRunningTime="2025-04-30 04:42:34.549652941 +0000 UTC m=+39.175721253" Apr 30 04:42:40.420122 containerd[1811]: time="2025-04-30T04:42:40.420022398Z" level=info msg="StopPodSandbox for \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\"" Apr 30 04:42:40.421163 containerd[1811]: time="2025-04-30T04:42:40.420032087Z" level=info msg="StopPodSandbox for \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\"" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.501 [INFO][5068] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.501 [INFO][5068] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" iface="eth0" netns="/var/run/netns/cni-ca8166df-d159-a13c-4bcc-ca2daaa8330b" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.501 [INFO][5068] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" iface="eth0" netns="/var/run/netns/cni-ca8166df-d159-a13c-4bcc-ca2daaa8330b" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.501 [INFO][5068] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" iface="eth0" netns="/var/run/netns/cni-ca8166df-d159-a13c-4bcc-ca2daaa8330b" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.501 [INFO][5068] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.501 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.523 [INFO][5100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.523 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.524 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.528 [WARNING][5100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.528 [INFO][5100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.529 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:40.532594 containerd[1811]: 2025-04-30 04:42:40.531 [INFO][5068] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:40.533132 containerd[1811]: time="2025-04-30T04:42:40.532692687Z" level=info msg="TearDown network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\" successfully" Apr 30 04:42:40.533132 containerd[1811]: time="2025-04-30T04:42:40.532721783Z" level=info msg="StopPodSandbox for \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\" returns successfully" Apr 30 04:42:40.533205 containerd[1811]: time="2025-04-30T04:42:40.533190153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-79mvm,Uid:4d70bcf9-c697-4f69-b9f6-124c322e75ea,Namespace:kube-system,Attempt:1,}" Apr 30 04:42:40.534401 systemd[1]: run-netns-cni\x2dca8166df\x2dd159\x2da13c\x2d4bcc\x2dca2daaa8330b.mount: Deactivated successfully. Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.503 [INFO][5067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.503 [INFO][5067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" iface="eth0" netns="/var/run/netns/cni-3678b513-ab9b-56f9-c6f4-2abe7a03a251" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.503 [INFO][5067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" iface="eth0" netns="/var/run/netns/cni-3678b513-ab9b-56f9-c6f4-2abe7a03a251" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.504 [INFO][5067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" iface="eth0" netns="/var/run/netns/cni-3678b513-ab9b-56f9-c6f4-2abe7a03a251" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.504 [INFO][5067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.504 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.524 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.524 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.529 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.533 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.533 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.534 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:40.536031 containerd[1811]: 2025-04-30 04:42:40.535 [INFO][5067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:40.536350 containerd[1811]: time="2025-04-30T04:42:40.536115945Z" level=info msg="TearDown network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\" successfully" Apr 30 04:42:40.536350 containerd[1811]: time="2025-04-30T04:42:40.536134726Z" level=info msg="StopPodSandbox for \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\" returns successfully" Apr 30 04:42:40.536468 containerd[1811]: time="2025-04-30T04:42:40.536431904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d89lc,Uid:0582c164-a1e7-4b75-a502-4ea70094f195,Namespace:calico-system,Attempt:1,}" Apr 30 04:42:40.537454 systemd[1]: run-netns-cni\x2d3678b513\x2dab9b\x2d56f9\x2dc6f4\x2d2abe7a03a251.mount: Deactivated successfully. 
Apr 30 04:42:40.592717 systemd-networkd[1607]: cali29335ad8f3a: Link UP Apr 30 04:42:40.592812 systemd-networkd[1607]: cali29335ad8f3a: Gained carrier Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.547 [INFO][5135] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.554 [INFO][5135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0 coredns-7db6d8ff4d- kube-system 4d70bcf9-c697-4f69-b9f6-124c322e75ea 767 0 2025-04-30 04:42:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-671b97f93d coredns-7db6d8ff4d-79mvm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29335ad8f3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.554 [INFO][5135] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.568 [INFO][5180] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" HandleID="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.573 [INFO][5180] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" HandleID="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133b80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-671b97f93d", "pod":"coredns-7db6d8ff4d-79mvm", "timestamp":"2025-04-30 04:42:40.568640072 +0000 UTC"}, Hostname:"ci-4081.3.3-a-671b97f93d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.573 [INFO][5180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.573 [INFO][5180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.573 [INFO][5180] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-671b97f93d' Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.574 [INFO][5180] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.576 [INFO][5180] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.579 [INFO][5180] ipam/ipam.go 489: Trying affinity for 192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.580 [INFO][5180] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.581 [INFO][5180] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.581 [INFO][5180] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.582 [INFO][5180] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.584 [INFO][5180] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.587 [INFO][5180] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.1/26] block=192.168.107.0/26 handle="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.587 [INFO][5180] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.1/26] handle="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.587 [INFO][5180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 04:42:40.619570 containerd[1811]: 2025-04-30 04:42:40.587 [INFO][5180] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.1/26] IPv6=[] ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" HandleID="k8s-pod-network.07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.621191 containerd[1811]: 2025-04-30 04:42:40.588 [INFO][5135] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d70bcf9-c697-4f69-b9f6-124c322e75ea", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"", Pod:"coredns-7db6d8ff4d-79mvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29335ad8f3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:40.621191 containerd[1811]: 2025-04-30 04:42:40.588 [INFO][5135] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.1/32] ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.621191 containerd[1811]: 2025-04-30 04:42:40.588 [INFO][5135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29335ad8f3a ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.621191 containerd[1811]: 2025-04-30 04:42:40.592 [INFO][5135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" 
WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.621191 containerd[1811]: 2025-04-30 04:42:40.592 [INFO][5135] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d70bcf9-c697-4f69-b9f6-124c322e75ea", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae", Pod:"coredns-7db6d8ff4d-79mvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29335ad8f3a", MAC:"ca:a9:49:b9:b2:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:40.621191 containerd[1811]: 2025-04-30 04:42:40.617 [INFO][5135] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-79mvm" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:40.634853 systemd-networkd[1607]: cali7ac410cda5e: Link UP Apr 30 04:42:40.635162 systemd-networkd[1607]: cali7ac410cda5e: Gained carrier Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.550 [INFO][5144] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.555 [INFO][5144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0 csi-node-driver- calico-system 0582c164-a1e7-4b75-a502-4ea70094f195 768 0 2025-04-30 04:42:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-671b97f93d csi-node-driver-d89lc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7ac410cda5e [] []}} ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.555 [INFO][5144] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.568 [INFO][5186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" HandleID="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.573 [INFO][5186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" HandleID="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbb60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-671b97f93d", "pod":"csi-node-driver-d89lc", "timestamp":"2025-04-30 04:42:40.568638296 +0000 UTC"}, Hostname:"ci-4081.3.3-a-671b97f93d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.573 [INFO][5186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.587 [INFO][5186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.587 [INFO][5186] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-671b97f93d' Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.588 [INFO][5186] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.590 [INFO][5186] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.592 [INFO][5186] ipam/ipam.go 489: Trying affinity for 192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.593 [INFO][5186] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.595 [INFO][5186] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.595 [INFO][5186] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.595 [INFO][5186] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.617 [INFO][5186] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.627 [INFO][5186] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.2/26] block=192.168.107.0/26 handle="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.628 [INFO][5186] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.2/26] handle="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.628 [INFO][5186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 04:42:40.642732 containerd[1811]: 2025-04-30 04:42:40.628 [INFO][5186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.2/26] IPv6=[] ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" HandleID="k8s-pod-network.51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.643407 containerd[1811]: 2025-04-30 04:42:40.632 [INFO][5144] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0582c164-a1e7-4b75-a502-4ea70094f195", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"", Pod:"csi-node-driver-d89lc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ac410cda5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:40.643407 containerd[1811]: 2025-04-30 04:42:40.633 [INFO][5144] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.2/32] ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.643407 containerd[1811]: 2025-04-30 04:42:40.633 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ac410cda5e ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.643407 containerd[1811]: 2025-04-30 04:42:40.635 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.643407 containerd[1811]: 2025-04-30 04:42:40.635 [INFO][5144] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" 
Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0582c164-a1e7-4b75-a502-4ea70094f195", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc", Pod:"csi-node-driver-d89lc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ac410cda5e", MAC:"96:5d:e0:a6:7d:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:40.643407 containerd[1811]: 2025-04-30 04:42:40.640 [INFO][5144] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc" Namespace="calico-system" Pod="csi-node-driver-d89lc" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:40.643934 containerd[1811]: time="2025-04-30T04:42:40.643858274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:40.643934 containerd[1811]: time="2025-04-30T04:42:40.643907588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:40.643934 containerd[1811]: time="2025-04-30T04:42:40.643924947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:40.643991 containerd[1811]: time="2025-04-30T04:42:40.643969733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:40.651867 containerd[1811]: time="2025-04-30T04:42:40.651825049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:40.651867 containerd[1811]: time="2025-04-30T04:42:40.651860565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:40.651959 containerd[1811]: time="2025-04-30T04:42:40.651873070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:40.651959 containerd[1811]: time="2025-04-30T04:42:40.651921174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:40.669838 systemd[1]: Started cri-containerd-07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae.scope - libcontainer container 07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae. Apr 30 04:42:40.678091 systemd[1]: Started cri-containerd-51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc.scope - libcontainer container 51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc. Apr 30 04:42:40.722061 containerd[1811]: time="2025-04-30T04:42:40.722015189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d89lc,Uid:0582c164-a1e7-4b75-a502-4ea70094f195,Namespace:calico-system,Attempt:1,} returns sandbox id \"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc\"" Apr 30 04:42:40.723694 containerd[1811]: time="2025-04-30T04:42:40.723664418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 04:42:40.735846 containerd[1811]: time="2025-04-30T04:42:40.735791764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-79mvm,Uid:4d70bcf9-c697-4f69-b9f6-124c322e75ea,Namespace:kube-system,Attempt:1,} returns sandbox id \"07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae\"" Apr 30 04:42:40.736941 containerd[1811]: time="2025-04-30T04:42:40.736928951Z" level=info msg="CreateContainer within sandbox \"07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 04:42:40.741452 containerd[1811]: time="2025-04-30T04:42:40.741410926Z" level=info msg="CreateContainer within sandbox \"07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3eedc015ff8ec74c03c2542c79b9a1ce32ce65809b749d665cb8b65d888eed98\"" Apr 30 04:42:40.741607 containerd[1811]: time="2025-04-30T04:42:40.741595171Z" level=info msg="StartContainer for \"3eedc015ff8ec74c03c2542c79b9a1ce32ce65809b749d665cb8b65d888eed98\"" Apr 30 04:42:40.763768 systemd[1]: Started cri-containerd-3eedc015ff8ec74c03c2542c79b9a1ce32ce65809b749d665cb8b65d888eed98.scope - libcontainer container 3eedc015ff8ec74c03c2542c79b9a1ce32ce65809b749d665cb8b65d888eed98. Apr 30 04:42:40.789426 containerd[1811]: time="2025-04-30T04:42:40.789362663Z" level=info msg="StartContainer for \"3eedc015ff8ec74c03c2542c79b9a1ce32ce65809b749d665cb8b65d888eed98\" returns successfully" Apr 30 04:42:41.419676 containerd[1811]: time="2025-04-30T04:42:41.419632384Z" level=info msg="StopPodSandbox for \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\"" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.462 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.462 [INFO][5418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" iface="eth0" netns="/var/run/netns/cni-26caa5db-ffcf-0eeb-2095-43fa28e2c6b7" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.463 [INFO][5418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" iface="eth0" netns="/var/run/netns/cni-26caa5db-ffcf-0eeb-2095-43fa28e2c6b7" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.463 [INFO][5418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" iface="eth0" netns="/var/run/netns/cni-26caa5db-ffcf-0eeb-2095-43fa28e2c6b7" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.463 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.463 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.486 [INFO][5432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.486 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.486 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.492 [WARNING][5432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.492 [INFO][5432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.494 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:41.496911 containerd[1811]: 2025-04-30 04:42:41.495 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:41.497770 containerd[1811]: time="2025-04-30T04:42:41.497017196Z" level=info msg="TearDown network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\" successfully" Apr 30 04:42:41.497770 containerd[1811]: time="2025-04-30T04:42:41.497047358Z" level=info msg="StopPodSandbox for \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\" returns successfully" Apr 30 04:42:41.497770 containerd[1811]: time="2025-04-30T04:42:41.497716703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-fwfcx,Uid:9f73889e-3589-43d5-a030-03e0590e642a,Namespace:calico-apiserver,Attempt:1,}" Apr 30 04:42:41.535805 systemd[1]: run-netns-cni\x2d26caa5db\x2dffcf\x2d0eeb\x2d2095\x2d43fa28e2c6b7.mount: Deactivated successfully. 
Apr 30 04:42:41.550763 systemd-networkd[1607]: calic5fd2686ab8: Link UP Apr 30 04:42:41.550894 systemd-networkd[1607]: calic5fd2686ab8: Gained carrier Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.512 [INFO][5447] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.518 [INFO][5447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0 calico-apiserver-5868569d6c- calico-apiserver 9f73889e-3589-43d5-a030-03e0590e642a 781 0 2025-04-30 04:42:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5868569d6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-671b97f93d calico-apiserver-5868569d6c-fwfcx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic5fd2686ab8 [] []}} ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.518 [INFO][5447] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.531 [INFO][5470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" HandleID="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.536 [INFO][5470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" HandleID="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-671b97f93d", "pod":"calico-apiserver-5868569d6c-fwfcx", "timestamp":"2025-04-30 04:42:41.531815694 +0000 UTC"}, Hostname:"ci-4081.3.3-a-671b97f93d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.536 [INFO][5470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.536 [INFO][5470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.536 [INFO][5470] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-671b97f93d' Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.537 [INFO][5470] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.539 [INFO][5470] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.541 [INFO][5470] ipam/ipam.go 489: Trying affinity for 192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.542 [INFO][5470] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.543 [INFO][5470] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.543 [INFO][5470] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.544 [INFO][5470] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5 Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.546 [INFO][5470] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.549 [INFO][5470] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.3/26] block=192.168.107.0/26 handle="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.549 [INFO][5470] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.3/26] handle="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.549 [INFO][5470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 04:42:41.555882 containerd[1811]: 2025-04-30 04:42:41.549 [INFO][5470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.3/26] IPv6=[] ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" HandleID="k8s-pod-network.fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.556288 containerd[1811]: 2025-04-30 04:42:41.549 [INFO][5447] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f73889e-3589-43d5-a030-03e0590e642a", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"", Pod:"calico-apiserver-5868569d6c-fwfcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5fd2686ab8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:41.556288 containerd[1811]: 2025-04-30 04:42:41.550 [INFO][5447] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.3/32] ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.556288 containerd[1811]: 2025-04-30 04:42:41.550 [INFO][5447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5fd2686ab8 ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.556288 containerd[1811]: 2025-04-30 04:42:41.550 [INFO][5447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.556288 containerd[1811]: 2025-04-30 04:42:41.550 [INFO][5447] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f73889e-3589-43d5-a030-03e0590e642a", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5", Pod:"calico-apiserver-5868569d6c-fwfcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5fd2686ab8", MAC:"16:c3:3b:db:76:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:41.556288 containerd[1811]: 2025-04-30 04:42:41.554 [INFO][5447] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-fwfcx" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:41.559000 kubelet[3250]: I0430 04:42:41.558952 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-79mvm" podStartSLOduration=31.558936486 podStartE2EDuration="31.558936486s" podCreationTimestamp="2025-04-30 04:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 04:42:41.558893906 +0000 UTC m=+46.184962222" watchObservedRunningTime="2025-04-30 04:42:41.558936486 +0000 UTC m=+46.185004799" Apr 30 04:42:41.565939 containerd[1811]: time="2025-04-30T04:42:41.565860683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:41.565939 containerd[1811]: time="2025-04-30T04:42:41.565910917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:41.566174 containerd[1811]: time="2025-04-30T04:42:41.565947160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:41.566246 containerd[1811]: time="2025-04-30T04:42:41.566230750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:41.592526 systemd[1]: Started cri-containerd-fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5.scope - libcontainer container fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5. Apr 30 04:42:41.619473 containerd[1811]: time="2025-04-30T04:42:41.619447282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-fwfcx,Uid:9f73889e-3589-43d5-a030-03e0590e642a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5\"" Apr 30 04:42:41.769721 systemd-networkd[1607]: cali7ac410cda5e: Gained IPv6LL Apr 30 04:42:42.025548 systemd-networkd[1607]: cali29335ad8f3a: Gained IPv6LL Apr 30 04:42:42.418497 containerd[1811]: time="2025-04-30T04:42:42.418435252Z" level=info msg="StopPodSandbox for \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\"" Apr 30 04:42:42.418497 containerd[1811]: time="2025-04-30T04:42:42.418456016Z" level=info msg="StopPodSandbox for \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\"" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" iface="eth0" netns="/var/run/netns/cni-0d96ea54-4e26-f772-ff62-073970a5705b" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" iface="eth0" netns="/var/run/netns/cni-0d96ea54-4e26-f772-ff62-073970a5705b" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5613] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" iface="eth0" netns="/var/run/netns/cni-0d96ea54-4e26-f772-ff62-073970a5705b" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.451 [INFO][5643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.451 [INFO][5643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.451 [INFO][5643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.455 [WARNING][5643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.455 [INFO][5643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.455 [INFO][5643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:42.457185 containerd[1811]: 2025-04-30 04:42:42.456 [INFO][5613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:42.457631 containerd[1811]: time="2025-04-30T04:42:42.457236404Z" level=info msg="TearDown network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\" successfully" Apr 30 04:42:42.457631 containerd[1811]: time="2025-04-30T04:42:42.457262018Z" level=info msg="StopPodSandbox for \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\" returns successfully" Apr 30 04:42:42.457694 containerd[1811]: time="2025-04-30T04:42:42.457651026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tx9ch,Uid:9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269,Namespace:kube-system,Attempt:1,}" Apr 30 04:42:42.458773 systemd[1]: run-netns-cni\x2d0d96ea54\x2d4e26\x2df772\x2dff62\x2d073970a5705b.mount: Deactivated successfully. Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5612] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5612] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" iface="eth0" netns="/var/run/netns/cni-989104bc-7c91-2baa-aed0-d489d5dc4c8b" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5612] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" iface="eth0" netns="/var/run/netns/cni-989104bc-7c91-2baa-aed0-d489d5dc4c8b" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5612] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" iface="eth0" netns="/var/run/netns/cni-989104bc-7c91-2baa-aed0-d489d5dc4c8b" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5612] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.441 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.451 [INFO][5642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.452 [INFO][5642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.456 [INFO][5642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.459 [WARNING][5642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.459 [INFO][5642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.459 [INFO][5642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:42.461105 containerd[1811]: 2025-04-30 04:42:42.460 [INFO][5612] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:42.461343 containerd[1811]: time="2025-04-30T04:42:42.461158691Z" level=info msg="TearDown network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\" successfully" Apr 30 04:42:42.461343 containerd[1811]: time="2025-04-30T04:42:42.461171233Z" level=info msg="StopPodSandbox for \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\" returns successfully" Apr 30 04:42:42.461540 containerd[1811]: time="2025-04-30T04:42:42.461499650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-plw6m,Uid:eb875b09-c72b-4971-9fb3-a7ce0430d1b5,Namespace:calico-apiserver,Attempt:1,}" Apr 30 04:42:42.513005 systemd-networkd[1607]: calib867b4301d2: Link UP Apr 30 04:42:42.513141 systemd-networkd[1607]: calib867b4301d2: Gained carrier Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.471 [INFO][5672] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.478 [INFO][5672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0 coredns-7db6d8ff4d- kube-system 9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269 798 0 2025-04-30 04:42:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-671b97f93d coredns-7db6d8ff4d-tx9ch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib867b4301d2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.479 [INFO][5672] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.492 [INFO][5714] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" HandleID="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.498 [INFO][5714] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" HandleID="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294020), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-671b97f93d", "pod":"coredns-7db6d8ff4d-tx9ch", "timestamp":"2025-04-30 04:42:42.492519994 +0000 UTC"}, Hostname:"ci-4081.3.3-a-671b97f93d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.498 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.498 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.498 [INFO][5714] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-671b97f93d' Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.499 [INFO][5714] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.501 [INFO][5714] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.503 [INFO][5714] ipam/ipam.go 489: Trying affinity for 192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.504 [INFO][5714] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.505 [INFO][5714] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.505 [INFO][5714] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.506 [INFO][5714] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.508 [INFO][5714] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.511 [INFO][5714] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.4/26] block=192.168.107.0/26 handle="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.511 [INFO][5714] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.4/26] handle="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.511 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 04:42:42.517838 containerd[1811]: 2025-04-30 04:42:42.511 [INFO][5714] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.4/26] IPv6=[] ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" HandleID="k8s-pod-network.fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.518653 containerd[1811]: 2025-04-30 04:42:42.512 [INFO][5672] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"", Pod:"coredns-7db6d8ff4d-tx9ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib867b4301d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:42.518653 containerd[1811]: 2025-04-30 04:42:42.512 [INFO][5672] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.4/32] ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.518653 containerd[1811]: 2025-04-30 04:42:42.512 [INFO][5672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib867b4301d2 ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.518653 containerd[1811]: 2025-04-30 04:42:42.513 [INFO][5672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" 
WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.518653 containerd[1811]: 2025-04-30 04:42:42.513 [INFO][5672] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c", Pod:"coredns-7db6d8ff4d-tx9ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib867b4301d2", MAC:"9a:98:9a:51:5d:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:42.518653 containerd[1811]: 2025-04-30 04:42:42.517 [INFO][5672] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tx9ch" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:42.525876 systemd-networkd[1607]: cali8087fe7599b: Link UP Apr 30 04:42:42.525999 systemd-networkd[1607]: cali8087fe7599b: Gained carrier Apr 30 04:42:42.527355 containerd[1811]: time="2025-04-30T04:42:42.527319472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:42.527355 containerd[1811]: time="2025-04-30T04:42:42.527351952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:42.527432 containerd[1811]: time="2025-04-30T04:42:42.527360870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:42.527432 containerd[1811]: time="2025-04-30T04:42:42.527407067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.476 [INFO][5686] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.481 [INFO][5686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0 calico-apiserver-5868569d6c- calico-apiserver eb875b09-c72b-4971-9fb3-a7ce0430d1b5 797 0 2025-04-30 04:42:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5868569d6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-671b97f93d calico-apiserver-5868569d6c-plw6m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8087fe7599b [] []}} ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.481 [INFO][5686] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.495 [INFO][5723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" HandleID="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.499 [INFO][5723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" HandleID="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ca120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-671b97f93d", "pod":"calico-apiserver-5868569d6c-plw6m", "timestamp":"2025-04-30 04:42:42.495048056 +0000 UTC"}, Hostname:"ci-4081.3.3-a-671b97f93d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.499 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.511 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.511 [INFO][5723] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-671b97f93d' Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.512 [INFO][5723] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.514 [INFO][5723] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.516 [INFO][5723] ipam/ipam.go 489: Trying affinity for 192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.517 [INFO][5723] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.518 [INFO][5723] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.518 [INFO][5723] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.519 [INFO][5723] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.521 [INFO][5723] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.524 [INFO][5723] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.5/26] block=192.168.107.0/26 handle="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.524 [INFO][5723] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.5/26] handle="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.524 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 04:42:42.531343 containerd[1811]: 2025-04-30 04:42:42.524 [INFO][5723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.5/26] IPv6=[] ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" HandleID="k8s-pod-network.91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.531756 containerd[1811]: 2025-04-30 04:42:42.525 [INFO][5686] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb875b09-c72b-4971-9fb3-a7ce0430d1b5", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"", Pod:"calico-apiserver-5868569d6c-plw6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8087fe7599b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:42.531756 containerd[1811]: 2025-04-30 04:42:42.525 [INFO][5686] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.5/32] ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.531756 containerd[1811]: 2025-04-30 04:42:42.525 [INFO][5686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8087fe7599b ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.531756 containerd[1811]: 2025-04-30 04:42:42.526 [INFO][5686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.531756 containerd[1811]: 2025-04-30 04:42:42.526 [INFO][5686] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb875b09-c72b-4971-9fb3-a7ce0430d1b5", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc", Pod:"calico-apiserver-5868569d6c-plw6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8087fe7599b", MAC:"ba:49:c8:89:14:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:42.531756 containerd[1811]: 2025-04-30 04:42:42.530 [INFO][5686] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc" Namespace="calico-apiserver" Pod="calico-apiserver-5868569d6c-plw6m" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:42.535819 systemd[1]: run-netns-cni\x2d989104bc\x2d7c91\x2d2baa\x2daed0\x2dd489d5dc4c8b.mount: Deactivated successfully. Apr 30 04:42:42.540421 containerd[1811]: time="2025-04-30T04:42:42.540379154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:42.540421 containerd[1811]: time="2025-04-30T04:42:42.540408880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:42.540421 containerd[1811]: time="2025-04-30T04:42:42.540416070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:42.540542 containerd[1811]: time="2025-04-30T04:42:42.540472090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:42.549553 systemd[1]: Started cri-containerd-fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c.scope - libcontainer container fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c. 
Apr 30 04:42:42.553543 systemd[1]: Started cri-containerd-91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc.scope - libcontainer container 91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc. Apr 30 04:42:42.573082 containerd[1811]: time="2025-04-30T04:42:42.573040473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tx9ch,Uid:9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c\"" Apr 30 04:42:42.574314 containerd[1811]: time="2025-04-30T04:42:42.574263323Z" level=info msg="CreateContainer within sandbox \"fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 04:42:42.576575 containerd[1811]: time="2025-04-30T04:42:42.576557840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868569d6c-plw6m,Uid:eb875b09-c72b-4971-9fb3-a7ce0430d1b5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc\"" Apr 30 04:42:42.586581 containerd[1811]: time="2025-04-30T04:42:42.586555084Z" level=info msg="CreateContainer within sandbox \"fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4cc67d292cea058fae83fd798153a33517c5d0a408db85e8eb2c350cba120bf6\"" Apr 30 04:42:42.586824 containerd[1811]: time="2025-04-30T04:42:42.586810960Z" level=info msg="StartContainer for \"4cc67d292cea058fae83fd798153a33517c5d0a408db85e8eb2c350cba120bf6\"" Apr 30 04:42:42.602312 systemd-networkd[1607]: calic5fd2686ab8: Gained IPv6LL Apr 30 04:42:42.609378 systemd[1]: Started cri-containerd-4cc67d292cea058fae83fd798153a33517c5d0a408db85e8eb2c350cba120bf6.scope - libcontainer container 4cc67d292cea058fae83fd798153a33517c5d0a408db85e8eb2c350cba120bf6. 
Apr 30 04:42:42.634962 containerd[1811]: time="2025-04-30T04:42:42.634931339Z" level=info msg="StartContainer for \"4cc67d292cea058fae83fd798153a33517c5d0a408db85e8eb2c350cba120bf6\" returns successfully" Apr 30 04:42:42.636827 containerd[1811]: time="2025-04-30T04:42:42.636811490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:42.637025 containerd[1811]: time="2025-04-30T04:42:42.637004801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 04:42:42.637457 containerd[1811]: time="2025-04-30T04:42:42.637441766Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:42.638466 containerd[1811]: time="2025-04-30T04:42:42.638451925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:42.638886 containerd[1811]: time="2025-04-30T04:42:42.638873816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.915178974s" Apr 30 04:42:42.638911 containerd[1811]: time="2025-04-30T04:42:42.638889238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 04:42:42.639377 containerd[1811]: time="2025-04-30T04:42:42.639365903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 04:42:42.639850 containerd[1811]: time="2025-04-30T04:42:42.639837828Z" level=info msg="CreateContainer within sandbox \"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 04:42:42.644864 containerd[1811]: time="2025-04-30T04:42:42.644815311Z" level=info msg="CreateContainer within sandbox \"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"70af88429ccab69eca21c63c33dedc5aa08e6d2f7f3b94ae5775477dd47cc92d\"" Apr 30 04:42:42.645071 containerd[1811]: time="2025-04-30T04:42:42.645026073Z" level=info msg="StartContainer for \"70af88429ccab69eca21c63c33dedc5aa08e6d2f7f3b94ae5775477dd47cc92d\"" Apr 30 04:42:42.676416 systemd[1]: Started cri-containerd-70af88429ccab69eca21c63c33dedc5aa08e6d2f7f3b94ae5775477dd47cc92d.scope - libcontainer container 70af88429ccab69eca21c63c33dedc5aa08e6d2f7f3b94ae5775477dd47cc92d. 
Apr 30 04:42:42.693863 containerd[1811]: time="2025-04-30T04:42:42.693835549Z" level=info msg="StartContainer for \"70af88429ccab69eca21c63c33dedc5aa08e6d2f7f3b94ae5775477dd47cc92d\" returns successfully" Apr 30 04:42:43.419294 containerd[1811]: time="2025-04-30T04:42:43.419268473Z" level=info msg="StopPodSandbox for \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\"" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.445 [INFO][5983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.445 [INFO][5983] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" iface="eth0" netns="/var/run/netns/cni-f2839665-999c-c0ee-0122-2644203a1d34" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.445 [INFO][5983] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" iface="eth0" netns="/var/run/netns/cni-f2839665-999c-c0ee-0122-2644203a1d34" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.446 [INFO][5983] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" iface="eth0" netns="/var/run/netns/cni-f2839665-999c-c0ee-0122-2644203a1d34" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.446 [INFO][5983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.446 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.455 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.456 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.456 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.459 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.459 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.459 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:43.461151 containerd[1811]: 2025-04-30 04:42:43.460 [INFO][5983] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:43.461485 containerd[1811]: time="2025-04-30T04:42:43.461250935Z" level=info msg="TearDown network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\" successfully" Apr 30 04:42:43.461485 containerd[1811]: time="2025-04-30T04:42:43.461280492Z" level=info msg="StopPodSandbox for \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\" returns successfully" Apr 30 04:42:43.461679 containerd[1811]: time="2025-04-30T04:42:43.461666270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dd59b6b7-kxd8k,Uid:1babe518-7873-4e58-95dd-06aadbc220aa,Namespace:calico-system,Attempt:1,}" Apr 30 04:42:43.518777 systemd-networkd[1607]: cali421bb9158b7: Link UP Apr 30 04:42:43.518897 systemd-networkd[1607]: cali421bb9158b7: Gained carrier Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.476 [INFO][6036] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.481 [INFO][6036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0 calico-kube-controllers-75dd59b6b7- calico-system 1babe518-7873-4e58-95dd-06aadbc220aa 817 0 2025-04-30 04:42:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75dd59b6b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-671b97f93d calico-kube-controllers-75dd59b6b7-kxd8k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali421bb9158b7 [] []}} ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.481 [INFO][6036] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.496 [INFO][6061] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" HandleID="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.501 [INFO][6061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" HandleID="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385f30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-671b97f93d", "pod":"calico-kube-controllers-75dd59b6b7-kxd8k", "timestamp":"2025-04-30 04:42:43.49620893 +0000 UTC"}, Hostname:"ci-4081.3.3-a-671b97f93d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.501 [INFO][6061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.501 [INFO][6061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.501 [INFO][6061] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-671b97f93d' Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.502 [INFO][6061] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.505 [INFO][6061] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.508 [INFO][6061] ipam/ipam.go 489: Trying affinity for 192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.509 [INFO][6061] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.510 [INFO][6061] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.0/26 host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.510 [INFO][6061] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.0/26 handle="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.511 [INFO][6061] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.514 [INFO][6061] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.0/26 handle="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.517 [INFO][6061] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.6/26] 
block=192.168.107.0/26 handle="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.517 [INFO][6061] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.6/26] handle="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" host="ci-4081.3.3-a-671b97f93d" Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.517 [INFO][6061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:43.525492 containerd[1811]: 2025-04-30 04:42:43.517 [INFO][6061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.6/26] IPv6=[] ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" HandleID="k8s-pod-network.e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.526028 containerd[1811]: 2025-04-30 04:42:43.518 [INFO][6036] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0", GenerateName:"calico-kube-controllers-75dd59b6b7-", Namespace:"calico-system", SelfLink:"", UID:"1babe518-7873-4e58-95dd-06aadbc220aa", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dd59b6b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"", Pod:"calico-kube-controllers-75dd59b6b7-kxd8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali421bb9158b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:43.526028 containerd[1811]: 2025-04-30 04:42:43.518 [INFO][6036] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.6/32] ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.526028 containerd[1811]: 2025-04-30 04:42:43.518 [INFO][6036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali421bb9158b7 ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" 
Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.526028 containerd[1811]: 2025-04-30 04:42:43.518 [INFO][6036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.526028 containerd[1811]: 2025-04-30 04:42:43.519 [INFO][6036] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0", GenerateName:"calico-kube-controllers-75dd59b6b7-", Namespace:"calico-system", SelfLink:"", UID:"1babe518-7873-4e58-95dd-06aadbc220aa", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dd59b6b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a", Pod:"calico-kube-controllers-75dd59b6b7-kxd8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali421bb9158b7", MAC:"76:bf:ba:fb:79:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:43.526028 containerd[1811]: 2025-04-30 04:42:43.524 [INFO][6036] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a" Namespace="calico-system" Pod="calico-kube-controllers-75dd59b6b7-kxd8k" WorkloadEndpoint="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:43.534542 containerd[1811]: time="2025-04-30T04:42:43.534327655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 04:42:43.534542 containerd[1811]: time="2025-04-30T04:42:43.534525353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 04:42:43.534542 containerd[1811]: time="2025-04-30T04:42:43.534532587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:43.534655 containerd[1811]: time="2025-04-30T04:42:43.534570770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 04:42:43.535857 systemd[1]: run-netns-cni\x2df2839665\x2d999c\x2dc0ee\x2d0122\x2d2644203a1d34.mount: Deactivated successfully. Apr 30 04:42:43.558513 systemd[1]: Started cri-containerd-e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a.scope - libcontainer container e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a. Apr 30 04:42:43.569232 kubelet[3250]: I0430 04:42:43.569186 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tx9ch" podStartSLOduration=33.56917166 podStartE2EDuration="33.56917166s" podCreationTimestamp="2025-04-30 04:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 04:42:43.568975845 +0000 UTC m=+48.195044175" watchObservedRunningTime="2025-04-30 04:42:43.56917166 +0000 UTC m=+48.195239978" Apr 30 04:42:43.593790 containerd[1811]: time="2025-04-30T04:42:43.593760739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dd59b6b7-kxd8k,Uid:1babe518-7873-4e58-95dd-06aadbc220aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a\"" Apr 30 04:42:43.817596 systemd-networkd[1607]: calib867b4301d2: Gained IPv6LL Apr 30 04:42:44.265451 systemd-networkd[1607]: cali8087fe7599b: Gained IPv6LL Apr 30 04:42:44.650440 systemd-networkd[1607]: cali421bb9158b7: Gained IPv6LL Apr 30 04:42:45.206809 containerd[1811]: time="2025-04-30T04:42:45.206739154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:45.207007 containerd[1811]: time="2025-04-30T04:42:45.206985510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 04:42:45.207382 containerd[1811]: time="2025-04-30T04:42:45.207341311Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:45.208320 containerd[1811]: time="2025-04-30T04:42:45.208303217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:45.208770 containerd[1811]: time="2025-04-30T04:42:45.208728101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.569346277s" Apr 30 04:42:45.208770 containerd[1811]: time="2025-04-30T04:42:45.208745351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 04:42:45.209262 containerd[1811]: time="2025-04-30T04:42:45.209217225Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 04:42:45.209857 containerd[1811]: time="2025-04-30T04:42:45.209843608Z" level=info msg="CreateContainer within sandbox \"fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 04:42:45.213534 containerd[1811]: time="2025-04-30T04:42:45.213491361Z" level=info msg="CreateContainer within sandbox \"fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c2c509574d783032d5628c1ed2cbd6f1bfa1270728b9e0237f0a9e5b1f439a5\"" Apr 30 04:42:45.213706 containerd[1811]: time="2025-04-30T04:42:45.213654315Z" level=info msg="StartContainer for \"2c2c509574d783032d5628c1ed2cbd6f1bfa1270728b9e0237f0a9e5b1f439a5\"" Apr 30 04:42:45.235575 systemd[1]: Started cri-containerd-2c2c509574d783032d5628c1ed2cbd6f1bfa1270728b9e0237f0a9e5b1f439a5.scope - libcontainer container 2c2c509574d783032d5628c1ed2cbd6f1bfa1270728b9e0237f0a9e5b1f439a5. Apr 30 04:42:45.257873 containerd[1811]: time="2025-04-30T04:42:45.257851364Z" level=info msg="StartContainer for \"2c2c509574d783032d5628c1ed2cbd6f1bfa1270728b9e0237f0a9e5b1f439a5\" returns successfully" Apr 30 04:42:45.620714 containerd[1811]: time="2025-04-30T04:42:45.620620871Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:45.620891 containerd[1811]: time="2025-04-30T04:42:45.620842893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 04:42:45.622314 containerd[1811]: time="2025-04-30T04:42:45.622297112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 413.065024ms" Apr 30 04:42:45.622349 containerd[1811]: time="2025-04-30T04:42:45.622314762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 04:42:45.622780 containerd[1811]: time="2025-04-30T04:42:45.622768092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 04:42:45.623313 containerd[1811]: time="2025-04-30T04:42:45.623300190Z" level=info msg="CreateContainer within sandbox \"91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 04:42:45.627338 containerd[1811]: time="2025-04-30T04:42:45.627272006Z" level=info msg="CreateContainer within sandbox \"91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"38181ab97af656b857450777c05b1c2544a8dd4a327c46b1576456cee3234b80\"" Apr 30 04:42:45.627513 containerd[1811]: time="2025-04-30T04:42:45.627500615Z" level=info msg="StartContainer for \"38181ab97af656b857450777c05b1c2544a8dd4a327c46b1576456cee3234b80\"" Apr 30 04:42:45.650390 systemd[1]: Started cri-containerd-38181ab97af656b857450777c05b1c2544a8dd4a327c46b1576456cee3234b80.scope - libcontainer container 38181ab97af656b857450777c05b1c2544a8dd4a327c46b1576456cee3234b80. 
Apr 30 04:42:45.673889 containerd[1811]: time="2025-04-30T04:42:45.673833733Z" level=info msg="StartContainer for \"38181ab97af656b857450777c05b1c2544a8dd4a327c46b1576456cee3234b80\" returns successfully" Apr 30 04:42:46.591108 kubelet[3250]: I0430 04:42:46.591059 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5868569d6c-fwfcx" podStartSLOduration=25.001978984 podStartE2EDuration="28.591043709s" podCreationTimestamp="2025-04-30 04:42:18 +0000 UTC" firstStartedPulling="2025-04-30 04:42:41.62009643 +0000 UTC m=+46.246164748" lastFinishedPulling="2025-04-30 04:42:45.209161158 +0000 UTC m=+49.835229473" observedRunningTime="2025-04-30 04:42:45.592829692 +0000 UTC m=+50.218898068" watchObservedRunningTime="2025-04-30 04:42:46.591043709 +0000 UTC m=+51.217112033" Apr 30 04:42:46.591547 kubelet[3250]: I0430 04:42:46.591145 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5868569d6c-plw6m" podStartSLOduration=25.54554003 podStartE2EDuration="28.59114003s" podCreationTimestamp="2025-04-30 04:42:18 +0000 UTC" firstStartedPulling="2025-04-30 04:42:42.577074502 +0000 UTC m=+47.203142817" lastFinishedPulling="2025-04-30 04:42:45.622674502 +0000 UTC m=+50.248742817" observedRunningTime="2025-04-30 04:42:46.590864705 +0000 UTC m=+51.216933035" watchObservedRunningTime="2025-04-30 04:42:46.59114003 +0000 UTC m=+51.217208352" Apr 30 04:42:46.739713 kubelet[3250]: I0430 04:42:46.739596 3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 04:42:47.447441 containerd[1811]: time="2025-04-30T04:42:47.447416057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:47.447714 containerd[1811]: time="2025-04-30T04:42:47.447652012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 04:42:47.447926 containerd[1811]: time="2025-04-30T04:42:47.447913216Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:47.449022 containerd[1811]: time="2025-04-30T04:42:47.449005546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:47.449459 containerd[1811]: time="2025-04-30T04:42:47.449442984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.826655495s" Apr 30 04:42:47.449494 containerd[1811]: time="2025-04-30T04:42:47.449462214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 04:42:47.449948 containerd[1811]: time="2025-04-30T04:42:47.449936003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 04:42:47.450485 containerd[1811]: 
time="2025-04-30T04:42:47.450472219Z" level=info msg="CreateContainer within sandbox \"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 04:42:47.455168 containerd[1811]: time="2025-04-30T04:42:47.455126030Z" level=info msg="CreateContainer within sandbox \"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dbfc176b70bfbc22e2d27dbdb0aa75521a7b6c8bf1312ff4f91e4bc8eca82045\"" Apr 30 04:42:47.455377 containerd[1811]: time="2025-04-30T04:42:47.455325986Z" level=info msg="StartContainer for \"dbfc176b70bfbc22e2d27dbdb0aa75521a7b6c8bf1312ff4f91e4bc8eca82045\"" Apr 30 04:42:47.486531 systemd[1]: Started cri-containerd-dbfc176b70bfbc22e2d27dbdb0aa75521a7b6c8bf1312ff4f91e4bc8eca82045.scope - libcontainer container dbfc176b70bfbc22e2d27dbdb0aa75521a7b6c8bf1312ff4f91e4bc8eca82045. Apr 30 04:42:47.516149 containerd[1811]: time="2025-04-30T04:42:47.516100590Z" level=info msg="StartContainer for \"dbfc176b70bfbc22e2d27dbdb0aa75521a7b6c8bf1312ff4f91e4bc8eca82045\" returns successfully" Apr 30 04:42:47.579285 kernel: bpftool[6468]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 04:42:47.594386 kubelet[3250]: I0430 04:42:47.594344 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d89lc" podStartSLOduration=22.867845874 podStartE2EDuration="29.594327128s" podCreationTimestamp="2025-04-30 04:42:18 +0000 UTC" firstStartedPulling="2025-04-30 04:42:40.723402759 +0000 UTC m=+45.349471097" lastFinishedPulling="2025-04-30 04:42:47.449884036 +0000 UTC m=+52.075952351" observedRunningTime="2025-04-30 04:42:47.59376403 +0000 UTC m=+52.219832358" watchObservedRunningTime="2025-04-30 04:42:47.594327128 +0000 UTC m=+52.220395448" Apr 30 04:42:47.742453 systemd-networkd[1607]: vxlan.calico: Link UP Apr 30 04:42:47.742461 systemd-networkd[1607]: vxlan.calico: Gained carrier Apr 30 04:42:48.464939 kubelet[3250]: I0430 04:42:48.464873 3250 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 04:42:48.465328 kubelet[3250]: I0430 04:42:48.464972 3250 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 04:42:49.321492 systemd-networkd[1607]: vxlan.calico: Gained IPv6LL Apr 30 04:42:49.810362 containerd[1811]: time="2025-04-30T04:42:49.810309295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:49.810587 containerd[1811]: time="2025-04-30T04:42:49.810575733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 04:42:49.810980 containerd[1811]: time="2025-04-30T04:42:49.810940501Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 04:42:49.811957 containerd[1811]: time="2025-04-30T04:42:49.811941703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 30 04:42:49.812444 containerd[1811]: time="2025-04-30T04:42:49.812422710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.362470729s" Apr 30 04:42:49.812444 containerd[1811]: time="2025-04-30T04:42:49.812438589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 04:42:49.815904 containerd[1811]: time="2025-04-30T04:42:49.815885628Z" level=info msg="CreateContainer within sandbox \"e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 04:42:49.820156 containerd[1811]: time="2025-04-30T04:42:49.820117712Z" level=info msg="CreateContainer within sandbox \"e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"59bb8148aa0bf09184cf7f09d08959f4e92c7116db2a9d3683283651138e61cd\"" Apr 30 04:42:49.820324 containerd[1811]: time="2025-04-30T04:42:49.820290156Z" level=info msg="StartContainer for \"59bb8148aa0bf09184cf7f09d08959f4e92c7116db2a9d3683283651138e61cd\"" Apr 30 04:42:49.846445 systemd[1]: Started cri-containerd-59bb8148aa0bf09184cf7f09d08959f4e92c7116db2a9d3683283651138e61cd.scope - libcontainer container 59bb8148aa0bf09184cf7f09d08959f4e92c7116db2a9d3683283651138e61cd. Apr 30 04:42:49.871073 containerd[1811]: time="2025-04-30T04:42:49.871052975Z" level=info msg="StartContainer for \"59bb8148aa0bf09184cf7f09d08959f4e92c7116db2a9d3683283651138e61cd\" returns successfully" Apr 30 04:42:50.677076 kubelet[3250]: I0430 04:42:50.677030 3250 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75dd59b6b7-kxd8k" podStartSLOduration=26.458691806 podStartE2EDuration="32.677014728s" podCreationTimestamp="2025-04-30 04:42:18 +0000 UTC" firstStartedPulling="2025-04-30 04:42:43.594529559 +0000 UTC m=+48.220597882" lastFinishedPulling="2025-04-30 04:42:49.812852488 +0000 UTC m=+54.438920804" observedRunningTime="2025-04-30 04:42:50.636869972 +0000 UTC m=+55.262938305" watchObservedRunningTime="2025-04-30 04:42:50.677014728 +0000 UTC m=+55.303083312" Apr 30 04:42:55.417445 containerd[1811]: time="2025-04-30T04:42:55.417424389Z" level=info msg="StopPodSandbox for \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\"" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.437 [WARNING][6728] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0582c164-a1e7-4b75-a502-4ea70094f195", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc", Pod:"csi-node-driver-d89lc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ac410cda5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.437 [INFO][6728] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.437 [INFO][6728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" iface="eth0" netns="" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.437 [INFO][6728] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.437 [INFO][6728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.448 [INFO][6742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.448 [INFO][6742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.448 [INFO][6742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.452 [WARNING][6742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.452 [INFO][6742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.453 [INFO][6742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.454758 containerd[1811]: 2025-04-30 04:42:55.454 [INFO][6728] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.454758 containerd[1811]: time="2025-04-30T04:42:55.454726874Z" level=info msg="TearDown network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\" successfully" Apr 30 04:42:55.454758 containerd[1811]: time="2025-04-30T04:42:55.454740267Z" level=info msg="StopPodSandbox for \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\" returns successfully" Apr 30 04:42:55.455126 containerd[1811]: time="2025-04-30T04:42:55.454996574Z" level=info msg="RemovePodSandbox for \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\"" Apr 30 04:42:55.455126 containerd[1811]: time="2025-04-30T04:42:55.455013234Z" level=info msg="Forcibly stopping sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\"" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.473 [WARNING][6768] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0582c164-a1e7-4b75-a502-4ea70094f195", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"51c40cd63b0612020ad7ea4b0e62121e91f4d3e355319bcbcb55c68c2029d4dc", Pod:"csi-node-driver-d89lc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ac410cda5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.473 [INFO][6768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.473 [INFO][6768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" iface="eth0" netns="" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.473 [INFO][6768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.473 [INFO][6768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.484 [INFO][6781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.484 [INFO][6781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.484 [INFO][6781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.488 [WARNING][6781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.488 [INFO][6781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" HandleID="k8s-pod-network.897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Workload="ci--4081.3.3--a--671b97f93d-k8s-csi--node--driver--d89lc-eth0" Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.489 [INFO][6781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.491134 containerd[1811]: 2025-04-30 04:42:55.490 [INFO][6768] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e" Apr 30 04:42:55.491134 containerd[1811]: time="2025-04-30T04:42:55.491124572Z" level=info msg="TearDown network for sandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\" successfully" Apr 30 04:42:55.492567 containerd[1811]: time="2025-04-30T04:42:55.492526855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 04:42:55.492567 containerd[1811]: time="2025-04-30T04:42:55.492564981Z" level=info msg="RemovePodSandbox \"897931dacabfe34eaefc6d525a811520f55f414257478d312e2c0bab1daad71e\" returns successfully" Apr 30 04:42:55.492901 containerd[1811]: time="2025-04-30T04:42:55.492867786Z" level=info msg="StopPodSandbox for \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\"" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.512 [WARNING][6811] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d70bcf9-c697-4f69-b9f6-124c322e75ea", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae", Pod:"coredns-7db6d8ff4d-79mvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29335ad8f3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.512 [INFO][6811] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.512 [INFO][6811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" iface="eth0" netns="" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.512 [INFO][6811] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.512 [INFO][6811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.523 [INFO][6825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.523 [INFO][6825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.523 [INFO][6825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.527 [WARNING][6825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.527 [INFO][6825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.528 [INFO][6825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.529702 containerd[1811]: 2025-04-30 04:42:55.529 [INFO][6811] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.530005 containerd[1811]: time="2025-04-30T04:42:55.529702464Z" level=info msg="TearDown network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\" successfully" Apr 30 04:42:55.530005 containerd[1811]: time="2025-04-30T04:42:55.529717973Z" level=info msg="StopPodSandbox for \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\" returns successfully" Apr 30 04:42:55.530005 containerd[1811]: time="2025-04-30T04:42:55.529967978Z" level=info msg="RemovePodSandbox for \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\"" Apr 30 04:42:55.530005 containerd[1811]: time="2025-04-30T04:42:55.529984112Z" level=info msg="Forcibly stopping sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\"" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.549 [WARNING][6851] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d70bcf9-c697-4f69-b9f6-124c322e75ea", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"07e2b650ec36ffe471196decc3796686906155fac0db7790527a42765d3b55ae", Pod:"coredns-7db6d8ff4d-79mvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29335ad8f3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.549 [INFO][6851] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.549 [INFO][6851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" iface="eth0" netns="" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.549 [INFO][6851] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.549 [INFO][6851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.561 [INFO][6866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.561 [INFO][6866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.561 [INFO][6866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.565 [WARNING][6866] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.565 [INFO][6866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" HandleID="k8s-pod-network.d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--79mvm-eth0" Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.566 [INFO][6866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.567908 containerd[1811]: 2025-04-30 04:42:55.567 [INFO][6851] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f" Apr 30 04:42:55.568228 containerd[1811]: time="2025-04-30T04:42:55.567931094Z" level=info msg="TearDown network for sandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\" successfully" Apr 30 04:42:55.575123 containerd[1811]: time="2025-04-30T04:42:55.575105576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 04:42:55.575169 containerd[1811]: time="2025-04-30T04:42:55.575136571Z" level=info msg="RemovePodSandbox \"d3481dd14da5de1e176071028fd609be701ad3be5b6d7fd05a5ac37ae9daa32f\" returns successfully" Apr 30 04:42:55.575330 containerd[1811]: time="2025-04-30T04:42:55.575292947Z" level=info msg="StopPodSandbox for \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\"" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.593 [WARNING][6899] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb875b09-c72b-4971-9fb3-a7ce0430d1b5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc", Pod:"calico-apiserver-5868569d6c-plw6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8087fe7599b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.593 [INFO][6899] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.593 [INFO][6899] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" iface="eth0" netns="" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.593 [INFO][6899] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.593 [INFO][6899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.603 [INFO][6915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.603 [INFO][6915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.604 [INFO][6915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.607 [WARNING][6915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.607 [INFO][6915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.608 [INFO][6915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.609845 containerd[1811]: 2025-04-30 04:42:55.609 [INFO][6899] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.610156 containerd[1811]: time="2025-04-30T04:42:55.609846807Z" level=info msg="TearDown network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\" successfully" Apr 30 04:42:55.610156 containerd[1811]: time="2025-04-30T04:42:55.609862494Z" level=info msg="StopPodSandbox for \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\" returns successfully" Apr 30 04:42:55.610156 containerd[1811]: time="2025-04-30T04:42:55.610139534Z" level=info msg="RemovePodSandbox for \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\"" Apr 30 04:42:55.610156 containerd[1811]: time="2025-04-30T04:42:55.610154600Z" level=info msg="Forcibly stopping sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\"" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.628 [WARNING][6940] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb875b09-c72b-4971-9fb3-a7ce0430d1b5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"91319d0c4dc1de18a21120cd034a89ab5434f1808f96c2abffc6505690f4e0bc", Pod:"calico-apiserver-5868569d6c-plw6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8087fe7599b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.628 [INFO][6940] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.628 [INFO][6940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" iface="eth0" netns="" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.628 [INFO][6940] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.628 [INFO][6940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.641 [INFO][6953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.641 [INFO][6953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.641 [INFO][6953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.645 [WARNING][6953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.645 [INFO][6953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" HandleID="k8s-pod-network.3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--plw6m-eth0" Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.646 [INFO][6953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.648253 containerd[1811]: 2025-04-30 04:42:55.647 [INFO][6940] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314" Apr 30 04:42:55.648603 containerd[1811]: time="2025-04-30T04:42:55.648284355Z" level=info msg="TearDown network for sandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\" successfully" Apr 30 04:42:55.649640 containerd[1811]: time="2025-04-30T04:42:55.649628150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 04:42:55.649668 containerd[1811]: time="2025-04-30T04:42:55.649654916Z" level=info msg="RemovePodSandbox \"3f825dac6907cda7125b809d4ead3f0179795bba5d3a92dacb418a591e9e2314\" returns successfully" Apr 30 04:42:55.649925 containerd[1811]: time="2025-04-30T04:42:55.649915239Z" level=info msg="StopPodSandbox for \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\"" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.668 [WARNING][6981] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0", GenerateName:"calico-kube-controllers-75dd59b6b7-", Namespace:"calico-system", SelfLink:"", UID:"1babe518-7873-4e58-95dd-06aadbc220aa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dd59b6b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a", Pod:"calico-kube-controllers-75dd59b6b7-kxd8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali421bb9158b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.668 [INFO][6981] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.668 [INFO][6981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" iface="eth0" netns="" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.668 [INFO][6981] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.668 [INFO][6981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.678 [INFO][6996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.679 [INFO][6996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.679 [INFO][6996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.682 [WARNING][6996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.682 [INFO][6996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.683 [INFO][6996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.685254 containerd[1811]: 2025-04-30 04:42:55.684 [INFO][6981] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.685567 containerd[1811]: time="2025-04-30T04:42:55.685289587Z" level=info msg="TearDown network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\" successfully" Apr 30 04:42:55.685567 containerd[1811]: time="2025-04-30T04:42:55.685306778Z" level=info msg="StopPodSandbox for \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\" returns successfully" Apr 30 04:42:55.685605 containerd[1811]: time="2025-04-30T04:42:55.685585116Z" level=info msg="RemovePodSandbox for \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\"" Apr 30 04:42:55.685605 containerd[1811]: time="2025-04-30T04:42:55.685601588Z" level=info msg="Forcibly stopping sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\"" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.706 [WARNING][7025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0", GenerateName:"calico-kube-controllers-75dd59b6b7-", Namespace:"calico-system", SelfLink:"", UID:"1babe518-7873-4e58-95dd-06aadbc220aa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dd59b6b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"e5d0b581506109419303629c1946c6fd632dfd8e8f4cb9a960a90a4c6a66f43a", Pod:"calico-kube-controllers-75dd59b6b7-kxd8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali421bb9158b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.707 [INFO][7025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.707 [INFO][7025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" iface="eth0" netns="" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.707 [INFO][7025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.707 [INFO][7025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.719 [INFO][7040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.719 [INFO][7040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.719 [INFO][7040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.723 [WARNING][7040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.724 [INFO][7040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" HandleID="k8s-pod-network.c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--kube--controllers--75dd59b6b7--kxd8k-eth0" Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.725 [INFO][7040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.726635 containerd[1811]: 2025-04-30 04:42:55.725 [INFO][7025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109" Apr 30 04:42:55.726975 containerd[1811]: time="2025-04-30T04:42:55.726667234Z" level=info msg="TearDown network for sandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\" successfully" Apr 30 04:42:55.728280 containerd[1811]: time="2025-04-30T04:42:55.728235150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 04:42:55.728280 containerd[1811]: time="2025-04-30T04:42:55.728265558Z" level=info msg="RemovePodSandbox \"c0b8d139d4c30f2aaebaac5e3fd74dde066abf55564cfae5db2788ed8050d109\" returns successfully" Apr 30 04:42:55.728568 containerd[1811]: time="2025-04-30T04:42:55.728538911Z" level=info msg="StopPodSandbox for \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\"" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.747 [WARNING][7069] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c", Pod:"coredns-7db6d8ff4d-tx9ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib867b4301d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.747 [INFO][7069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.747 [INFO][7069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" iface="eth0" netns="" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.747 [INFO][7069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.747 [INFO][7069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.757 [INFO][7083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.757 [INFO][7083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.757 [INFO][7083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.761 [WARNING][7083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.761 [INFO][7083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.762 [INFO][7083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.763826 containerd[1811]: 2025-04-30 04:42:55.763 [INFO][7069] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.763826 containerd[1811]: time="2025-04-30T04:42:55.763812757Z" level=info msg="TearDown network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\" successfully" Apr 30 04:42:55.763826 containerd[1811]: time="2025-04-30T04:42:55.763827951Z" level=info msg="StopPodSandbox for \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\" returns successfully" Apr 30 04:42:55.764153 containerd[1811]: time="2025-04-30T04:42:55.764080713Z" level=info msg="RemovePodSandbox for \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\"" Apr 30 04:42:55.764153 containerd[1811]: time="2025-04-30T04:42:55.764097255Z" level=info msg="Forcibly stopping sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\"" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.782 [WARNING][7110] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9d4fdb4c-763d-4e1c-b8ab-5cdb6a665269", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"fe6d9af776c56207cf5c2b8e5f2cb7f215fd4ad586391bc7ba22cf2f7da82a8c", Pod:"coredns-7db6d8ff4d-tx9ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib867b4301d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.782 [INFO][7110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.782 [INFO][7110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" iface="eth0" netns="" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.782 [INFO][7110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.782 [INFO][7110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.793 [INFO][7122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.793 [INFO][7122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.793 [INFO][7122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.797 [WARNING][7122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.797 [INFO][7122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" HandleID="k8s-pod-network.095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Workload="ci--4081.3.3--a--671b97f93d-k8s-coredns--7db6d8ff4d--tx9ch-eth0" Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.798 [INFO][7122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.800097 containerd[1811]: 2025-04-30 04:42:55.799 [INFO][7110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b" Apr 30 04:42:55.800414 containerd[1811]: time="2025-04-30T04:42:55.800120268Z" level=info msg="TearDown network for sandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\" successfully" Apr 30 04:42:55.801483 containerd[1811]: time="2025-04-30T04:42:55.801440717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 04:42:55.801483 containerd[1811]: time="2025-04-30T04:42:55.801470788Z" level=info msg="RemovePodSandbox \"095c15aa7b4c538f5339e16c77356bd24fb15e742020339447378f684f09686b\" returns successfully" Apr 30 04:42:55.801737 containerd[1811]: time="2025-04-30T04:42:55.801726160Z" level=info msg="StopPodSandbox for \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\"" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.822 [WARNING][7152] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f73889e-3589-43d5-a030-03e0590e642a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5", Pod:"calico-apiserver-5868569d6c-fwfcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5fd2686ab8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.823 [INFO][7152] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.823 [INFO][7152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" iface="eth0" netns="" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.823 [INFO][7152] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.823 [INFO][7152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.838 [INFO][7167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.838 [INFO][7167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.838 [INFO][7167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.842 [WARNING][7167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.842 [INFO][7167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.844 [INFO][7167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.845422 containerd[1811]: 2025-04-30 04:42:55.844 [INFO][7152] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.845422 containerd[1811]: time="2025-04-30T04:42:55.845416990Z" level=info msg="TearDown network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\" successfully" Apr 30 04:42:55.845786 containerd[1811]: time="2025-04-30T04:42:55.845432067Z" level=info msg="StopPodSandbox for \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\" returns successfully" Apr 30 04:42:55.845786 containerd[1811]: time="2025-04-30T04:42:55.845687642Z" level=info msg="RemovePodSandbox for \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\"" Apr 30 04:42:55.845786 containerd[1811]: time="2025-04-30T04:42:55.845706966Z" level=info msg="Forcibly stopping sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\"" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.865 [WARNING][7195] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0", GenerateName:"calico-apiserver-5868569d6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f73889e-3589-43d5-a030-03e0590e642a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868569d6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-671b97f93d", ContainerID:"fc59e0ab8ef26d51497f06d04a7d883dface0e7fe1e01defa6bbf58cbfffbee5", Pod:"calico-apiserver-5868569d6c-fwfcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5fd2686ab8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.865 [INFO][7195] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.865 [INFO][7195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" iface="eth0" netns="" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.865 [INFO][7195] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.865 [INFO][7195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.876 [INFO][7209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.876 [INFO][7209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.876 [INFO][7209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.880 [WARNING][7209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.880 [INFO][7209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" HandleID="k8s-pod-network.f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Workload="ci--4081.3.3--a--671b97f93d-k8s-calico--apiserver--5868569d6c--fwfcx-eth0" Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.881 [INFO][7209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 04:42:55.883872 containerd[1811]: 2025-04-30 04:42:55.882 [INFO][7195] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb" Apr 30 04:42:55.884184 containerd[1811]: time="2025-04-30T04:42:55.883892824Z" level=info msg="TearDown network for sandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\" successfully" Apr 30 04:42:55.885425 containerd[1811]: time="2025-04-30T04:42:55.885411200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 04:42:55.885463 containerd[1811]: time="2025-04-30T04:42:55.885438537Z" level=info msg="RemovePodSandbox \"f5c0743b672c673bbdc2ab3488fa93d79bb79a1c08df50de94bb29d833c6d3bb\" returns successfully" Apr 30 04:43:16.347459 systemd[1]: Started sshd@10-147.75.90.169:22-218.92.0.103:10444.service - OpenSSH per-connection server daemon (218.92.0.103:10444). Apr 30 04:43:17.360306 sshd[7291]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.103 user=root Apr 30 04:43:19.505656 sshd[7289]: PAM: Permission denied for root from 218.92.0.103 Apr 30 04:43:19.783290 sshd[7292]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.103 user=root Apr 30 04:43:21.537503 sshd[7289]: PAM: Permission denied for root from 218.92.0.103 Apr 30 04:43:21.814431 sshd[7293]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.103 user=root Apr 30 04:43:23.177156 sshd[7289]: PAM: Permission denied for root from 218.92.0.103 Apr 30 04:43:23.315443 sshd[7289]: Received disconnect from 218.92.0.103 port 10444:11: [preauth] Apr 30 04:43:23.315443 sshd[7289]: Disconnected from authenticating user root 218.92.0.103 port 10444 [preauth] Apr 30 04:43:23.318891 systemd[1]: sshd@10-147.75.90.169:22-218.92.0.103:10444.service: Deactivated successfully. Apr 30 04:43:28.244556 systemd[1]: Started sshd@11-147.75.90.169:22-218.92.0.158:30403.service - OpenSSH per-connection server daemon (218.92.0.158:30403). 
Apr 30 04:43:29.425537 sshd[7327]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:43:31.550583 sshd[7325]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:43:31.878067 sshd[7346]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:43:33.611447 sshd[7325]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:43:33.935453 sshd[7347]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:43:35.277024 sshd[7325]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:43:35.438948 sshd[7325]: Received disconnect from 218.92.0.158 port 30403:11: [preauth] Apr 30 04:43:35.438948 sshd[7325]: Disconnected from authenticating user root 218.92.0.158 port 30403 [preauth] Apr 30 04:43:35.440463 systemd[1]: sshd@11-147.75.90.169:22-218.92.0.158:30403.service: Deactivated successfully. Apr 30 04:44:11.068451 systemd[1]: Started sshd@12-147.75.90.169:22-27.254.192.185:38324.service - OpenSSH per-connection server daemon (27.254.192.185:38324). Apr 30 04:44:12.136201 sshd[7442]: Invalid user admin from 27.254.192.185 port 38324 Apr 30 04:44:12.334303 sshd[7442]: Received disconnect from 27.254.192.185 port 38324:11: Bye Bye [preauth] Apr 30 04:44:12.334303 sshd[7442]: Disconnected from invalid user admin 27.254.192.185 port 38324 [preauth] Apr 30 04:44:12.339132 systemd[1]: sshd@12-147.75.90.169:22-27.254.192.185:38324.service: Deactivated successfully. Apr 30 04:44:39.985490 systemd[1]: Started sshd@13-147.75.90.169:22-110.10.129.56:34418.service - OpenSSH per-connection server daemon (110.10.129.56:34418). Apr 30 04:44:40.767974 sshd[7538]: Invalid user ubuntu from 110.10.129.56 port 34418 Apr 30 04:44:40.906657 sshd[7538]: Received disconnect from 110.10.129.56 port 34418:11: Bye Bye [preauth] Apr 30 04:44:40.906657 sshd[7538]: Disconnected from invalid user ubuntu 110.10.129.56 port 34418 [preauth] Apr 30 04:44:40.909896 systemd[1]: sshd@13-147.75.90.169:22-110.10.129.56:34418.service: Deactivated successfully. Apr 30 04:45:11.746232 systemd[1]: Started sshd@14-147.75.90.169:22-203.205.37.233:38572.service - OpenSSH per-connection server daemon (203.205.37.233:38572). Apr 30 04:45:12.812696 sshd[7598]: Invalid user jason from 203.205.37.233 port 38572 Apr 30 04:45:13.008440 sshd[7598]: Received disconnect from 203.205.37.233 port 38572:11: Bye Bye [preauth] Apr 30 04:45:13.008440 sshd[7598]: Disconnected from invalid user jason 203.205.37.233 port 38572 [preauth] Apr 30 04:45:13.011698 systemd[1]: sshd@14-147.75.90.169:22-203.205.37.233:38572.service: Deactivated successfully. Apr 30 04:45:35.944396 systemd[1]: Started sshd@15-147.75.90.169:22-218.92.0.158:14790.service - OpenSSH per-connection server daemon (218.92.0.158:14790). 
Apr 30 04:45:36.941549 sshd[7679]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:45:38.836346 sshd[7677]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:45:39.109466 sshd[7680]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:45:40.612637 sshd[7677]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:45:40.885599 sshd[7681]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:45:42.328734 sshd[7677]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:45:42.464525 sshd[7677]: Received disconnect from 218.92.0.158 port 14790:11: [preauth] Apr 30 04:45:42.464525 sshd[7677]: Disconnected from authenticating user root 218.92.0.158 port 14790 [preauth] Apr 30 04:45:42.467993 systemd[1]: sshd@15-147.75.90.169:22-218.92.0.158:14790.service: Deactivated successfully. Apr 30 04:45:55.010544 systemd[1]: Started sshd@16-147.75.90.169:22-101.226.180.6:64428.service - OpenSSH per-connection server daemon (101.226.180.6:64428). Apr 30 04:45:55.751808 sshd[7702]: Invalid user admin from 101.226.180.6 port 64428 Apr 30 04:45:55.883280 sshd[7702]: Received disconnect from 101.226.180.6 port 64428:11: Bye Bye [preauth] Apr 30 04:45:55.883280 sshd[7702]: Disconnected from invalid user admin 101.226.180.6 port 64428 [preauth] Apr 30 04:45:55.886597 systemd[1]: sshd@16-147.75.90.169:22-101.226.180.6:64428.service: Deactivated successfully. Apr 30 04:46:54.580123 systemd[1]: Started sshd@17-147.75.90.169:22-14.103.118.120:44946.service - OpenSSH per-connection server daemon (14.103.118.120:44946). Apr 30 04:46:56.469292 sshd[7837]: Invalid user raquel from 14.103.118.120 port 44946 Apr 30 04:46:56.605633 sshd[7837]: Received disconnect from 14.103.118.120 port 44946:11: Bye Bye [preauth] Apr 30 04:46:56.605633 sshd[7837]: Disconnected from invalid user raquel 14.103.118.120 port 44946 [preauth] Apr 30 04:46:56.608967 systemd[1]: sshd@17-147.75.90.169:22-14.103.118.120:44946.service: Deactivated successfully. Apr 30 04:47:43.128181 systemd[1]: Started sshd@18-147.75.90.169:22-218.92.0.158:41422.service - OpenSSH per-connection server daemon (218.92.0.158:41422). Apr 30 04:47:44.179062 sshd[7983]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:47:45.977553 sshd[7981]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:47:46.252981 sshd[7984]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:47:48.328294 sshd[7981]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:47:48.604583 sshd[7985]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Apr 30 04:47:49.537514 systemd[1]: Started sshd@19-147.75.90.169:22-45.79.207.71:56827.service - OpenSSH per-connection server daemon (45.79.207.71:56827). Apr 30 04:47:49.745685 sshd[7987]: kex_exchange_identification: read: Connection reset by peer Apr 30 04:47:49.745685 sshd[7987]: Connection reset by 45.79.207.71 port 56827 Apr 30 04:47:49.748849 systemd[1]: sshd@19-147.75.90.169:22-45.79.207.71:56827.service: Deactivated successfully. 
Apr 30 04:47:50.955645 sshd[7981]: PAM: Permission denied for root from 218.92.0.158 Apr 30 04:47:51.092921 sshd[7981]: Received disconnect from 218.92.0.158 port 41422:11: [preauth] Apr 30 04:47:51.092921 sshd[7981]: Disconnected from authenticating user root 218.92.0.158 port 41422 [preauth] Apr 30 04:47:51.096421 systemd[1]: sshd@18-147.75.90.169:22-218.92.0.158:41422.service: Deactivated successfully. Apr 30 04:47:51.209819 update_engine[1806]: I20250430 04:47:51.209532 1806 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 04:47:51.209819 update_engine[1806]: I20250430 04:47:51.209640 1806 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 04:47:51.210963 update_engine[1806]: I20250430 04:47:51.210018 1806 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 04:47:51.211110 update_engine[1806]: I20250430 04:47:51.211058 1806 omaha_request_params.cc:62] Current group set to lts Apr 30 04:47:51.211389 update_engine[1806]: I20250430 04:47:51.211300 1806 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 04:47:51.211389 update_engine[1806]: I20250430 04:47:51.211337 1806 update_attempter.cc:643] Scheduling an action processor start. Apr 30 04:47:51.211389 update_engine[1806]: I20250430 04:47:51.211377 1806 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 04:47:51.211734 update_engine[1806]: I20250430 04:47:51.211462 1806 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 04:47:51.211734 update_engine[1806]: I20250430 04:47:51.211628 1806 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 04:47:51.211734 update_engine[1806]: I20250430 04:47:51.211660 1806 omaha_request_action.cc:272] Request: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: Apr 30 04:47:51.211734 update_engine[1806]: I20250430 04:47:51.211678 1806 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 04:47:51.212741 locksmithd[1840]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 04:47:51.214397 update_engine[1806]: I20250430 04:47:51.214337 1806 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 04:47:51.214547 update_engine[1806]: I20250430 04:47:51.214502 1806 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 04:47:51.218600 update_engine[1806]: E20250430 04:47:51.218557 1806 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 04:47:51.218600 update_engine[1806]: I20250430 04:47:51.218592 1806 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 04:47:56.685491 systemd[1]: Started sshd@20-147.75.90.169:22-101.226.180.6:61850.service - OpenSSH per-connection server daemon (101.226.180.6:61850). 
Apr 30 04:47:57.417450 sshd[7997]: Invalid user ubuntu from 101.226.180.6 port 61850 Apr 30 04:47:57.548876 sshd[7997]: Received disconnect from 101.226.180.6 port 61850:11: Bye Bye [preauth] Apr 30 04:47:57.548876 sshd[7997]: Disconnected from invalid user ubuntu 101.226.180.6 port 61850 [preauth] Apr 30 04:47:57.553697 systemd[1]: sshd@20-147.75.90.169:22-101.226.180.6:61850.service: Deactivated successfully. Apr 30 04:48:01.191866 update_engine[1806]: I20250430 04:48:01.191691 1806 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 04:48:01.192762 update_engine[1806]: I20250430 04:48:01.192248 1806 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 04:48:01.192927 update_engine[1806]: I20250430 04:48:01.192800 1806 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 04:48:01.337462 update_engine[1806]: E20250430 04:48:01.337298 1806 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 04:48:01.337726 update_engine[1806]: I20250430 04:48:01.337479 1806 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 04:48:05.073027 systemd[1]: Started sshd@21-147.75.90.169:22-139.178.68.195:36330.service - OpenSSH per-connection server daemon (139.178.68.195:36330). Apr 30 04:48:05.126488 sshd[8053]: Accepted publickey for core from 139.178.68.195 port 36330 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:05.130177 sshd[8053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:05.142241 systemd-logind[1801]: New session 12 of user core. Apr 30 04:48:05.153778 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 04:48:05.254146 sshd[8053]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:05.255908 systemd[1]: sshd@21-147.75.90.169:22-139.178.68.195:36330.service: Deactivated successfully. Apr 30 04:48:05.256898 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 04:48:05.257704 systemd-logind[1801]: Session 12 logged out. Waiting for processes to exit. Apr 30 04:48:05.258255 systemd-logind[1801]: Removed session 12. Apr 30 04:48:10.296449 systemd[1]: Started sshd@22-147.75.90.169:22-139.178.68.195:36342.service - OpenSSH per-connection server daemon (139.178.68.195:36342). Apr 30 04:48:10.333620 sshd[8088]: Accepted publickey for core from 139.178.68.195 port 36342 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:10.334485 sshd[8088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:10.337611 systemd-logind[1801]: New session 13 of user core. Apr 30 04:48:10.349509 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 04:48:10.435073 sshd[8088]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:10.436674 systemd[1]: sshd@22-147.75.90.169:22-139.178.68.195:36342.service: Deactivated successfully. Apr 30 04:48:10.437625 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 04:48:10.438320 systemd-logind[1801]: Session 13 logged out. Waiting for processes to exit. Apr 30 04:48:10.439058 systemd-logind[1801]: Removed session 13. 
Apr 30 04:48:11.182566 update_engine[1806]: I20250430 04:48:11.182376 1806 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 04:48:11.183443 update_engine[1806]: I20250430 04:48:11.182935 1806 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 04:48:11.183674 update_engine[1806]: I20250430 04:48:11.183492 1806 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 04:48:11.184252 update_engine[1806]: E20250430 04:48:11.184139 1806 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 04:48:11.184436 update_engine[1806]: I20250430 04:48:11.184304 1806 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 04:48:15.450431 systemd[1]: Started sshd@23-147.75.90.169:22-139.178.68.195:36682.service - OpenSSH per-connection server daemon (139.178.68.195:36682). Apr 30 04:48:15.481764 sshd[8118]: Accepted publickey for core from 139.178.68.195 port 36682 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:15.482458 sshd[8118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:15.485018 systemd-logind[1801]: New session 14 of user core. Apr 30 04:48:15.497383 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 04:48:15.584870 sshd[8118]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:15.607544 systemd[1]: sshd@23-147.75.90.169:22-139.178.68.195:36682.service: Deactivated successfully. Apr 30 04:48:15.611557 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 04:48:15.615028 systemd-logind[1801]: Session 14 logged out. Waiting for processes to exit. Apr 30 04:48:15.626092 systemd[1]: Started sshd@24-147.75.90.169:22-139.178.68.195:36698.service - OpenSSH per-connection server daemon (139.178.68.195:36698). Apr 30 04:48:15.628537 systemd-logind[1801]: Removed session 14. Apr 30 04:48:15.689058 sshd[8145]: Accepted publickey for core from 139.178.68.195 port 36698 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:15.689883 sshd[8145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:15.692718 systemd-logind[1801]: New session 15 of user core. Apr 30 04:48:15.704385 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 04:48:15.808034 sshd[8145]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:15.822211 systemd[1]: sshd@24-147.75.90.169:22-139.178.68.195:36698.service: Deactivated successfully. Apr 30 04:48:15.823216 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 04:48:15.824070 systemd-logind[1801]: Session 15 logged out. Waiting for processes to exit. Apr 30 04:48:15.824808 systemd[1]: Started sshd@25-147.75.90.169:22-139.178.68.195:36712.service - OpenSSH per-connection server daemon (139.178.68.195:36712). Apr 30 04:48:15.825306 systemd-logind[1801]: Removed session 15. Apr 30 04:48:15.857812 sshd[8169]: Accepted publickey for core from 139.178.68.195 port 36712 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:15.858566 sshd[8169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:15.861654 systemd-logind[1801]: New session 16 of user core. Apr 30 04:48:15.880442 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 30 04:48:16.019751 sshd[8169]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:16.021689 systemd[1]: sshd@25-147.75.90.169:22-139.178.68.195:36712.service: Deactivated successfully. Apr 30 04:48:16.022875 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 04:48:16.023864 systemd-logind[1801]: Session 16 logged out. Waiting for processes to exit. Apr 30 04:48:16.024695 systemd-logind[1801]: Removed session 16. Apr 30 04:48:21.042568 systemd[1]: Started sshd@26-147.75.90.169:22-139.178.68.195:36724.service - OpenSSH per-connection server daemon (139.178.68.195:36724). Apr 30 04:48:21.075740 sshd[8195]: Accepted publickey for core from 139.178.68.195 port 36724 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:21.076421 sshd[8195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:21.078991 systemd-logind[1801]: New session 17 of user core. Apr 30 04:48:21.094358 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 04:48:21.180149 sshd[8195]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:21.181870 systemd[1]: sshd@26-147.75.90.169:22-139.178.68.195:36724.service: Deactivated successfully. Apr 30 04:48:21.182155 update_engine[1806]: I20250430 04:48:21.182107 1806 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 04:48:21.182311 update_engine[1806]: I20250430 04:48:21.182219 1806 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 04:48:21.182380 update_engine[1806]: I20250430 04:48:21.182337 1806 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 04:48:21.182831 update_engine[1806]: E20250430 04:48:21.182789 1806 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 04:48:21.182831 update_engine[1806]: I20250430 04:48:21.182816 1806 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 04:48:21.182831 update_engine[1806]: I20250430 04:48:21.182822 1806 omaha_request_action.cc:617] Omaha request response: Apr 30 04:48:21.182818 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 04:48:21.182938 update_engine[1806]: E20250430 04:48:21.182862 1806 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 04:48:21.182938 update_engine[1806]: I20250430 04:48:21.182875 1806 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 04:48:21.182938 update_engine[1806]: I20250430 04:48:21.182879 1806 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 04:48:21.182938 update_engine[1806]: I20250430 04:48:21.182881 1806 update_attempter.cc:306] Processing Done. Apr 30 04:48:21.182938 update_engine[1806]: E20250430 04:48:21.182889 1806 update_attempter.cc:619] Update failed. Apr 30 04:48:21.182938 update_engine[1806]: I20250430 04:48:21.182893 1806 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 04:48:21.182938 update_engine[1806]: I20250430 04:48:21.182895 1806 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 04:48:21.182938 update_engine[1806]: I20250430 04:48:21.182898 1806 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 30 04:48:21.182938 update_engine[1806]: I20250430 04:48:21.182929 1806 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 04:48:21.183173 update_engine[1806]: I20250430 04:48:21.182945 1806 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 04:48:21.183173 update_engine[1806]: I20250430 04:48:21.182948 1806 omaha_request_action.cc:272] Request: Apr 30 04:48:21.183173 update_engine[1806]: Apr 30 04:48:21.183173 update_engine[1806]: Apr 30 04:48:21.183173 update_engine[1806]: Apr 30 04:48:21.183173 update_engine[1806]: Apr 30 04:48:21.183173 update_engine[1806]: Apr 30 04:48:21.183173 update_engine[1806]: Apr 30 04:48:21.183173 update_engine[1806]: I20250430 04:48:21.182951 1806 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 04:48:21.183173 update_engine[1806]: I20250430 04:48:21.183025 1806 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 04:48:21.183173 update_engine[1806]: I20250430 04:48:21.183105 1806 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 04:48:21.183355 locksmithd[1840]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 04:48:21.183526 systemd-logind[1801]: Session 17 logged out. Waiting for processes to exit. Apr 30 04:48:21.183573 update_engine[1806]: E20250430 04:48:21.183558 1806 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 04:48:21.183597 update_engine[1806]: I20250430 04:48:21.183583 1806 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 04:48:21.183597 update_engine[1806]: I20250430 04:48:21.183588 1806 omaha_request_action.cc:617] Omaha request response: Apr 30 04:48:21.183597 update_engine[1806]: I20250430 04:48:21.183591 1806 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 04:48:21.183597 update_engine[1806]: I20250430 04:48:21.183594 1806 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 04:48:21.183682 update_engine[1806]: I20250430 04:48:21.183597 1806 update_attempter.cc:306] Processing Done. Apr 30 04:48:21.183682 update_engine[1806]: I20250430 04:48:21.183601 1806 update_attempter.cc:310] Error event sent. Apr 30 04:48:21.183682 update_engine[1806]: I20250430 04:48:21.183607 1806 update_check_scheduler.cc:74] Next update check in 47m25s Apr 30 04:48:21.183770 locksmithd[1840]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 04:48:21.184075 systemd-logind[1801]: Removed session 17. Apr 30 04:48:26.197183 systemd[1]: Started sshd@27-147.75.90.169:22-139.178.68.195:35084.service - OpenSSH per-connection server daemon (139.178.68.195:35084). Apr 30 04:48:26.227621 sshd[8226]: Accepted publickey for core from 139.178.68.195 port 35084 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:26.228391 sshd[8226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:26.231209 systemd-logind[1801]: New session 18 of user core. Apr 30 04:48:26.251538 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 04:48:26.338698 sshd[8226]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:26.340495 systemd[1]: sshd@27-147.75.90.169:22-139.178.68.195:35084.service: Deactivated successfully. 
Apr 30 04:48:26.341500 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 04:48:26.342270 systemd-logind[1801]: Session 18 logged out. Waiting for processes to exit. Apr 30 04:48:26.343028 systemd-logind[1801]: Removed session 18. Apr 30 04:48:31.364414 systemd[1]: Started sshd@28-147.75.90.169:22-139.178.68.195:35096.service - OpenSSH per-connection server daemon (139.178.68.195:35096). Apr 30 04:48:31.394770 sshd[8280]: Accepted publickey for core from 139.178.68.195 port 35096 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:31.395547 sshd[8280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:31.398091 systemd-logind[1801]: New session 19 of user core. Apr 30 04:48:31.415561 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 04:48:31.502896 sshd[8280]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:31.504530 systemd[1]: sshd@28-147.75.90.169:22-139.178.68.195:35096.service: Deactivated successfully. Apr 30 04:48:31.505524 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 04:48:31.506253 systemd-logind[1801]: Session 19 logged out. Waiting for processes to exit. Apr 30 04:48:31.506929 systemd-logind[1801]: Removed session 19. Apr 30 04:48:36.544528 systemd[1]: Started sshd@29-147.75.90.169:22-139.178.68.195:53248.service - OpenSSH per-connection server daemon (139.178.68.195:53248). Apr 30 04:48:36.591317 sshd[8347]: Accepted publickey for core from 139.178.68.195 port 53248 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:36.592283 sshd[8347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:36.595860 systemd-logind[1801]: New session 20 of user core. Apr 30 04:48:36.605400 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 04:48:36.696405 sshd[8347]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:36.697878 systemd[1]: sshd@29-147.75.90.169:22-139.178.68.195:53248.service: Deactivated successfully. Apr 30 04:48:36.698795 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 04:48:36.699560 systemd-logind[1801]: Session 20 logged out. Waiting for processes to exit. Apr 30 04:48:36.700104 systemd-logind[1801]: Removed session 20. Apr 30 04:48:41.716593 systemd[1]: Started sshd@30-147.75.90.169:22-139.178.68.195:53250.service - OpenSSH per-connection server daemon (139.178.68.195:53250). Apr 30 04:48:41.746527 sshd[8375]: Accepted publickey for core from 139.178.68.195 port 53250 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:41.747192 sshd[8375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:41.749736 systemd-logind[1801]: New session 21 of user core. Apr 30 04:48:41.761560 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 04:48:41.845052 sshd[8375]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:41.856081 systemd[1]: sshd@30-147.75.90.169:22-139.178.68.195:53250.service: Deactivated successfully. Apr 30 04:48:41.856962 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 04:48:41.857708 systemd-logind[1801]: Session 21 logged out. Waiting for processes to exit. Apr 30 04:48:41.858437 systemd[1]: Started sshd@31-147.75.90.169:22-139.178.68.195:53258.service - OpenSSH per-connection server daemon (139.178.68.195:53258). Apr 30 04:48:41.859037 systemd-logind[1801]: Removed session 21. 
Apr 30 04:48:41.898696 sshd[8401]: Accepted publickey for core from 139.178.68.195 port 53258 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:41.899522 sshd[8401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:41.902642 systemd-logind[1801]: New session 22 of user core. Apr 30 04:48:41.922475 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 04:48:42.076279 sshd[8401]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:42.094090 systemd[1]: sshd@31-147.75.90.169:22-139.178.68.195:53258.service: Deactivated successfully. Apr 30 04:48:42.095030 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 04:48:42.095881 systemd-logind[1801]: Session 22 logged out. Waiting for processes to exit. Apr 30 04:48:42.096730 systemd[1]: Started sshd@32-147.75.90.169:22-139.178.68.195:53270.service - OpenSSH per-connection server daemon (139.178.68.195:53270). Apr 30 04:48:42.097298 systemd-logind[1801]: Removed session 22. Apr 30 04:48:42.150521 sshd[8424]: Accepted publickey for core from 139.178.68.195 port 53270 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:42.151825 sshd[8424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:42.156253 systemd-logind[1801]: New session 23 of user core. Apr 30 04:48:42.163420 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 04:48:43.501689 sshd[8424]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:43.517343 systemd[1]: sshd@32-147.75.90.169:22-139.178.68.195:53270.service: Deactivated successfully. Apr 30 04:48:43.518536 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 04:48:43.519475 systemd-logind[1801]: Session 23 logged out. Waiting for processes to exit. Apr 30 04:48:43.520475 systemd[1]: Started sshd@33-147.75.90.169:22-139.178.68.195:53282.service - OpenSSH per-connection server daemon (139.178.68.195:53282). Apr 30 04:48:43.521138 systemd-logind[1801]: Removed session 23. Apr 30 04:48:43.563191 sshd[8454]: Accepted publickey for core from 139.178.68.195 port 53282 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:43.564253 sshd[8454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:43.567779 systemd-logind[1801]: New session 24 of user core. Apr 30 04:48:43.579503 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 04:48:43.759903 sshd[8454]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:43.781293 systemd[1]: sshd@33-147.75.90.169:22-139.178.68.195:53282.service: Deactivated successfully. Apr 30 04:48:43.782233 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 04:48:43.782652 systemd-logind[1801]: Session 24 logged out. Waiting for processes to exit. Apr 30 04:48:43.783719 systemd[1]: Started sshd@34-147.75.90.169:22-139.178.68.195:53298.service - OpenSSH per-connection server daemon (139.178.68.195:53298). Apr 30 04:48:43.784291 systemd-logind[1801]: Removed session 24. Apr 30 04:48:43.814373 sshd[8483]: Accepted publickey for core from 139.178.68.195 port 53298 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:43.815083 sshd[8483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:43.817833 systemd-logind[1801]: New session 25 of user core. Apr 30 04:48:43.836418 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 30 04:48:43.975643 sshd[8483]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:43.977337 systemd[1]: sshd@34-147.75.90.169:22-139.178.68.195:53298.service: Deactivated successfully. Apr 30 04:48:43.978350 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 04:48:43.979127 systemd-logind[1801]: Session 25 logged out. Waiting for processes to exit. Apr 30 04:48:43.979863 systemd-logind[1801]: Removed session 25. Apr 30 04:48:48.995636 systemd[1]: Started sshd@35-147.75.90.169:22-139.178.68.195:47386.service - OpenSSH per-connection server daemon (139.178.68.195:47386). Apr 30 04:48:49.026465 sshd[8513]: Accepted publickey for core from 139.178.68.195 port 47386 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:49.027130 sshd[8513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:49.029756 systemd-logind[1801]: New session 26 of user core. Apr 30 04:48:49.045409 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 04:48:49.128596 sshd[8513]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:49.130167 systemd[1]: sshd@35-147.75.90.169:22-139.178.68.195:47386.service: Deactivated successfully. Apr 30 04:48:49.131077 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 04:48:49.131865 systemd-logind[1801]: Session 26 logged out. Waiting for processes to exit. Apr 30 04:48:49.132469 systemd-logind[1801]: Removed session 26. Apr 30 04:48:54.146661 systemd[1]: Started sshd@36-147.75.90.169:22-139.178.68.195:47390.service - OpenSSH per-connection server daemon (139.178.68.195:47390). Apr 30 04:48:54.177382 sshd[8541]: Accepted publickey for core from 139.178.68.195 port 47390 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:54.178203 sshd[8541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:54.181097 systemd-logind[1801]: New session 27 of user core. Apr 30 04:48:54.204726 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 04:48:54.297066 sshd[8541]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:54.299924 systemd[1]: sshd@36-147.75.90.169:22-139.178.68.195:47390.service: Deactivated successfully. Apr 30 04:48:54.301619 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 04:48:54.303005 systemd-logind[1801]: Session 27 logged out. Waiting for processes to exit. Apr 30 04:48:54.304211 systemd-logind[1801]: Removed session 27. Apr 30 04:48:59.323540 systemd[1]: Started sshd@37-147.75.90.169:22-139.178.68.195:43208.service - OpenSSH per-connection server daemon (139.178.68.195:43208). Apr 30 04:48:59.351151 sshd[8587]: Accepted publickey for core from 139.178.68.195 port 43208 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY Apr 30 04:48:59.351935 sshd[8587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 04:48:59.354575 systemd-logind[1801]: New session 28 of user core. Apr 30 04:48:59.376440 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 04:48:59.461165 sshd[8587]: pam_unix(sshd:session): session closed for user core Apr 30 04:48:59.462910 systemd[1]: sshd@37-147.75.90.169:22-139.178.68.195:43208.service: Deactivated successfully. Apr 30 04:48:59.463870 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 04:48:59.464653 systemd-logind[1801]: Session 28 logged out. Waiting for processes to exit. Apr 30 04:48:59.465249 systemd-logind[1801]: Removed session 28.