May 13 23:58:23.468260 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 13 23:58:23.468291 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:58:23.468297 kernel: BIOS-provided physical RAM map:
May 13 23:58:23.468303 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
May 13 23:58:23.468321 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
May 13 23:58:23.468325 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
May 13 23:58:23.468330 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
May 13 23:58:23.468334 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
May 13 23:58:23.468338 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819c9fff] usable
May 13 23:58:23.468342 kernel: BIOS-e820: [mem 0x00000000819ca000-0x00000000819cafff] ACPI NVS
May 13 23:58:23.468346 kernel: BIOS-e820: [mem 0x00000000819cb000-0x00000000819cbfff] reserved
May 13 23:58:23.468350 kernel: BIOS-e820: [mem 0x00000000819cc000-0x000000008afccfff] usable
May 13 23:58:23.468356 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
May 13 23:58:23.468360 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
May 13 23:58:23.468365 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
May 13 23:58:23.468370 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
May 13 23:58:23.468374 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
May 13 23:58:23.468380 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
May 13 23:58:23.468384 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 23:58:23.468389 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
May 13 23:58:23.468394 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
May 13 23:58:23.468398 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 13 23:58:23.468403 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
May 13 23:58:23.468408 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
May 13 23:58:23.468412 kernel: NX (Execute Disable) protection: active
May 13 23:58:23.468417 kernel: APIC: Static calls initialized
May 13 23:58:23.468421 kernel: SMBIOS 3.2.1 present.
May 13 23:58:23.468426 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
May 13 23:58:23.468432 kernel: tsc: Detected 3400.000 MHz processor
May 13 23:58:23.468437 kernel: tsc: Detected 3399.906 MHz TSC
May 13 23:58:23.468441 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:58:23.468447 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:58:23.468451 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
May 13 23:58:23.468456 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
May 13 23:58:23.468461 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:58:23.468466 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
May 13 23:58:23.468471 kernel: Using GB pages for direct mapping
May 13 23:58:23.468476 kernel: ACPI: Early table checksum verification disabled
May 13 23:58:23.468481 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
May 13 23:58:23.468487 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
May 13 23:58:23.468493 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
May 13 23:58:23.468498 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
May 13 23:58:23.468503 kernel: ACPI: FACS 0x000000008C66CF80 000040
May 13 23:58:23.468508 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
May 13 23:58:23.468515 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
May 13 23:58:23.468520 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
May 13 23:58:23.468525 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
May 13 23:58:23.468530 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
May 13 23:58:23.468535 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
May 13 23:58:23.468540 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
May 13 23:58:23.468545 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
May 13 23:58:23.468551 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
May 13 23:58:23.468556 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
May 13 23:58:23.468561 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
May 13 23:58:23.468566 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
May 13 23:58:23.468571 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
May 13 23:58:23.468576 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
May 13 23:58:23.468581 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
May 13 23:58:23.468586 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
May 13 23:58:23.468591 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
May 13 23:58:23.468597 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
May 13 23:58:23.468602 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
May 13 23:58:23.468607 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
May 13 23:58:23.468612 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
May 13 23:58:23.468617 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
May 13 23:58:23.468622 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
May 13 23:58:23.468627 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
May 13 23:58:23.468632 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
May 13 23:58:23.468637 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
May 13 23:58:23.468643 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
May 13 23:58:23.468648 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
May 13 23:58:23.468653 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
May 13 23:58:23.468658 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
May 13 23:58:23.468663 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
May 13 23:58:23.468668 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
May 13 23:58:23.468673 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
May 13 23:58:23.468678 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
May 13 23:58:23.468685 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
May 13 23:58:23.468690 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
May 13 23:58:23.468695 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
May 13 23:58:23.468700 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
May 13 23:58:23.468705 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
May 13 23:58:23.468710 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
May 13 23:58:23.468715 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
May 13 23:58:23.468720 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
May 13 23:58:23.468725 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
May 13 23:58:23.468730 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
May 13 23:58:23.468736 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
May 13 23:58:23.468741 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
May 13 23:58:23.468746 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
May 13 23:58:23.468751 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
May 13 23:58:23.468755 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
May 13 23:58:23.468760 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
May 13 23:58:23.468766 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
May 13 23:58:23.468770 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
May 13 23:58:23.468776 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
May 13 23:58:23.468782 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
May 13 23:58:23.468787 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
May 13 23:58:23.468792 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
May 13 23:58:23.468797 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
May 13 23:58:23.468802 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
May 13 23:58:23.468807 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
May 13 23:58:23.468812 kernel: No NUMA configuration found
May 13 23:58:23.468817 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
May 13 23:58:23.468822 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
May 13 23:58:23.468828 kernel: Zone ranges:
May 13 23:58:23.468833 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:58:23.468838 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 23:58:23.468843 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
May 13 23:58:23.468848 kernel: Movable zone start for each node
May 13 23:58:23.468853 kernel: Early memory node ranges
May 13 23:58:23.468858 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
May 13 23:58:23.468863 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
May 13 23:58:23.468868 kernel: node 0: [mem 0x0000000040400000-0x00000000819c9fff]
May 13 23:58:23.468873 kernel: node 0: [mem 0x00000000819cc000-0x000000008afccfff]
May 13 23:58:23.468879 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
May 13 23:58:23.468884 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
May 13 23:58:23.468889 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
May 13 23:58:23.468897 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
May 13 23:58:23.468903 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:58:23.468909 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
May 13 23:58:23.468914 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
May 13 23:58:23.468920 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
May 13 23:58:23.468926 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
May 13 23:58:23.468931 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
May 13 23:58:23.468937 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
May 13 23:58:23.468942 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
May 13 23:58:23.468948 kernel: ACPI: PM-Timer IO Port: 0x1808
May 13 23:58:23.468953 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 13 23:58:23.468958 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 13 23:58:23.468964 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 13 23:58:23.468969 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 13 23:58:23.468975 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 13 23:58:23.468981 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 13 23:58:23.468986 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 13 23:58:23.468991 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 13 23:58:23.468997 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 13 23:58:23.469002 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 13 23:58:23.469007 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 13 23:58:23.469012 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 13 23:58:23.469018 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 13 23:58:23.469024 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 13 23:58:23.469030 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 13 23:58:23.469035 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 13 23:58:23.469040 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
May 13 23:58:23.469045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 23:58:23.469051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:58:23.469056 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:58:23.469062 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 23:58:23.469067 kernel: TSC deadline timer available
May 13 23:58:23.469073 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
May 13 23:58:23.469079 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
May 13 23:58:23.469084 kernel: Booting paravirtualized kernel on bare hardware
May 13 23:58:23.469090 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:58:23.469095 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
May 13 23:58:23.469101 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
May 13 23:58:23.469106 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
May 13 23:58:23.469111 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
May 13 23:58:23.469117 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:58:23.469124 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:58:23.469130 kernel: random: crng init done
May 13 23:58:23.469135 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
May 13 23:58:23.469140 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
May 13 23:58:23.469146 kernel: Fallback order for Node 0: 0
May 13 23:58:23.469151 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
May 13 23:58:23.469156 kernel: Policy zone: Normal
May 13 23:58:23.469162 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:58:23.469168 kernel: software IO TLB: area num 16.
May 13 23:58:23.469174 kernel: Memory: 32716212K/33452980K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 736508K reserved, 0K cma-reserved)
May 13 23:58:23.469179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
May 13 23:58:23.469185 kernel: ftrace: allocating 37993 entries in 149 pages
May 13 23:58:23.469190 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:58:23.469196 kernel: Dynamic Preempt: voluntary
May 13 23:58:23.469201 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:58:23.469207 kernel: rcu: RCU event tracing is enabled.
May 13 23:58:23.469212 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
May 13 23:58:23.469218 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:58:23.469224 kernel: Rude variant of Tasks RCU enabled.
May 13 23:58:23.469229 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:58:23.469235 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:58:23.469240 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
May 13 23:58:23.469245 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
May 13 23:58:23.469253 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:58:23.469259 kernel: Console: colour VGA+ 80x25
May 13 23:58:23.469264 kernel: printk: console [tty0] enabled
May 13 23:58:23.469269 kernel: printk: console [ttyS1] enabled
May 13 23:58:23.469276 kernel: ACPI: Core revision 20230628
May 13 23:58:23.469302 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
May 13 23:58:23.469322 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:58:23.469327 kernel: DMAR: Host address width 39
May 13 23:58:23.469333 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
May 13 23:58:23.469338 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
May 13 23:58:23.469344 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
May 13 23:58:23.469349 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
May 13 23:58:23.469355 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
May 13 23:58:23.469361 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
May 13 23:58:23.469367 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
May 13 23:58:23.469372 kernel: x2apic enabled
May 13 23:58:23.469377 kernel: APIC: Switched APIC routing to: cluster x2apic
May 13 23:58:23.469383 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
May 13 23:58:23.469388 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
May 13 23:58:23.469394 kernel: CPU0: Thermal monitoring enabled (TM1)
May 13 23:58:23.469399 kernel: process: using mwait in idle threads
May 13 23:58:23.469405 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 13 23:58:23.469411 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 13 23:58:23.469416 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:58:23.469421 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 13 23:58:23.469427 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 13 23:58:23.469432 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 13 23:58:23.469437 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 13 23:58:23.469442 kernel: RETBleed: Mitigation: Enhanced IBRS
May 13 23:58:23.469448 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 23:58:23.469453 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 23:58:23.469458 kernel: TAA: Mitigation: TSX disabled
May 13 23:58:23.469463 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
May 13 23:58:23.469470 kernel: SRBDS: Mitigation: Microcode
May 13 23:58:23.469475 kernel: GDS: Mitigation: Microcode
May 13 23:58:23.469480 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:58:23.469486 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:58:23.469491 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:58:23.469496 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 13 23:58:23.469501 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 13 23:58:23.469506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:58:23.469512 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 13 23:58:23.469517 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 13 23:58:23.469522 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
May 13 23:58:23.469529 kernel: Freeing SMP alternatives memory: 32K
May 13 23:58:23.469534 kernel: pid_max: default: 32768 minimum: 301
May 13 23:58:23.469539 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:58:23.469545 kernel: landlock: Up and running.
May 13 23:58:23.469550 kernel: SELinux: Initializing.
May 13 23:58:23.469555 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:58:23.469560 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:58:23.469566 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 13 23:58:23.469571 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 13 23:58:23.469577 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 13 23:58:23.469582 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
May 13 23:58:23.469589 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
May 13 23:58:23.469594 kernel: ... version: 4
May 13 23:58:23.469599 kernel: ... bit width: 48
May 13 23:58:23.469605 kernel: ... generic registers: 4
May 13 23:58:23.469610 kernel: ... value mask: 0000ffffffffffff
May 13 23:58:23.469615 kernel: ... max period: 00007fffffffffff
May 13 23:58:23.469621 kernel: ... fixed-purpose events: 3
May 13 23:58:23.469626 kernel: ... event mask: 000000070000000f
May 13 23:58:23.469631 kernel: signal: max sigframe size: 2032
May 13 23:58:23.469638 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
May 13 23:58:23.469643 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:58:23.469648 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:58:23.469654 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
May 13 23:58:23.469659 kernel: smp: Bringing up secondary CPUs ...
May 13 23:58:23.469665 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:58:23.469670 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
May 13 23:58:23.469676 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 13 23:58:23.469682 kernel: smp: Brought up 1 node, 16 CPUs
May 13 23:58:23.469687 kernel: smpboot: Max logical packages: 1
May 13 23:58:23.469693 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
May 13 23:58:23.469698 kernel: devtmpfs: initialized
May 13 23:58:23.469704 kernel: x86/mm: Memory block size: 128MB
May 13 23:58:23.469709 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819ca000-0x819cafff] (4096 bytes)
May 13 23:58:23.469715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
May 13 23:58:23.469720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:58:23.469725 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
May 13 23:58:23.469732 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:58:23.469737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:58:23.469742 kernel: audit: initializing netlink subsys (disabled)
May 13 23:58:23.469748 kernel: audit: type=2000 audit(1747180698.041:1): state=initialized audit_enabled=0 res=1
May 13 23:58:23.469753 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:58:23.469758 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:58:23.469763 kernel: cpuidle: using governor menu
May 13 23:58:23.469769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:58:23.469774 kernel: dca service started, version 1.12.1
May 13 23:58:23.469780 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 13 23:58:23.469786 kernel: PCI: Using configuration type 1 for base access
May 13 23:58:23.469791 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
May 13 23:58:23.469797 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:58:23.469802 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:58:23.469807 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:58:23.469813 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:58:23.469818 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:58:23.469823 kernel: ACPI: Added _OSI(Module Device)
May 13 23:58:23.469830 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:58:23.469835 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:58:23.469840 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:58:23.469846 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
May 13 23:58:23.469851 kernel: ACPI: Dynamic OEM Table Load:
May 13 23:58:23.469856 kernel: ACPI: SSDT 0xFFFF973D00E5E800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
May 13 23:58:23.469862 kernel: ACPI: Dynamic OEM Table Load:
May 13 23:58:23.469867 kernel: ACPI: SSDT 0xFFFF973D01E2F000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
May 13 23:58:23.469872 kernel: ACPI: Dynamic OEM Table Load:
May 13 23:58:23.469879 kernel: ACPI: SSDT 0xFFFF973D00E04D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
May 13 23:58:23.469884 kernel: ACPI: Dynamic OEM Table Load:
May 13 23:58:23.469889 kernel: ACPI: SSDT 0xFFFF973D01E2D000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
May 13 23:58:23.469895 kernel: ACPI: Dynamic OEM Table Load:
May 13 23:58:23.469900 kernel: ACPI: SSDT 0xFFFF973D00E76000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
May 13 23:58:23.469905 kernel: ACPI: Dynamic OEM Table Load:
May 13 23:58:23.469910 kernel: ACPI: SSDT 0xFFFF973D0156E000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
May 13 23:58:23.469916 kernel: ACPI: _OSC evaluated successfully for all CPUs
May 13 23:58:23.469921 kernel: ACPI: Interpreter enabled
May 13 23:58:23.469926 kernel: ACPI: PM: (supports S0 S5)
May 13 23:58:23.469933 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:58:23.469938 kernel: HEST: Enabling Firmware First mode for corrected errors.
May 13 23:58:23.469943 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
May 13 23:58:23.469949 kernel: HEST: Table parsing has been initialized.
May 13 23:58:23.469954 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
May 13 23:58:23.469960 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:58:23.469965 kernel: PCI: Using E820 reservations for host bridge windows
May 13 23:58:23.469970 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
May 13 23:58:23.469976 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
May 13 23:58:23.469982 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
May 13 23:58:23.469988 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
May 13 23:58:23.469993 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
May 13 23:58:23.469998 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
May 13 23:58:23.470004 kernel: ACPI: \_TZ_.FN00: New power resource
May 13 23:58:23.470009 kernel: ACPI: \_TZ_.FN01: New power resource
May 13 23:58:23.470014 kernel: ACPI: \_TZ_.FN02: New power resource
May 13 23:58:23.470020 kernel: ACPI: \_TZ_.FN03: New power resource
May 13 23:58:23.470025 kernel: ACPI: \_TZ_.FN04: New power resource
May 13 23:58:23.470031 kernel: ACPI: \PIN_: New power resource
May 13 23:58:23.470037 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
May 13 23:58:23.470109 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:58:23.470159 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
May 13 23:58:23.470207 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
May 13 23:58:23.470215 kernel: PCI host bridge to bus 0000:00
May 13 23:58:23.470267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 23:58:23.470351 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 23:58:23.470394 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 23:58:23.470435 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
May 13 23:58:23.470478 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
May 13 23:58:23.470519 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
May 13 23:58:23.470581 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
May 13 23:58:23.470642 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
May 13 23:58:23.470693 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
May 13 23:58:23.470746 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
May 13 23:58:23.470794 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
May 13 23:58:23.470847 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
May 13 23:58:23.470896 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
May 13 23:58:23.470950 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
May 13 23:58:23.470997 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
May 13 23:58:23.471045 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
May 13 23:58:23.471097 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
May 13 23:58:23.471144 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
May 13 23:58:23.471193 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
May 13 23:58:23.471243 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
May 13 23:58:23.471336 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 13 23:58:23.471391 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
May 13 23:58:23.471440 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 13 23:58:23.471492 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
May 13 23:58:23.471539 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
May 13 23:58:23.471587 kernel: pci 0000:00:16.0: PME# supported from D3hot
May 13 23:58:23.471641 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
May 13 23:58:23.471691 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
May 13 23:58:23.471746 kernel: pci 0000:00:16.1: PME# supported from D3hot
May 13 23:58:23.471799 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
May 13 23:58:23.471849 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
May 13 23:58:23.471897 kernel: pci 0000:00:16.4: PME# supported from D3hot
May 13 23:58:23.471950 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
May 13 23:58:23.471999 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
May 13 23:58:23.472047 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
May 13 23:58:23.472094 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
May 13 23:58:23.472141 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
May 13 23:58:23.472188 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
May 13 23:58:23.472236 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
May 13 23:58:23.472325 kernel: pci 0000:00:17.0: PME# supported from D3hot
May 13 23:58:23.472380 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
May 13 23:58:23.472432 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
May 13 23:58:23.472486 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
May 13 23:58:23.472535 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
May 13 23:58:23.472589 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
May 13 23:58:23.472638 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
May 13 23:58:23.472689 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
May 13 23:58:23.472738 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
May 13 23:58:23.472792 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
May 13 23:58:23.472842 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
May 13 23:58:23.472895 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
May 13 23:58:23.472942 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
May 13 23:58:23.472995 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
May 13 23:58:23.473049 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
May 13 23:58:23.473098 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
May 13 23:58:23.473146 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
May 13 23:58:23.473200 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
May 13 23:58:23.473252 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
May 13 23:58:23.473308 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
May 13 23:58:23.473359 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
May 13 23:58:23.473408 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
May 13 23:58:23.473458 kernel: pci 0000:01:00.0: PME# supported from D3cold
May 13 23:58:23.473509 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 13 23:58:23.473557 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 13 23:58:23.473613 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
May 13 23:58:23.473663 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
May 13 23:58:23.473713 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
May 13 23:58:23.473762 kernel: pci 0000:01:00.1: PME# supported from D3cold
May 13 23:58:23.473811 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
May 13 23:58:23.473862 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
May 13 23:58:23.473911 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 13 23:58:23.473960 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
May 13 23:58:23.474007 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
May 13 23:58:23.474056 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
May 13 23:58:23.474110 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
May 13 23:58:23.474159 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
May 13 23:58:23.474209 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
May 13 23:58:23.474263 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
May 13 23:58:23.474351 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
May 13 23:58:23.474401 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 13 23:58:23.474449 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
May 13 23:58:23.474499 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
May 13 23:58:23.474546 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
May 13 23:58:23.474600 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
May 13 23:58:23.474652 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
May 13 23:58:23.474702 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
May 13 23:58:23.474751 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
May 13 23:58:23.474799 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
May 13 23:58:23.474849 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
May 13 23:58:23.474898 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
May 13 23:58:23.474947 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
May 13 23:58:23.474998 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
May 13 23:58:23.475046 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
May 13 23:58:23.475104 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
May 13 23:58:23.475154 kernel: pci 0000:06:00.0: enabling Extended Tags
May 13 23:58:23.475204 kernel: pci 0000:06:00.0: supports D1 D2
May 13 23:58:23.475256 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 13 23:58:23.475307 kernel: pci 0000:00:1c.3: PCI bridge
to [bus 06-07] May 13 23:58:23.475357 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 13 23:58:23.475407 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 13 23:58:23.475459 kernel: pci_bus 0000:07: extended config space not accessible May 13 23:58:23.475517 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 May 13 23:58:23.475570 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] May 13 23:58:23.475623 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] May 13 23:58:23.475674 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] May 13 23:58:23.475727 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 23:58:23.475780 kernel: pci 0000:07:00.0: supports D1 D2 May 13 23:58:23.475832 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 23:58:23.475883 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] May 13 23:58:23.475932 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 13 23:58:23.475981 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 13 23:58:23.475990 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 May 13 23:58:23.475996 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 May 13 23:58:23.476002 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 May 13 23:58:23.476009 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 May 13 23:58:23.476015 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 May 13 23:58:23.476021 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 May 13 23:58:23.476026 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 May 13 23:58:23.476032 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 May 13 23:58:23.476038 kernel: iommu: Default domain type: Translated May 13 23:58:23.476043 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 23:58:23.476049 kernel: PCI: Using ACPI for IRQ 
routing May 13 23:58:23.476055 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 23:58:23.476061 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] May 13 23:58:23.476067 kernel: e820: reserve RAM buffer [mem 0x819ca000-0x83ffffff] May 13 23:58:23.476073 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] May 13 23:58:23.476078 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] May 13 23:58:23.476084 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] May 13 23:58:23.476089 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] May 13 23:58:23.476140 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device May 13 23:58:23.476191 kernel: pci 0000:07:00.0: vgaarb: bridge control possible May 13 23:58:23.476243 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 23:58:23.476256 kernel: vgaarb: loaded May 13 23:58:23.476277 kernel: clocksource: Switched to clocksource tsc-early May 13 23:58:23.476283 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:58:23.476289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:58:23.476295 kernel: pnp: PnP ACPI init May 13 23:58:23.476362 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved May 13 23:58:23.476413 kernel: pnp 00:02: [dma 0 disabled] May 13 23:58:23.476463 kernel: pnp 00:03: [dma 0 disabled] May 13 23:58:23.476514 kernel: system 00:04: [io 0x0680-0x069f] has been reserved May 13 23:58:23.476559 kernel: system 00:04: [io 0x164e-0x164f] has been reserved May 13 23:58:23.476606 kernel: system 00:05: [io 0x1854-0x1857] has been reserved May 13 23:58:23.476654 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved May 13 23:58:23.476697 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved May 13 23:58:23.476744 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved May 13 23:58:23.476787 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved May 13 23:58:23.476831 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved May 13 23:58:23.476875 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved May 13 23:58:23.476919 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved May 13 23:58:23.476964 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved May 13 23:58:23.477010 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved May 13 23:58:23.477057 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved May 13 23:58:23.477100 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved May 13 23:58:23.477144 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved May 13 23:58:23.477187 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved May 13 23:58:23.477231 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved May 13 23:58:23.477313 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved May 13 23:58:23.477361 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved May 13 23:58:23.477372 kernel: pnp: PnP ACPI: found 10 devices May 13 23:58:23.477378 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 23:58:23.477384 kernel: NET: Registered PF_INET protocol family May 13 23:58:23.477390 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:58:23.477395 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) May 13 23:58:23.477402 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:58:23.477408 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:58:23.477414 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 13 23:58:23.477420 kernel: TCP: Hash tables configured (established 262144 bind 65536) May 13 23:58:23.477427 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 23:58:23.477433 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 23:58:23.477438 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:58:23.477444 kernel: NET: Registered PF_XDP protocol family May 13 23:58:23.477493 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] May 13 23:58:23.477542 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] May 13 23:58:23.477592 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] May 13 23:58:23.477641 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] May 13 23:58:23.477694 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 13 23:58:23.477745 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] May 13 23:58:23.477794 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] May 13 23:58:23.477843 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 23:58:23.477891 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] May 13 23:58:23.477939 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] May 13 23:58:23.477987 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] May 13 23:58:23.478038 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] May 13 23:58:23.478086 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] May 13 23:58:23.478133 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] May 13 23:58:23.478181 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] May 13 23:58:23.478229 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] May 13 23:58:23.478299 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] May 13 23:58:23.478366 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] May 13 23:58:23.478416 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] May 13 23:58:23.478466 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] May 13 23:58:23.478515 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] May 13 23:58:23.478563 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] May 13 23:58:23.478611 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] May 13 23:58:23.478659 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] May 13 23:58:23.478703 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc May 13 23:58:23.478746 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 23:58:23.478790 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 23:58:23.478833 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 23:58:23.478876 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] May 13 23:58:23.478917 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] May 13 23:58:23.478966 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] May 13 23:58:23.479011 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] May 13 23:58:23.479062 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] May 13 23:58:23.479109 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] May 13 23:58:23.479157 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] May 13 23:58:23.479201 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] May 13 23:58:23.479253 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] May 13 23:58:23.479334 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] May 13 23:58:23.479380 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] May 13 23:58:23.479430 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] May 13 23:58:23.479438 kernel: PCI: CLS 64 bytes, default 64 May 13 23:58:23.479444 kernel: DMAR: No ATSR found May 13 23:58:23.479450 kernel: DMAR: No SATC 
found May 13 23:58:23.479455 kernel: DMAR: dmar0: Using Queued invalidation May 13 23:58:23.479503 kernel: pci 0000:00:00.0: Adding to iommu group 0 May 13 23:58:23.479553 kernel: pci 0000:00:01.0: Adding to iommu group 1 May 13 23:58:23.479601 kernel: pci 0000:00:08.0: Adding to iommu group 2 May 13 23:58:23.479651 kernel: pci 0000:00:12.0: Adding to iommu group 3 May 13 23:58:23.479702 kernel: pci 0000:00:14.0: Adding to iommu group 4 May 13 23:58:23.479749 kernel: pci 0000:00:14.2: Adding to iommu group 4 May 13 23:58:23.479797 kernel: pci 0000:00:15.0: Adding to iommu group 5 May 13 23:58:23.479844 kernel: pci 0000:00:15.1: Adding to iommu group 5 May 13 23:58:23.479893 kernel: pci 0000:00:16.0: Adding to iommu group 6 May 13 23:58:23.479940 kernel: pci 0000:00:16.1: Adding to iommu group 6 May 13 23:58:23.479988 kernel: pci 0000:00:16.4: Adding to iommu group 6 May 13 23:58:23.480036 kernel: pci 0000:00:17.0: Adding to iommu group 7 May 13 23:58:23.480086 kernel: pci 0000:00:1b.0: Adding to iommu group 8 May 13 23:58:23.480136 kernel: pci 0000:00:1b.4: Adding to iommu group 9 May 13 23:58:23.480184 kernel: pci 0000:00:1b.5: Adding to iommu group 10 May 13 23:58:23.480232 kernel: pci 0000:00:1c.0: Adding to iommu group 11 May 13 23:58:23.480323 kernel: pci 0000:00:1c.3: Adding to iommu group 12 May 13 23:58:23.480372 kernel: pci 0000:00:1e.0: Adding to iommu group 13 May 13 23:58:23.480420 kernel: pci 0000:00:1f.0: Adding to iommu group 14 May 13 23:58:23.480468 kernel: pci 0000:00:1f.4: Adding to iommu group 14 May 13 23:58:23.480518 kernel: pci 0000:00:1f.5: Adding to iommu group 14 May 13 23:58:23.480569 kernel: pci 0000:01:00.0: Adding to iommu group 1 May 13 23:58:23.480619 kernel: pci 0000:01:00.1: Adding to iommu group 1 May 13 23:58:23.480667 kernel: pci 0000:03:00.0: Adding to iommu group 15 May 13 23:58:23.480718 kernel: pci 0000:04:00.0: Adding to iommu group 16 May 13 23:58:23.480768 kernel: pci 0000:06:00.0: Adding to iommu group 17 May 13 
23:58:23.480820 kernel: pci 0000:07:00.0: Adding to iommu group 17 May 13 23:58:23.480828 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O May 13 23:58:23.480836 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 23:58:23.480842 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) May 13 23:58:23.480848 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer May 13 23:58:23.480853 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules May 13 23:58:23.480859 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules May 13 23:58:23.480865 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules May 13 23:58:23.480917 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) May 13 23:58:23.480926 kernel: Initialise system trusted keyrings May 13 23:58:23.480934 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 May 13 23:58:23.480939 kernel: Key type asymmetric registered May 13 23:58:23.480945 kernel: Asymmetric key parser 'x509' registered May 13 23:58:23.480951 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 23:58:23.480957 kernel: io scheduler mq-deadline registered May 13 23:58:23.480962 kernel: io scheduler kyber registered May 13 23:58:23.480968 kernel: io scheduler bfq registered May 13 23:58:23.481016 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 May 13 23:58:23.481065 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 May 13 23:58:23.481116 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 May 13 23:58:23.481165 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 May 13 23:58:23.481214 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 May 13 23:58:23.481286 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 May 13 23:58:23.481359 kernel: thermal LNXTHERM:00: registered as thermal_zone0 May 13 23:58:23.481368 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) May 13 23:58:23.481374 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. May 13 23:58:23.481379 kernel: pstore: Using crash dump compression: deflate May 13 23:58:23.481387 kernel: pstore: Registered erst as persistent store backend May 13 23:58:23.481393 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 23:58:23.481399 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:58:23.481404 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 23:58:23.481410 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 13 23:58:23.481416 kernel: hpet_acpi_add: no address or irqs in _CRS May 13 23:58:23.481464 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) May 13 23:58:23.481472 kernel: i8042: PNP: No PS/2 controller found. May 13 23:58:23.481519 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 May 13 23:58:23.481564 kernel: rtc_cmos rtc_cmos: registered as rtc0 May 13 23:58:23.481609 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-05-13T23:58:22 UTC (1747180702) May 13 23:58:23.481653 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram May 13 23:58:23.481661 kernel: intel_pstate: Intel P-state driver initializing May 13 23:58:23.481667 kernel: intel_pstate: Disabling energy efficiency optimization May 13 23:58:23.481673 kernel: intel_pstate: HWP enabled May 13 23:58:23.481679 kernel: NET: Registered PF_INET6 protocol family May 13 23:58:23.481686 kernel: Segment Routing with IPv6 May 13 23:58:23.481692 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:58:23.481698 kernel: NET: Registered PF_PACKET protocol family May 13 23:58:23.481703 kernel: Key type dns_resolver registered May 13 23:58:23.481709 kernel: microcode: Current revision: 0x00000102 May 13 23:58:23.481715 kernel: microcode: Updated early from: 0x000000f4 May 13 23:58:23.481721 kernel: microcode: Microcode Update Driver: v2.2. 
May 13 23:58:23.481726 kernel: IPI shorthand broadcast: enabled May 13 23:58:23.481732 kernel: sched_clock: Marking stable (2504000686, 1442031026)->(4500555965, -554524253) May 13 23:58:23.481739 kernel: registered taskstats version 1 May 13 23:58:23.481745 kernel: Loading compiled-in X.509 certificates May 13 23:58:23.481750 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 13 23:58:23.481756 kernel: Key type .fscrypt registered May 13 23:58:23.481762 kernel: Key type fscrypt-provisioning registered May 13 23:58:23.481768 kernel: ima: Allocated hash algorithm: sha1 May 13 23:58:23.481773 kernel: ima: No architecture policies found May 13 23:58:23.481779 kernel: clk: Disabling unused clocks May 13 23:58:23.481785 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 23:58:23.481792 kernel: Write protecting the kernel read-only data: 40960k May 13 23:58:23.481797 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 23:58:23.481803 kernel: Run /init as init process May 13 23:58:23.481809 kernel: with arguments: May 13 23:58:23.481814 kernel: /init May 13 23:58:23.481820 kernel: with environment: May 13 23:58:23.481825 kernel: HOME=/ May 13 23:58:23.481831 kernel: TERM=linux May 13 23:58:23.481837 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:58:23.481844 systemd[1]: Successfully made /usr/ read-only. May 13 23:58:23.481852 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:58:23.481858 systemd[1]: Detected architecture x86-64. May 13 23:58:23.481864 systemd[1]: Running in initrd. 
May 13 23:58:23.481870 systemd[1]: No hostname configured, using default hostname. May 13 23:58:23.481876 systemd[1]: Hostname set to . May 13 23:58:23.481882 systemd[1]: Initializing machine ID from random generator. May 13 23:58:23.481889 systemd[1]: Queued start job for default target initrd.target. May 13 23:58:23.481895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:58:23.481901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:58:23.481907 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:58:23.481913 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:58:23.481919 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:58:23.481925 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:58:23.481933 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:58:23.481939 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:58:23.481945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:58:23.481951 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:58:23.481957 systemd[1]: Reached target paths.target - Path Units. May 13 23:58:23.481963 systemd[1]: Reached target slices.target - Slice Units. May 13 23:58:23.481969 systemd[1]: Reached target swap.target - Swaps. May 13 23:58:23.481975 systemd[1]: Reached target timers.target - Timer Units. May 13 23:58:23.481982 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 13 23:58:23.481988 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:58:23.481994 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:58:23.482000 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:58:23.482006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:58:23.482012 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:58:23.482018 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:58:23.482024 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:58:23.482030 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:58:23.482037 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz May 13 23:58:23.482043 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns May 13 23:58:23.482049 kernel: clocksource: Switched to clocksource tsc May 13 23:58:23.482055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:58:23.482061 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:58:23.482067 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:58:23.482073 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:58:23.482090 systemd-journald[266]: Collecting audit messages is disabled. May 13 23:58:23.482105 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:58:23.482111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:23.482117 systemd-journald[266]: Journal started May 13 23:58:23.482132 systemd-journald[266]: Runtime Journal (/run/log/journal/dfe6728ff99f47a1ac17248395ad729f) is 8M, max 639.8M, 631.8M free. 
May 13 23:58:23.517258 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:58:23.517371 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:58:23.517634 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:58:23.517767 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:58:23.518809 systemd-modules-load[268]: Inserted module 'overlay' May 13 23:58:23.518943 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:58:23.519280 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:58:23.535257 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:58:23.535979 systemd-modules-load[268]: Inserted module 'br_netfilter' May 13 23:58:23.560973 kernel: Bridge firewalling registered May 13 23:58:23.536697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:58:23.663170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:23.675845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:58:23.698834 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:58:23.726462 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:58:23.734915 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:58:23.756114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:58:23.784291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:58:23.793944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 23:58:23.819078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:58:23.850724 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:58:23.852927 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:58:23.899302 systemd-resolved[300]: Positive Trust Anchors: May 13 23:58:23.899312 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:58:23.928352 dracut-cmdline[310]: dracut-dracut-053 May 13 23:58:23.928352 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:58:24.003347 kernel: SCSI subsystem initialized May 13 23:58:24.003364 kernel: Loading iSCSI transport class v2.0-870. 
May 13 23:58:24.003372 kernel: iscsi: registered transport (tcp) May 13 23:58:23.899351 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:58:24.063601 kernel: iscsi: registered transport (qla4xxx) May 13 23:58:24.063621 kernel: QLogic iSCSI HBA Driver May 13 23:58:23.901786 systemd-resolved[300]: Defaulting to hostname 'linux'. May 13 23:58:23.902591 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:58:23.916400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:58:24.041259 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:58:24.074836 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:58:24.160651 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:58:24.160671 kernel: device-mapper: uevent: version 1.0.3 May 13 23:58:24.169432 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:58:24.205313 kernel: raid6: avx2x4 gen() 47172 MB/s May 13 23:58:24.226283 kernel: raid6: avx2x2 gen() 53727 MB/s May 13 23:58:24.252376 kernel: raid6: avx2x1 gen() 45020 MB/s May 13 23:58:24.252393 kernel: raid6: using algorithm avx2x2 gen() 53727 MB/s May 13 23:58:24.279478 kernel: raid6: .... 
xor() 32417 MB/s, rmw enabled May 13 23:58:24.279496 kernel: raid6: using avx2x2 recovery algorithm May 13 23:58:24.300257 kernel: xor: automatically using best checksumming function avx May 13 23:58:24.398260 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:58:24.403922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:58:24.404885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:58:24.469697 systemd-udevd[495]: Using default interface naming scheme 'v255'. May 13 23:58:24.474092 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:58:24.491127 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:58:24.542929 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation May 13 23:58:24.559842 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:58:24.573833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:58:24.717033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:58:24.742369 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 23:58:24.742425 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 23:58:24.743256 kernel: cryptd: max_cpu_qlen set to 1000 May 13 23:58:24.763093 kernel: ACPI: bus type USB registered May 13 23:58:24.763115 kernel: usbcore: registered new interface driver usbfs May 13 23:58:24.763654 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:58:24.819879 kernel: usbcore: registered new interface driver hub May 13 23:58:24.819899 kernel: usbcore: registered new device driver usb May 13 23:58:24.819909 kernel: PTP clock support registered May 13 23:58:24.819918 kernel: libata version 3.00 loaded. May 13 23:58:24.819932 kernel: AVX2 version of gcm_enc/dec engaged. 
May 13 23:58:24.819942 kernel: AES CTR mode by8 optimization enabled May 13 23:58:24.819951 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 13 23:58:24.819960 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 13 23:58:24.819969 kernel: ahci 0000:00:17.0: version 3.0 May 13 23:58:24.820080 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 13 23:58:24.820161 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode May 13 23:58:24.820236 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 May 13 23:58:24.791020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:58:25.034912 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst May 13 23:58:25.035032 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 May 13 23:58:25.035153 kernel: igb 0000:03:00.0: added PHC on eth0 May 13 23:58:25.035273 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller May 13 23:58:25.035387 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 13 23:58:25.035515 kernel: scsi host0: ahci May 13 23:58:25.035608 kernel: scsi host1: ahci May 13 23:58:25.035675 kernel: scsi host2: ahci May 13 23:58:25.035744 kernel: scsi host3: ahci May 13 23:58:25.035808 kernel: scsi host4: ahci May 13 23:58:25.035867 kernel: scsi host5: ahci May 13 23:58:25.035926 kernel: scsi host6: ahci May 13 23:58:25.035986 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 May 13 23:58:25.035995 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 May 13 23:58:25.036003 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 May 13 23:58:25.036010 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 May 13 23:58:25.036017 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132
May 13 23:58:25.036024 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 May 13 23:58:25.036031 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 May 13 23:58:25.036038 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 May 13 23:58:25.036106 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:31:6c May 13 23:58:25.036177 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed May 13 23:58:25.036243 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 May 13 23:58:25.036325 kernel: hub 1-0:1.0: USB hub found May 13 23:58:25.036395 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 13 23:58:25.036463 kernel: igb 0000:04:00.0: added PHC on eth1 May 13 23:58:25.036532 kernel: hub 1-0:1.0: 16 ports detected May 13 23:58:25.036594 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection May 13 23:58:25.036662 kernel: hub 2-0:1.0: USB hub found May 13 23:58:25.036733 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:31:6d May 13 23:58:25.036801 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 May 13 23:58:25.036869 kernel: hub 2-0:1.0: 10 ports detected May 13 23:58:25.036930 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) May 13 23:58:24.791110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:58:25.067429 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 May 13 23:58:25.067518 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 13 23:58:25.067409 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:58:25.067487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:58:25.067584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:58:25.097396 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:25.127827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:25.147552 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:58:25.187467 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 13 23:58:25.187482 kernel: ata7: SATA link down (SStatus 0 SControl 300) May 13 23:58:25.157486 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:58:25.213910 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 23:58:25.213924 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 23:58:25.213934 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 13 23:58:25.213943 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 23:58:25.213952 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) May 13 23:58:25.183360 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:58:25.237422 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 13 23:58:25.237434 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 May 13 23:58:25.213311 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:58:25.236761 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:58:25.237905 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
May 13 23:58:25.241313 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 13 23:58:25.242326 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA May 13 23:58:25.248324 kernel: ata1.00: Features: NCQ-prio May 13 23:58:25.249331 kernel: ata2.00: Features: NCQ-prio May 13 23:58:25.252320 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd May 13 23:58:25.298301 kernel: ata1.00: configured for UDMA/133 May 13 23:58:25.298318 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 13 23:58:25.307319 kernel: ata2.00: configured for UDMA/133 May 13 23:58:25.312326 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 May 13 23:58:25.328328 kernel: igb 0000:03:00.0 eno1: renamed from eth0 May 13 23:58:25.328440 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) May 13 23:58:25.350076 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged May 13 23:58:25.359256 kernel: igb 0000:04:00.0 eno2: renamed from eth1 May 13 23:58:25.365371 kernel: ata1.00: Enabling discard_zeroes_data May 13 23:58:25.365390 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 13 23:58:25.378964 kernel: ata2.00: Enabling discard_zeroes_data May 13 23:58:25.378985 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 13 23:58:25.379064 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) May 13 23:58:25.380256 kernel: hub 1-14:1.0: USB hub found May 13 23:58:25.380414 kernel: hub 1-14:1.0: 4 ports detected May 13 23:58:25.391668 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 23:58:25.401047 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks May 13 23:58:25.401133 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 May 13 23:58:25.401204 kernel: sd 1:0:0:0: [sdb] Write Protect is off May 13 23:58:25.405822 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 13 23:58:25.411372 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 May 13 23:58:25.411458 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes May 13 23:58:25.426319 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 13 23:58:25.450268 kernel: ata1.00: Enabling discard_zeroes_data May 13 23:58:25.463315 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes May 13 23:58:25.463403 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 23:58:25.463470 kernel: ata2.00: Enabling discard_zeroes_data May 13 23:58:25.484912 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:58:25.484929 kernel: GPT:9289727 != 937703087 May 13 23:58:25.492567 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:58:25.497804 kernel: GPT:9289727 != 937703087 May 13 23:58:25.504595 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:58:25.511214 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 13 23:58:25.512299 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk May 13 23:58:25.537405 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:25.576452 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/sdb3 scanned by (udev-worker) (553) May 13 23:58:25.576468 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by (udev-worker) (550) May 13 23:58:25.566849 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:58:25.609659 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
May 13 23:58:25.655352 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 13 23:58:25.655450 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 May 13 23:58:25.655526 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) May 13 23:58:25.650137 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. May 13 23:58:25.684328 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd May 13 23:58:25.683885 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. May 13 23:58:25.695321 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. May 13 23:58:25.702214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. May 13 23:58:25.742770 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:58:25.764785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:58:25.832339 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:58:25.832354 kernel: usbcore: registered new interface driver usbhid May 13 23:58:25.832362 kernel: usbhid: USB HID core driver May 13 23:58:25.832369 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 May 13 23:58:25.832380 kernel: ata2.00: Enabling discard_zeroes_data May 13 23:58:25.832387 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 13 23:58:25.832446 disk-uuid[701]: Primary Header is updated. May 13 23:58:25.832446 disk-uuid[701]: Secondary Entries is updated. May 13 23:58:25.832446 disk-uuid[701]: Secondary Header is updated. May 13 23:58:25.886885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 23:58:25.957904 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 May 13 23:58:25.958011 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 May 13 23:58:25.958021 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 May 13 23:58:25.958095 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) May 13 23:58:25.958171 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged May 13 23:58:26.214339 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 13 23:58:26.229452 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 May 13 23:58:26.243446 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 May 13 23:58:26.830639 kernel: ata2.00: Enabling discard_zeroes_data May 13 23:58:26.839126 disk-uuid[702]: The operation has completed successfully. May 13 23:58:26.848393 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 May 13 23:58:26.879505 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:58:26.879552 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:58:26.917370 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:58:26.945334 sh[736]: Success May 13 23:58:26.960310 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 23:58:27.003140 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:58:27.013624 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:58:27.040983 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 13 23:58:27.102286 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 13 23:58:27.102301 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:27.102309 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:58:27.102317 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:58:27.102324 kernel: BTRFS info (device dm-0): using free space tree May 13 23:58:27.111254 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 13 23:58:27.112804 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:58:27.121679 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:58:27.122124 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:58:27.155112 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:58:27.213310 kernel: BTRFS info (device sdb6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:27.213370 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:27.219246 kernel: BTRFS info (device sdb6): using free space tree May 13 23:58:27.234319 kernel: BTRFS info (device sdb6): enabling ssd optimizations May 13 23:58:27.234350 kernel: BTRFS info (device sdb6): auto enabling async discard May 13 23:58:27.238700 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:58:27.271666 kernel: BTRFS info (device sdb6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:27.259996 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:58:27.286218 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 13 23:58:27.303810 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:58:27.346044 systemd-networkd[916]: lo: Link UP May 13 23:58:27.346047 systemd-networkd[916]: lo: Gained carrier May 13 23:58:27.348598 systemd-networkd[916]: Enumeration completed May 13 23:58:27.348643 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:58:27.349366 systemd-networkd[916]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:27.392044 ignition[915]: Ignition 2.20.0 May 13 23:58:27.363373 systemd[1]: Reached target network.target - Network. May 13 23:58:27.392049 ignition[915]: Stage: fetch-offline May 13 23:58:27.377016 systemd-networkd[916]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:27.392067 ignition[915]: no configs at "/usr/lib/ignition/base.d" May 13 23:58:27.394511 unknown[915]: fetched base config from "system" May 13 23:58:27.392073 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 13 23:58:27.394515 unknown[915]: fetched user config from "system" May 13 23:58:27.392126 ignition[915]: parsed url from cmdline: "" May 13 23:58:27.395470 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:58:27.392128 ignition[915]: no config URL provided May 13 23:58:27.405013 systemd-networkd[916]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:27.392131 ignition[915]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:58:27.408715 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
May 13 23:58:27.392153 ignition[915]: parsing config with SHA512: 7c15181a71949ae9c9086d69a5a97a09cbe5e51e83f890865818f02971e78ee2233be57417a51a2449dae905dee1c0ef5d52d8cf2edcea225caa8968010819e6 May 13 23:58:27.409227 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:58:27.394733 ignition[915]: fetch-offline: fetch-offline passed May 13 23:58:27.394735 ignition[915]: POST message to Packet Timeline May 13 23:58:27.394738 ignition[915]: POST Status error: resource requires networking May 13 23:58:27.394779 ignition[915]: Ignition finished successfully May 13 23:58:27.452255 ignition[929]: Ignition 2.20.0 May 13 23:58:27.452265 ignition[929]: Stage: kargs May 13 23:58:27.613373 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 13 23:58:27.610912 systemd-networkd[916]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:27.452481 ignition[929]: no configs at "/usr/lib/ignition/base.d" May 13 23:58:27.452500 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 13 23:58:27.453662 ignition[929]: kargs: kargs passed May 13 23:58:27.453668 ignition[929]: POST message to Packet Timeline May 13 23:58:27.453694 ignition[929]: GET https://metadata.packet.net/metadata: attempt #1 May 13 23:58:27.454532 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43778->[::1]:53: read: connection refused May 13 23:58:27.654847 ignition[929]: GET https://metadata.packet.net/metadata: attempt #2 May 13 23:58:27.656125 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44047->[::1]:53: read: connection refused May 13 23:58:27.874341 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 13 23:58:27.875176 systemd-networkd[916]: eno1: Link UP May 13 23:58:27.875332 systemd-networkd[916]: eno2: Link UP
May 13 23:58:27.875476 systemd-networkd[916]: enp1s0f0np0: Link UP May 13 23:58:27.875648 systemd-networkd[916]: enp1s0f0np0: Gained carrier May 13 23:58:27.885550 systemd-networkd[916]: enp1s0f1np1: Link UP May 13 23:58:27.920445 systemd-networkd[916]: enp1s0f0np0: DHCPv4 address 145.40.90.165/31, gateway 145.40.90.164 acquired from 145.40.83.140 May 13 23:58:28.056379 ignition[929]: GET https://metadata.packet.net/metadata: attempt #3 May 13 23:58:28.057519 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47073->[::1]:53: read: connection refused May 13 23:58:28.674019 systemd-networkd[916]: enp1s0f1np1: Gained carrier May 13 23:58:28.857900 ignition[929]: GET https://metadata.packet.net/metadata: attempt #4 May 13 23:58:28.858989 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49649->[::1]:53: read: connection refused May 13 23:58:29.633862 systemd-networkd[916]: enp1s0f0np0: Gained IPv6LL May 13 23:58:30.460631 ignition[929]: GET https://metadata.packet.net/metadata: attempt #5 May 13 23:58:30.461950 ignition[929]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47896->[::1]:53: read: connection refused May 13 23:58:30.593863 systemd-networkd[916]: enp1s0f1np1: Gained IPv6LL May 13 23:58:33.665309 ignition[929]: GET https://metadata.packet.net/metadata: attempt #6 May 13 23:58:34.706344 ignition[929]: GET result: OK May 13 23:58:35.039420 ignition[929]: Ignition finished successfully May 13 23:58:35.044528 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:58:35.059627 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:58:35.124896 ignition[947]: Ignition 2.20.0 May 13 23:58:35.124920 ignition[947]: Stage: disks May 13 23:58:35.125156 ignition[947]: no configs at "/usr/lib/ignition/base.d" May 13 23:58:35.125162 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 13 23:58:35.125664 ignition[947]: disks: disks passed May 13 23:58:35.125666 ignition[947]: POST message to Packet Timeline May 13 23:58:35.125677 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1 May 13 23:58:36.122764 ignition[947]: GET result: OK May 13 23:58:36.986088 ignition[947]: Ignition finished successfully May 13 23:58:36.989818 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:58:37.005465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:58:37.023508 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:58:37.046551 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:58:37.067579 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:58:37.087572 systemd[1]: Reached target basic.target - Basic System. May 13 23:58:37.109111 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:58:37.159955 systemd-fsck[967]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:58:37.169667 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:58:37.183958 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:58:37.286345 kernel: EXT4-fs (sdb9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 13 23:58:37.286595 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:58:37.300682 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:58:37.312553 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 13 23:58:37.331189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:58:37.347113 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 13 23:58:37.418364 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sdb6 scanned by mount (976) May 13 23:58:37.418379 kernel: BTRFS info (device sdb6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:37.418387 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:37.418394 kernel: BTRFS info (device sdb6): using free space tree May 13 23:58:37.418403 kernel: BTRFS info (device sdb6): enabling ssd optimizations May 13 23:58:37.418418 kernel: BTRFS info (device sdb6): auto enabling async discard May 13 23:58:37.353942 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 13 23:58:37.429350 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:58:37.429369 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:58:37.487462 coreos-metadata[978]: May 13 23:58:37.467 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 13 23:58:37.449258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:58:37.529550 coreos-metadata[979]: May 13 23:58:37.468 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 13 23:58:37.476548 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:58:37.496505 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 13 23:58:37.562357 initrd-setup-root[1008]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:58:37.572369 initrd-setup-root[1015]: cut: /sysroot/etc/group: No such file or directory May 13 23:58:37.582452 initrd-setup-root[1022]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:58:37.593372 initrd-setup-root[1029]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:58:37.616540 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:58:37.627174 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:58:37.637026 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:58:37.679332 kernel: BTRFS info (device sdb6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:37.665044 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:58:37.684901 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:58:37.695002 ignition[1098]: INFO : Ignition 2.20.0 May 13 23:58:37.695002 ignition[1098]: INFO : Stage: mount May 13 23:58:37.718349 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:58:37.718349 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 13 23:58:37.718349 ignition[1098]: INFO : mount: mount passed May 13 23:58:37.718349 ignition[1098]: INFO : POST message to Packet Timeline May 13 23:58:37.718349 ignition[1098]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 13 23:58:38.278655 coreos-metadata[978]: May 13 23:58:38.278 INFO Fetch successful May 13 23:58:38.355196 coreos-metadata[978]: May 13 23:58:38.355 INFO wrote hostname ci-4284.0.0-n-b3bb28caaa to /sysroot/etc/hostname May 13 23:58:38.356570 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 13 23:58:38.401483 coreos-metadata[979]: May 13 23:58:38.401 INFO Fetch successful May 13 23:58:38.436584 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 13 23:58:38.436638 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 13 23:58:38.712435 ignition[1098]: INFO : GET result: OK May 13 23:58:39.073862 ignition[1098]: INFO : Ignition finished successfully May 13 23:58:39.077469 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:58:39.095981 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:58:39.130764 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:58:39.182355 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by mount (1122) May 13 23:58:39.182384 kernel: BTRFS info (device sdb6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:39.190448 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:39.196343 kernel: BTRFS info (device sdb6): using free space tree May 13 23:58:39.211409 kernel: BTRFS info (device sdb6): enabling ssd optimizations May 13 23:58:39.211425 kernel: BTRFS info (device sdb6): auto enabling async discard May 13 23:58:39.213285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:58:39.244455 ignition[1139]: INFO : Ignition 2.20.0 May 13 23:58:39.244455 ignition[1139]: INFO : Stage: files May 13 23:58:39.259480 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:58:39.259480 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 13 23:58:39.259480 ignition[1139]: DEBUG : files: compiled without relabeling support, skipping May 13 23:58:39.259480 ignition[1139]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:58:39.259480 ignition[1139]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:58:39.259480 ignition[1139]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:58:39.259480 ignition[1139]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:58:39.259480 ignition[1139]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:58:39.259480 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:58:39.259480 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 23:58:39.249323 unknown[1139]: wrote ssh authorized keys file for user: core May 13 23:58:39.392330 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:58:39.426727 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:58:39.426727 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:39.459459 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 23:58:39.791163 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:58:40.059723 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:40.059723 ignition[1139]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:58:40.090485 ignition[1139]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:58:40.090485 ignition[1139]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:58:40.090485 ignition[1139]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:58:40.090485 ignition[1139]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:58:40.090485 ignition[1139]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:58:40.090485 ignition[1139]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:58:40.090485 ignition[1139]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:58:40.090485 ignition[1139]: INFO : files: files passed May 13 23:58:40.090485 ignition[1139]: INFO : POST message to Packet Timeline May 13 23:58:40.090485 ignition[1139]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 13 23:58:41.032989 ignition[1139]: INFO : GET result: OK May 13 23:58:41.399708 ignition[1139]: INFO : Ignition finished successfully May 13 23:58:41.402889 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:58:41.422846 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:58:41.437876 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:58:41.467696 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:58:41.467773 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:58:41.494690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:58:41.506828 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:58:41.529499 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:58:41.560467 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:58:41.560467 initrd-setup-root-after-ignition[1180]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:58:41.574574 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:58:41.632146 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:58:41.632429 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:58:41.652688 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:58:41.673573 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:58:41.694758 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:58:41.697282 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:58:41.781428 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:58:41.796438 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:58:41.867803 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:58:41.879896 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:58:41.900994 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:58:41.918967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:58:41.919413 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:58:41.958747 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:58:41.968869 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:58:41.986992 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:58:42.005889 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:58:42.026976 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:58:42.047896 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:58:42.067884 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:58:42.089937 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:58:42.110923 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:58:42.130984 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:58:42.148886 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:58:42.149325 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:58:42.173980 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:58:42.193526 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:58:42.214430 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:58:42.214637 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:58:42.236580 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:58:42.236806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:58:42.266848 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:58:42.267336 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:58:42.286045 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:58:42.303719 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:58:42.304157 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:58:42.324560 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:58:42.342554 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:58:42.360544 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:58:42.360629 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:58:42.380581 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:58:42.380658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:58:42.403611 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:58:42.403726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:58:42.422597 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:58:42.532710 ignition[1204]: INFO : Ignition 2.20.0
May 13 23:58:42.532710 ignition[1204]: INFO : Stage: umount
May 13 23:58:42.532710 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:58:42.532710 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 13 23:58:42.532710 ignition[1204]: INFO : umount: umount passed
May 13 23:58:42.532710 ignition[1204]: INFO : POST message to Packet Timeline
May 13 23:58:42.532710 ignition[1204]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 13 23:58:42.422706 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:58:42.442593 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 13 23:58:42.442701 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:58:42.462287 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:58:42.469501 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:58:42.469605 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:58:42.505259 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:58:42.514546 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:58:42.514726 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:58:42.552015 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:58:42.552419 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:58:42.605904 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:58:42.608726 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:58:42.608975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:58:42.619946 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:58:42.620183 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:58:43.482985 ignition[1204]: INFO : GET result: OK
May 13 23:58:43.894314 ignition[1204]: INFO : Ignition finished successfully
May 13 23:58:43.896092 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:58:43.896236 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:58:43.914733 systemd[1]: Stopped target network.target - Network.
May 13 23:58:43.929618 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:58:43.929813 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:58:43.947604 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:58:43.947751 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:58:43.965671 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:58:43.965843 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:58:43.983681 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:58:43.983845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:58:44.001672 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:58:44.001850 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:58:44.021111 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:58:44.039754 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:58:44.059332 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:58:44.059607 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:58:44.081379 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:58:44.081478 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:58:44.081525 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:58:44.105261 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:58:44.105767 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:58:44.105803 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:58:44.127585 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:58:44.136575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:58:44.136662 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:58:44.164748 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:58:44.164922 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:58:44.184980 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:58:44.185139 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:58:44.202757 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:58:44.202929 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:58:44.225041 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:58:44.249834 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:58:44.250038 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:58:44.251129 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:58:44.251492 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:58:44.269147 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:58:44.269299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:58:44.275551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:58:44.275570 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:58:44.295565 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:58:44.295617 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:58:44.332607 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:58:44.332713 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:58:44.362752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:58:44.362935 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:58:44.395926 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:58:44.403449 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:58:44.403480 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:58:44.703492 systemd-journald[266]: Received SIGTERM from PID 1 (systemd).
May 13 23:58:44.439419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:58:44.439465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:58:44.461906 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:58:44.462026 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:58:44.462721 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:58:44.462868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:58:44.578121 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:58:44.578421 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:58:44.591458 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:58:44.610551 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:58:44.662688 systemd[1]: Switching root.
May 13 23:58:44.813328 systemd-journald[266]: Journal stopped
May 13 23:58:46.511879 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:58:46.511895 kernel: SELinux: policy capability open_perms=1
May 13 23:58:46.511902 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:58:46.511907 kernel: SELinux: policy capability always_check_network=0
May 13 23:58:46.511914 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:58:46.511919 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:58:46.511926 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:58:46.511931 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:58:46.511936 kernel: audit: type=1403 audit(1747180724.912:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:58:46.511943 systemd[1]: Successfully loaded SELinux policy in 74.218ms.
May 13 23:58:46.511951 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.934ms.
May 13 23:58:46.511958 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:58:46.511964 systemd[1]: Detected architecture x86-64.
May 13 23:58:46.511970 systemd[1]: Detected first boot.
May 13 23:58:46.511977 systemd[1]: Hostname set to .
May 13 23:58:46.511984 systemd[1]: Initializing machine ID from random generator.
May 13 23:58:46.511991 zram_generator::config[1258]: No configuration found.
May 13 23:58:46.511998 systemd[1]: Populated /etc with preset unit settings.
May 13 23:58:46.512004 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:58:46.512011 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:58:46.512017 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:58:46.512023 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:58:46.512030 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:58:46.512037 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:58:46.512043 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:58:46.512050 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:58:46.512056 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:58:46.512063 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:58:46.512069 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:58:46.512077 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:58:46.512083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:58:46.512090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:58:46.512096 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:58:46.512102 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:58:46.512109 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:58:46.512117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:58:46.512123 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
May 13 23:58:46.512131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:58:46.512138 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:58:46.512144 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:58:46.512153 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:58:46.512159 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:58:46.512166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:58:46.512173 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:58:46.512179 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:58:46.512187 systemd[1]: Reached target swap.target - Swaps.
May 13 23:58:46.512194 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:58:46.512200 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:58:46.512207 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:58:46.512213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:58:46.512220 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:58:46.512228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:58:46.512235 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:58:46.512242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:58:46.512259 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:58:46.512267 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:58:46.512274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:58:46.512281 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:58:46.512289 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:58:46.512296 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:58:46.512303 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:58:46.512310 systemd[1]: Reached target machines.target - Containers.
May 13 23:58:46.512317 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:58:46.512323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:58:46.512330 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:58:46.512337 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:58:46.512345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:58:46.512352 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:58:46.512359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:58:46.512365 kernel: ACPI: bus type drm_connector registered
May 13 23:58:46.512371 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:58:46.512378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:58:46.512385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:58:46.512392 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:58:46.512400 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:58:46.512406 kernel: loop: module loaded
May 13 23:58:46.512412 kernel: fuse: init (API version 7.39)
May 13 23:58:46.512420 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:58:46.512426 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:58:46.512434 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:58:46.512440 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:58:46.512447 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:58:46.512454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:58:46.512470 systemd-journald[1363]: Collecting audit messages is disabled.
May 13 23:58:46.512487 systemd-journald[1363]: Journal started
May 13 23:58:46.512502 systemd-journald[1363]: Runtime Journal (/run/log/journal/b7865dc903c84df8acf6efbda609998d) is 8M, max 639.8M, 631.8M free.
May 13 23:58:45.346622 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:58:45.362311 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
May 13 23:58:45.363265 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:58:46.540317 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:58:46.562300 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:58:46.573448 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:58:46.604568 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:58:46.604602 systemd[1]: Stopped verity-setup.service.
May 13 23:58:46.630315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:58:46.638301 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:58:46.647694 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:58:46.657380 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:58:46.667370 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:58:46.677549 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:58:46.687517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:58:46.698778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:58:46.710226 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:58:46.723227 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:58:46.736173 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:58:46.736676 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:58:46.749181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:58:46.749734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:58:46.762204 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:58:46.762693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:58:46.774278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:58:46.774763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:58:46.787215 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:58:46.787721 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:58:46.799347 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:58:46.799856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:58:46.810341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:58:46.822270 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:58:46.834242 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:58:46.847289 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:58:46.860244 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:58:46.896403 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:58:46.910748 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:58:46.929758 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:58:46.939456 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:58:46.939477 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:58:46.940149 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:58:46.966135 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:58:46.978711 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:58:46.989603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:58:46.991630 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:58:47.001907 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:58:47.012403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:58:47.013160 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:58:47.016861 systemd-journald[1363]: Time spent on flushing to /var/log/journal/b7865dc903c84df8acf6efbda609998d is 13.488ms for 1368 entries.
May 13 23:58:47.016861 systemd-journald[1363]: System Journal (/var/log/journal/b7865dc903c84df8acf6efbda609998d) is 8M, max 195.6M, 187.6M free.
May 13 23:58:47.041493 systemd-journald[1363]: Received client request to flush runtime journal.
May 13 23:58:47.030395 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:58:47.031083 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:58:47.041040 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:58:47.052960 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:58:47.066295 kernel: loop0: detected capacity change from 0 to 109808
May 13 23:58:47.066700 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:58:47.079840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:58:47.092259 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:58:47.097975 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:58:47.109461 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:58:47.120496 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:58:47.130421 kernel: loop1: detected capacity change from 0 to 151640
May 13 23:58:47.137504 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:58:47.148511 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:58:47.158479 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:58:47.171341 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:58:47.183138 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:58:47.201621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:58:47.209310 kernel: loop2: detected capacity change from 0 to 210664
May 13 23:58:47.219722 udevadm[1404]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:58:47.224159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:58:47.229495 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:58:47.242822 systemd-tmpfiles[1417]: ACLs are not supported, ignoring.
May 13 23:58:47.242832 systemd-tmpfiles[1417]: ACLs are not supported, ignoring.
May 13 23:58:47.245237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:58:47.276298 kernel: loop3: detected capacity change from 0 to 8
May 13 23:58:47.318302 kernel: loop4: detected capacity change from 0 to 109808
May 13 23:58:47.336256 kernel: loop5: detected capacity change from 0 to 151640
May 13 23:58:47.361298 kernel: loop6: detected capacity change from 0 to 210664
May 13 23:58:47.362290 ldconfig[1394]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:58:47.365828 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:58:47.382302 kernel: loop7: detected capacity change from 0 to 8
May 13 23:58:47.382304 (sd-merge)[1423]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
May 13 23:58:47.382561 (sd-merge)[1423]: Merged extensions into '/usr'.
May 13 23:58:47.385055 systemd[1]: Reload requested from client PID 1399 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:58:47.385062 systemd[1]: Reloading...
May 13 23:58:47.411264 zram_generator::config[1450]: No configuration found.
May 13 23:58:47.484283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:58:47.537081 systemd[1]: Reloading finished in 151 ms.
May 13 23:58:47.554834 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:58:47.567632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:58:47.597518 systemd[1]: Starting ensure-sysext.service...
May 13 23:58:47.605426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:58:47.617153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:58:47.625304 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:58:47.625499 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:58:47.626092 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:58:47.626295 systemd-tmpfiles[1508]: ACLs are not supported, ignoring.
May 13 23:58:47.626338 systemd-tmpfiles[1508]: ACLs are not supported, ignoring.
May 13 23:58:47.628539 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:58:47.628543 systemd-tmpfiles[1508]: Skipping /boot
May 13 23:58:47.634024 systemd[1]: Reload requested from client PID 1507 ('systemctl') (unit ensure-sysext.service)...
May 13 23:58:47.634033 systemd[1]: Reloading...
May 13 23:58:47.634845 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:58:47.634850 systemd-tmpfiles[1508]: Skipping /boot
May 13 23:58:47.645392 systemd-udevd[1509]: Using default interface naming scheme 'v255'.
May 13 23:58:47.662316 zram_generator::config[1538]: No configuration found.
May 13 23:58:47.696424 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (1588)
May 13 23:58:47.712267 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
May 13 23:58:47.712326 kernel: ACPI: button: Sleep Button [SLPB]
May 13 23:58:47.716388 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 13 23:58:47.716422 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:58:47.716433 kernel: IPMI message handler: version 39.2
May 13 23:58:47.732281 kernel: ACPI: button: Power Button [PWRF]
May 13 23:58:47.743396 kernel: ipmi device interface
May 13 23:58:47.770940 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
May 13 23:58:47.771162 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
May 13 23:58:47.771306 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
May 13 23:58:47.771427 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
May 13 23:58:47.772939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:58:47.785301 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
May 13 23:58:47.804746 kernel: ipmi_si: IPMI System Interface driver
May 13 23:58:47.804805 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
May 13 23:58:47.812302 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
May 13 23:58:47.818493 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
May 13 23:58:47.824830 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
May 13 23:58:47.833101 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
May 13 23:58:47.842395 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
May 13 23:58:47.848397 kernel: ipmi_si: Adding ACPI-specified kcs state machine
May 13 23:58:47.853478 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
May 13 23:58:47.853607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
May 13 23:58:47.858607 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
May 13 23:58:47.859261 kernel: iTCO_vendor_support: vendor-support=0
May 13 23:58:47.874452 systemd[1]: Reloading finished in 240 ms.
May 13 23:58:47.906854 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
May 13 23:58:47.907111 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
May 13 23:58:47.907241 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
May 13 23:58:47.933239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:58:47.948842 kernel: intel_rapl_common: Found RAPL domain package
May 13 23:58:47.948885 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
May 13 23:58:47.949149 kernel: intel_rapl_common: Found RAPL domain core
May 13 23:58:47.955254 kernel: intel_rapl_common: Found RAPL domain dram
May 13 23:58:47.989698 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:58:48.004253 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
May 13 23:58:48.012259 kernel: ipmi_ssif: IPMI SSIF Interface driver
May 13 23:58:48.021771 systemd[1]: Finished ensure-sysext.service.
May 13 23:58:48.037700 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:58:48.061188 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
May 13 23:58:48.070310 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:58:48.070976 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:58:48.085733 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:58:48.096390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:58:48.096987 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:58:48.106434 augenrules[1715]: No rules
May 13 23:58:48.108921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:58:48.114846 lvm[1708]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:58:48.119883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:58:48.130834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:58:48.142823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:58:48.153359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:58:48.153864 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:58:48.165282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:58:48.165864 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:58:48.178127 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:58:48.179047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:58:48.179924 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:58:48.194951 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:58:48.220416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:58:48.231288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:58:48.239626 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:58:48.239730 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:58:48.251430 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:58:48.251599 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:58:48.251741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:58:48.251828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:58:48.251972 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:58:48.252054 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:58:48.252191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:58:48.252276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:58:48.252408 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:58:48.252490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:58:48.252635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:58:48.252789 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:58:48.256799 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:58:48.257492 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:58:48.257523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:58:48.257555 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:58:48.258117 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:58:48.258902 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:58:48.258926 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:58:48.259131 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:58:48.272350 lvm[1743]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:58:48.275088 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:58:48.289793 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:58:48.311399 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:58:48.323682 systemd-resolved[1728]: Positive Trust Anchors:
May 13 23:58:48.323690 systemd-resolved[1728]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:58:48.323715 systemd-resolved[1728]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:58:48.326527 systemd-resolved[1728]: Using system hostname 'ci-4284.0.0-n-b3bb28caaa'.
May 13 23:58:48.332418 systemd-networkd[1727]: lo: Link UP
May 13 23:58:48.332421 systemd-networkd[1727]: lo: Gained carrier
May 13 23:58:48.335041 systemd-networkd[1727]: bond0: netdev ready
May 13 23:58:48.336032 systemd-networkd[1727]: Enumeration completed
May 13 23:58:48.348268 systemd-networkd[1727]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a4:64.network.
May 13 23:58:48.393452 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:58:48.405508 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:58:48.416311 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:58:48.426428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:58:48.438200 systemd[1]: Reached target network.target - Network.
May 13 23:58:48.446286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:58:48.457290 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:58:48.466335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:58:48.477298 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:58:48.488290 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:58:48.499280 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:58:48.499299 systemd[1]: Reached target paths.target - Path Units.
May 13 23:58:48.507317 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:58:48.516358 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:58:48.526331 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:58:48.537281 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:58:48.545746 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:58:48.555966 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:58:48.565238 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:58:48.583494 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:58:48.593433 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:58:48.604907 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:58:48.615798 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:58:48.626561 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:58:48.636444 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:58:48.646288 systemd[1]: Reached target basic.target - Basic System.
May 13 23:58:48.654310 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:58:48.654328 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:58:48.654874 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:58:48.671703 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 23:58:48.691445 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:58:48.699860 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:58:48.704228 coreos-metadata[1770]: May 13 23:58:48.704 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 13 23:58:48.705017 coreos-metadata[1770]: May 13 23:58:48.704 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 13 23:58:48.709954 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:58:48.711252 dbus-daemon[1771]: [system] SELinux support is enabled
May 13 23:58:48.711733 jq[1774]: false
May 13 23:58:48.719351 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:58:48.720005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:58:48.727910 extend-filesystems[1776]: Found loop4
May 13 23:58:48.727910 extend-filesystems[1776]: Found loop5
May 13 23:58:48.727910 extend-filesystems[1776]: Found loop6
May 13 23:58:48.766441 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
May 13 23:58:48.766512 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (1651)
May 13 23:58:48.766524 extend-filesystems[1776]: Found loop7
May 13 23:58:48.766524 extend-filesystems[1776]: Found sda
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb1
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb2
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb3
May 13 23:58:48.766524 extend-filesystems[1776]: Found usr
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb4
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb6
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb7
May 13 23:58:48.766524 extend-filesystems[1776]: Found sdb9
May 13 23:58:48.766524 extend-filesystems[1776]: Checking size of /dev/sdb9
May 13 23:58:48.766524 extend-filesystems[1776]: Resized partition /dev/sdb9
May 13 23:58:48.730094 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:58:48.931060 dbus-daemon[1771]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 13 23:58:48.943449 extend-filesystems[1784]: resize2fs 1.47.2 (1-Jan-2025)
May 13 23:58:48.754039 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:58:48.784597 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:58:48.789791 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:58:48.803294 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
May 13 23:58:48.817635 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:58:48.817956 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:58:48.952783 update_engine[1801]: I20250513 23:58:48.859941 1801 main.cc:92] Flatcar Update Engine starting
May 13 23:58:48.952783 update_engine[1801]: I20250513 23:58:48.860519 1801 update_check_scheduler.cc:74] Next update check in 8m1s
May 13 23:58:48.844922 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 23:58:48.952960 jq[1802]: true
May 13 23:58:48.846607 systemd-logind[1796]: Watching system buttons on /dev/input/event3 (Power Button)
May 13 23:58:48.846617 systemd-logind[1796]: Watching system buttons on /dev/input/event2 (Sleep Button)
May 13 23:58:48.846632 systemd-logind[1796]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
May 13 23:58:48.953292 tar[1804]: linux-amd64/helm
May 13 23:58:48.846735 systemd-logind[1796]: New seat seat0.
May 13 23:58:48.953441 jq[1805]: true
May 13 23:58:48.852754 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 23:58:48.885333 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 23:58:48.900822 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 23:58:48.900944 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 23:58:48.901094 systemd[1]: motdgen.service: Deactivated successfully.
May 13 23:58:48.901200 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 23:58:48.920401 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 23:58:48.920510 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 23:58:48.933115 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
May 13 23:58:48.933269 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
May 13 23:58:48.952659 (ntainerd)[1806]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 23:58:48.957217 systemd[1]: Started update-engine.service - Update Engine.
May 13 23:58:48.971442 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 23:58:48.971600 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 23:58:48.981013 sshd_keygen[1800]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 23:58:48.982395 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 23:58:48.982474 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 23:58:48.994124 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 23:58:49.021024 bash[1834]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:58:49.022486 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 23:58:49.033672 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 23:58:49.045852 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 23:58:49.053133 locksmithd[1842]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 23:58:49.078957 systemd[1]: Starting sshkeys.service...
May 13 23:58:49.096416 systemd[1]: issuegen.service: Deactivated successfully.
May 13 23:58:49.096566 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 23:58:49.109659 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 23:58:49.122366 containerd[1806]: time="2025-05-13T23:58:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 23:58:49.128482 containerd[1806]: time="2025-05-13T23:58:49.122957853Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 23:58:49.128633 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 23:58:49.131242 containerd[1806]: time="2025-05-13T23:58:49.131216255Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="4.713µs"
May 13 23:58:49.131242 containerd[1806]: time="2025-05-13T23:58:49.131239411Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 23:58:49.131319 containerd[1806]: time="2025-05-13T23:58:49.131257822Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 23:58:49.131359 containerd[1806]: time="2025-05-13T23:58:49.131350772Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 23:58:49.131377 containerd[1806]: time="2025-05-13T23:58:49.131361540Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 23:58:49.131391 containerd[1806]: time="2025-05-13T23:58:49.131379191Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:58:49.131422 containerd[1806]: time="2025-05-13T23:58:49.131413532Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:58:49.131472 containerd[1806]: time="2025-05-13T23:58:49.131421954Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:58:49.131569 containerd[1806]: time="2025-05-13T23:58:49.131559515Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:58:49.131589 containerd[1806]: time="2025-05-13T23:58:49.131568429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:58:49.131589 containerd[1806]: time="2025-05-13T23:58:49.131574937Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:58:49.131589 containerd[1806]: time="2025-05-13T23:58:49.131582864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 23:58:49.131641 containerd[1806]: time="2025-05-13T23:58:49.131624354Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 23:58:49.131754 containerd[1806]: time="2025-05-13T23:58:49.131744733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:58:49.131776 containerd[1806]: time="2025-05-13T23:58:49.131766215Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:58:49.131776 containerd[1806]: time="2025-05-13T23:58:49.131772758Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 23:58:49.131803 containerd[1806]: time="2025-05-13T23:58:49.131788988Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:58:49.131943 containerd[1806]: time="2025-05-13T23:58:49.131935720Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:58:49.131973 containerd[1806]: time="2025-05-13T23:58:49.131967197Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:58:49.140233 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 23:58:49.142153 containerd[1806]: time="2025-05-13T23:58:49.142137655Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:58:49.142185 containerd[1806]: time="2025-05-13T23:58:49.142164097Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:58:49.142185 containerd[1806]: time="2025-05-13T23:58:49.142173392Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:58:49.142185 containerd[1806]: time="2025-05-13T23:58:49.142180837Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142187735Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142194615Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142201637Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142208533Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142217327Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142223691Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142229470Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:58:49.142241 containerd[1806]: time="2025-05-13T23:58:49.142236402Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142304398Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142316797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142324750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142330997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142337182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142343586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142349992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142355671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142362036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 23:58:49.142371 containerd[1806]: time="2025-05-13T23:58:49.142368405Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 23:58:49.142532 containerd[1806]: time="2025-05-13T23:58:49.142373997Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 23:58:49.142532 containerd[1806]: time="2025-05-13T23:58:49.142410063Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 23:58:49.142532 containerd[1806]: time="2025-05-13T23:58:49.142421705Z" level=info msg="Start snapshots syncer"
May 13 23:58:49.142532 containerd[1806]: time="2025-05-13T23:58:49.142441107Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 23:58:49.142607 containerd[1806]: time="2025-05-13T23:58:49.142588617Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 23:58:49.142676 containerd[1806]: time="2025-05-13T23:58:49.142619067Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 23:58:49.142676 containerd[1806]: time="2025-05-13T23:58:49.142657838Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 23:58:49.142717 containerd[1806]: time="2025-05-13T23:58:49.142706834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 23:58:49.142734 containerd[1806]: time="2025-05-13T23:58:49.142724666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 23:58:49.142734 containerd[1806]: time="2025-05-13T23:58:49.142732623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 23:58:49.142765 containerd[1806]: time="2025-05-13T23:58:49.142738353Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 23:58:49.142765 containerd[1806]: time="2025-05-13T23:58:49.142745630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 23:58:49.142765 containerd[1806]: time="2025-05-13T23:58:49.142751625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 23:58:49.142765 containerd[1806]: time="2025-05-13T23:58:49.142757643Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142770530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142777917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142785392Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142807089Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142815248Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142820433Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142825977Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142830659Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 23:58:49.142835 containerd[1806]: time="2025-05-13T23:58:49.142836051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 23:58:49.142979 containerd[1806]: time="2025-05-13T23:58:49.142841808Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 23:58:49.142979 containerd[1806]: time="2025-05-13T23:58:49.142851516Z" level=info msg="runtime interface created"
May 13 23:58:49.142979 containerd[1806]: time="2025-05-13T23:58:49.142854549Z" level=info msg="created NRI interface"
May 13 23:58:49.142979 containerd[1806]: time="2025-05-13T23:58:49.142859029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 23:58:49.142979 containerd[1806]: time="2025-05-13T23:58:49.142864687Z" level=info msg="Connect containerd service"
May 13 23:58:49.142979 containerd[1806]: time="2025-05-13T23:58:49.142879131Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 23:58:49.143206
containerd[1806]: time="2025-05-13T23:58:49.143195427Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:58:49.165258 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up May 13 23:58:49.174407 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:58:49.177254 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link May 13 23:58:49.180652 systemd-networkd[1727]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a4:65.network. May 13 23:58:49.185882 coreos-metadata[1871]: May 13 23:58:49.185 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 13 23:58:49.186739 coreos-metadata[1871]: May 13 23:58:49.186 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 13 23:58:49.190376 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:58:49.199072 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. May 13 23:58:49.208472 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:58:49.214506 tar[1804]: linux-amd64/LICENSE May 13 23:58:49.214546 tar[1804]: linux-amd64/README.md May 13 23:58:49.231234 containerd[1806]: time="2025-05-13T23:58:49.231181031Z" level=info msg="Start subscribing containerd event" May 13 23:58:49.231234 containerd[1806]: time="2025-05-13T23:58:49.231209481Z" level=info msg="Start recovering state" May 13 23:58:49.231316 containerd[1806]: time="2025-05-13T23:58:49.231239536Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:58:49.231316 containerd[1806]: time="2025-05-13T23:58:49.231282516Z" level=info msg="Start event monitor" May 13 23:58:49.231316 containerd[1806]: time="2025-05-13T23:58:49.231283522Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 23:58:49.231316 containerd[1806]: time="2025-05-13T23:58:49.231297574Z" level=info msg="Start cni network conf syncer for default" May 13 23:58:49.231370 containerd[1806]: time="2025-05-13T23:58:49.231319598Z" level=info msg="Start streaming server" May 13 23:58:49.231370 containerd[1806]: time="2025-05-13T23:58:49.231328410Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:58:49.231370 containerd[1806]: time="2025-05-13T23:58:49.231335606Z" level=info msg="runtime interface starting up..." May 13 23:58:49.231370 containerd[1806]: time="2025-05-13T23:58:49.231340992Z" level=info msg="starting plugins..." May 13 23:58:49.231370 containerd[1806]: time="2025-05-13T23:58:49.231353989Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:58:49.231697 containerd[1806]: time="2025-05-13T23:58:49.231680036Z" level=info msg="containerd successfully booted in 0.109528s" May 13 23:58:49.233343 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:58:49.243839 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:58:49.291304 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 May 13 23:58:49.318484 extend-filesystems[1784]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required May 13 23:58:49.318484 extend-filesystems[1784]: old_desc_blocks = 1, new_desc_blocks = 56 May 13 23:58:49.318484 extend-filesystems[1784]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. 
May 13 23:58:49.358690 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up May 13 23:58:49.358884 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link May 13 23:58:49.358905 extend-filesystems[1776]: Resized filesystem in /dev/sdb9 May 13 23:58:49.387429 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 13 23:58:49.318974 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:58:49.319095 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:58:49.358885 systemd-networkd[1727]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 13 23:58:49.360262 systemd-networkd[1727]: enp1s0f0np0: Link UP May 13 23:58:49.360496 systemd-networkd[1727]: enp1s0f0np0: Gained carrier May 13 23:58:49.380227 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:58:49.385728 systemd-networkd[1727]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:7e:a4:64.network. May 13 23:58:49.386309 systemd-networkd[1727]: enp1s0f1np1: Link UP May 13 23:58:49.386794 systemd-networkd[1727]: enp1s0f1np1: Gained carrier May 13 23:58:49.398899 systemd-networkd[1727]: bond0: Link UP May 13 23:58:49.399764 systemd-networkd[1727]: bond0: Gained carrier May 13 23:58:49.400423 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:58:49.401973 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:58:49.402873 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:58:49.403359 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:58:49.479299 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex May 13 23:58:49.479428 kernel: bond0: active interface up! 
May 13 23:58:49.595300 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex May 13 23:58:49.705165 coreos-metadata[1770]: May 13 23:58:49.705 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 13 23:58:50.186812 coreos-metadata[1871]: May 13 23:58:50.186 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 13 23:58:50.689550 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:58:51.073431 systemd-networkd[1727]: bond0: Gained IPv6LL May 13 23:58:51.074010 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:58:51.076472 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:58:51.091382 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:58:51.106123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:51.126602 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:58:51.149403 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:58:51.797657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:51.808760 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:52.308022 kubelet[1915]: E0513 23:58:52.307950 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:52.309137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:52.309215 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 23:58:52.309446 systemd[1]: kubelet.service: Consumed 570ms CPU time, 249.3M memory peak. May 13 23:58:53.018382 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 May 13 23:58:53.018530 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity May 13 23:58:53.425851 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:58:53.436102 systemd[1]: Started sshd@0-145.40.90.165:22-139.178.68.195:33526.service - OpenSSH per-connection server daemon (139.178.68.195:33526). May 13 23:58:53.497791 sshd[1936]: Accepted publickey for core from 139.178.68.195 port 33526 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:58:53.498761 sshd-session[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:53.505765 systemd-logind[1796]: New session 1 of user core. May 13 23:58:53.506621 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:58:53.516983 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:58:53.541084 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:58:53.553567 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:58:53.580558 (systemd)[1940]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:58:53.581951 systemd-logind[1796]: New session c1 of user core. May 13 23:58:53.669355 coreos-metadata[1871]: May 13 23:58:53.669 INFO Fetch successful May 13 23:58:53.682144 systemd[1940]: Queued start job for default target default.target. May 13 23:58:53.689882 systemd[1940]: Created slice app.slice - User Application Slice. May 13 23:58:53.689898 systemd[1940]: Reached target paths.target - Paths. May 13 23:58:53.689939 systemd[1940]: Reached target timers.target - Timers. May 13 23:58:53.690611 systemd[1940]: Starting dbus.socket - D-Bus User Message Bus Socket... 
May 13 23:58:53.696311 systemd[1940]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:58:53.696341 systemd[1940]: Reached target sockets.target - Sockets. May 13 23:58:53.696364 systemd[1940]: Reached target basic.target - Basic System. May 13 23:58:53.696386 systemd[1940]: Reached target default.target - Main User Target. May 13 23:58:53.696401 systemd[1940]: Startup finished in 111ms. May 13 23:58:53.696421 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:58:53.699360 unknown[1871]: wrote ssh authorized keys file for user: core May 13 23:58:53.708954 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:58:53.729183 update-ssh-keys[1948]: Updated "/home/core/.ssh/authorized_keys" May 13 23:58:53.729791 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 23:58:53.742258 systemd[1]: Finished sshkeys.service. May 13 23:58:53.778669 systemd[1]: Started sshd@1-145.40.90.165:22-139.178.68.195:39028.service - OpenSSH per-connection server daemon (139.178.68.195:39028). May 13 23:58:53.779527 coreos-metadata[1770]: May 13 23:58:53.779 INFO Fetch successful May 13 23:58:53.813960 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:58:53.824258 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 13 23:58:53.846586 sshd[1955]: Accepted publickey for core from 139.178.68.195 port 39028 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:58:53.847256 sshd-session[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:53.849936 systemd-logind[1796]: New session 2 of user core. May 13 23:58:53.850514 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 13 23:58:53.906052 sshd[1963]: Connection closed by 139.178.68.195 port 39028 May 13 23:58:53.906180 sshd-session[1955]: pam_unix(sshd:session): session closed for user core May 13 23:58:53.915007 systemd[1]: sshd@1-145.40.90.165:22-139.178.68.195:39028.service: Deactivated successfully. May 13 23:58:53.915719 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:58:53.916322 systemd-logind[1796]: Session 2 logged out. Waiting for processes to exit. May 13 23:58:53.916923 systemd[1]: Started sshd@2-145.40.90.165:22-139.178.68.195:39030.service - OpenSSH per-connection server daemon (139.178.68.195:39030). May 13 23:58:53.929874 systemd-logind[1796]: Removed session 2. May 13 23:58:53.973365 sshd[1968]: Accepted publickey for core from 139.178.68.195 port 39030 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:58:53.973971 sshd-session[1968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:53.976528 systemd-logind[1796]: New session 3 of user core. May 13 23:58:53.984350 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:58:54.040991 sshd[1971]: Connection closed by 139.178.68.195 port 39030 May 13 23:58:54.041145 sshd-session[1968]: pam_unix(sshd:session): session closed for user core May 13 23:58:54.042873 systemd[1]: sshd@2-145.40.90.165:22-139.178.68.195:39030.service: Deactivated successfully. May 13 23:58:54.043680 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:58:54.044060 systemd-logind[1796]: Session 3 logged out. Waiting for processes to exit. May 13 23:58:54.044629 systemd-logind[1796]: Removed session 3. May 13 23:58:54.207875 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 13 23:58:54.210370 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:58:54.210823 systemd[1]: Startup finished in 2.690s (kernel) + 22.066s (initrd) + 9.371s (userspace) = 34.127s. 
May 13 23:58:54.257428 login[1885]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 23:58:54.260992 systemd-logind[1796]: New session 4 of user core. May 13 23:58:54.263870 login[1884]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 23:58:54.276525 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:58:54.278980 systemd-logind[1796]: New session 5 of user core. May 13 23:58:54.279543 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:58:55.850045 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:59:02.324812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:59:02.327175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:02.569217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:02.571384 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:02.595171 kubelet[2012]: E0513 23:59:02.595076 2012 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:02.597174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:02.597266 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:02.597430 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.2M memory peak. May 13 23:59:04.057941 systemd[1]: Started sshd@3-145.40.90.165:22-139.178.68.195:35622.service - OpenSSH per-connection server daemon (139.178.68.195:35622). 
May 13 23:59:04.101038 sshd[2033]: Accepted publickey for core from 139.178.68.195 port 35622 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:59:04.101641 sshd-session[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.104208 systemd-logind[1796]: New session 6 of user core. May 13 23:59:04.112483 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:59:04.162475 sshd[2035]: Connection closed by 139.178.68.195 port 35622 May 13 23:59:04.162729 sshd-session[2033]: pam_unix(sshd:session): session closed for user core May 13 23:59:04.176386 systemd[1]: sshd@3-145.40.90.165:22-139.178.68.195:35622.service: Deactivated successfully. May 13 23:59:04.178001 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:59:04.179637 systemd-logind[1796]: Session 6 logged out. Waiting for processes to exit. May 13 23:59:04.181153 systemd[1]: Started sshd@4-145.40.90.165:22-139.178.68.195:35638.service - OpenSSH per-connection server daemon (139.178.68.195:35638). May 13 23:59:04.182348 systemd-logind[1796]: Removed session 6. May 13 23:59:04.269684 sshd[2040]: Accepted publickey for core from 139.178.68.195 port 35638 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:59:04.270305 sshd-session[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.273178 systemd-logind[1796]: New session 7 of user core. May 13 23:59:04.282515 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:59:04.328712 sshd[2043]: Connection closed by 139.178.68.195 port 35638 May 13 23:59:04.328963 sshd-session[2040]: pam_unix(sshd:session): session closed for user core May 13 23:59:04.341478 systemd[1]: sshd@4-145.40.90.165:22-139.178.68.195:35638.service: Deactivated successfully. May 13 23:59:04.343274 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:59:04.343939 systemd-logind[1796]: Session 7 logged out. 
Waiting for processes to exit. May 13 23:59:04.344792 systemd[1]: Started sshd@5-145.40.90.165:22-139.178.68.195:35654.service - OpenSSH per-connection server daemon (139.178.68.195:35654). May 13 23:59:04.345196 systemd-logind[1796]: Removed session 7. May 13 23:59:04.379713 sshd[2048]: Accepted publickey for core from 139.178.68.195 port 35654 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:59:04.380352 sshd-session[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.383066 systemd-logind[1796]: New session 8 of user core. May 13 23:59:04.401518 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:59:04.457779 sshd[2052]: Connection closed by 139.178.68.195 port 35654 May 13 23:59:04.458423 sshd-session[2048]: pam_unix(sshd:session): session closed for user core May 13 23:59:04.481813 systemd[1]: sshd@5-145.40.90.165:22-139.178.68.195:35654.service: Deactivated successfully. May 13 23:59:04.485548 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:59:04.487637 systemd-logind[1796]: Session 8 logged out. Waiting for processes to exit. May 13 23:59:04.491725 systemd[1]: Started sshd@6-145.40.90.165:22-139.178.68.195:35666.service - OpenSSH per-connection server daemon (139.178.68.195:35666). May 13 23:59:04.494343 systemd-logind[1796]: Removed session 8. May 13 23:59:04.582860 sshd[2057]: Accepted publickey for core from 139.178.68.195 port 35666 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:59:04.583441 sshd-session[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.586128 systemd-logind[1796]: New session 9 of user core. May 13 23:59:04.599532 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 13 23:59:04.655710 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:59:04.655853 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:04.679435 sudo[2061]: pam_unix(sudo:session): session closed for user root May 13 23:59:04.680373 sshd[2060]: Connection closed by 139.178.68.195 port 35666 May 13 23:59:04.680568 sshd-session[2057]: pam_unix(sshd:session): session closed for user core May 13 23:59:04.705464 systemd[1]: sshd@6-145.40.90.165:22-139.178.68.195:35666.service: Deactivated successfully. May 13 23:59:04.707223 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:59:04.708236 systemd-logind[1796]: Session 9 logged out. Waiting for processes to exit. May 13 23:59:04.710274 systemd[1]: Started sshd@7-145.40.90.165:22-139.178.68.195:35672.service - OpenSSH per-connection server daemon (139.178.68.195:35672). May 13 23:59:04.711641 systemd-logind[1796]: Removed session 9. May 13 23:59:04.795694 sshd[2066]: Accepted publickey for core from 139.178.68.195 port 35672 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:59:04.796305 sshd-session[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:04.799069 systemd-logind[1796]: New session 10 of user core. May 13 23:59:04.807542 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 13 23:59:04.857534 sudo[2071]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:59:04.857803 sudo[2071]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:04.860591 sudo[2071]: pam_unix(sudo:session): session closed for user root May 13 23:59:04.865119 sudo[2070]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:59:04.865394 sudo[2070]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:04.875744 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:59:04.918000 augenrules[2093]: No rules May 13 23:59:04.918538 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:59:04.918747 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:59:04.919558 sudo[2070]: pam_unix(sudo:session): session closed for user root May 13 23:59:04.920662 sshd[2069]: Connection closed by 139.178.68.195 port 35672 May 13 23:59:04.920934 sshd-session[2066]: pam_unix(sshd:session): session closed for user core May 13 23:59:04.945138 systemd[1]: sshd@7-145.40.90.165:22-139.178.68.195:35672.service: Deactivated successfully. May 13 23:59:04.948536 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:59:04.950537 systemd-logind[1796]: Session 10 logged out. Waiting for processes to exit. May 13 23:59:04.954131 systemd[1]: Started sshd@8-145.40.90.165:22-139.178.68.195:35674.service - OpenSSH per-connection server daemon (139.178.68.195:35674). May 13 23:59:04.956611 systemd-logind[1796]: Removed session 10. 
May 13 23:59:05.055686 sshd[2101]: Accepted publickey for core from 139.178.68.195 port 35674 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 13 23:59:05.056323 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:05.059043 systemd-logind[1796]: New session 11 of user core. May 13 23:59:05.069531 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:59:05.119297 sudo[2105]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:59:05.119585 sudo[2105]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:05.570594 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:59:05.583642 (dockerd)[2131]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:59:05.895533 dockerd[2131]: time="2025-05-13T23:59:05.895481639Z" level=info msg="Starting up" May 13 23:59:05.896927 dockerd[2131]: time="2025-05-13T23:59:05.896874835Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:59:05.942736 dockerd[2131]: time="2025-05-13T23:59:05.942690110Z" level=info msg="Loading containers: start." May 13 23:59:06.055262 kernel: Initializing XFRM netlink socket May 13 23:59:06.055440 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. May 13 23:59:06.136568 systemd-networkd[1727]: docker0: Link UP May 13 23:59:06.206360 dockerd[2131]: time="2025-05-13T23:59:06.206211722Z" level=info msg="Loading containers: done." 
May 13 23:59:06.224814 dockerd[2131]: time="2025-05-13T23:59:06.224793509Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:59:06.224905 dockerd[2131]: time="2025-05-13T23:59:06.224835443Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:59:06.224905 dockerd[2131]: time="2025-05-13T23:59:06.224890046Z" level=info msg="Daemon has completed initialization" May 13 23:59:06.225686 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3635243962-merged.mount: Deactivated successfully. May 13 23:59:06.239579 dockerd[2131]: time="2025-05-13T23:59:06.239526071Z" level=info msg="API listen on /run/docker.sock" May 13 23:59:06.239593 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:59:07.001532 systemd-resolved[1728]: Clock change detected. Flushing caches. May 13 23:59:07.001582 systemd-timesyncd[1729]: Contacted time server [2602:f9ab:2:3d00::a]:123 (2.flatcar.pool.ntp.org). May 13 23:59:07.001629 systemd-timesyncd[1729]: Initial clock synchronization to Tue 2025-05-13 23:59:07.001440 UTC. May 13 23:59:07.599868 containerd[1806]: time="2025-05-13T23:59:07.599846313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 23:59:08.148404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931173050.mount: Deactivated successfully. 
May 13 23:59:09.010621 containerd[1806]: time="2025-05-13T23:59:09.010561451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:09.010829 containerd[1806]: time="2025-05-13T23:59:09.010741771Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 13 23:59:09.011132 containerd[1806]: time="2025-05-13T23:59:09.011087235Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:09.012421 containerd[1806]: time="2025-05-13T23:59:09.012381067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:09.012946 containerd[1806]: time="2025-05-13T23:59:09.012902831Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.413034314s" May 13 23:59:09.012946 containerd[1806]: time="2025-05-13T23:59:09.012921480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 23:59:09.021926 containerd[1806]: time="2025-05-13T23:59:09.021908848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 23:59:10.093549 containerd[1806]: time="2025-05-13T23:59:10.093525477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:10.093801 containerd[1806]: time="2025-05-13T23:59:10.093779976Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 13 23:59:10.094224 containerd[1806]: time="2025-05-13T23:59:10.094184420Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:10.095355 containerd[1806]: time="2025-05-13T23:59:10.095314716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:10.095884 containerd[1806]: time="2025-05-13T23:59:10.095843059Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.073914975s" May 13 23:59:10.095884 containerd[1806]: time="2025-05-13T23:59:10.095859264Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 23:59:10.106002 containerd[1806]: time="2025-05-13T23:59:10.105982783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 23:59:11.012653 containerd[1806]: time="2025-05-13T23:59:11.012626495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:11.012868 containerd[1806]: 
time="2025-05-13T23:59:11.012843034Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 13 23:59:11.013269 containerd[1806]: time="2025-05-13T23:59:11.013252389Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:11.014511 containerd[1806]: time="2025-05-13T23:59:11.014471544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:11.015063 containerd[1806]: time="2025-05-13T23:59:11.015047569Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 909.044966ms" May 13 23:59:11.015116 containerd[1806]: time="2025-05-13T23:59:11.015064299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 23:59:11.024416 containerd[1806]: time="2025-05-13T23:59:11.024396381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 23:59:11.788737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671587698.mount: Deactivated successfully. 
May 13 23:59:11.973015 containerd[1806]: time="2025-05-13T23:59:11.972989953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:11.973254 containerd[1806]: time="2025-05-13T23:59:11.973231896Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 13 23:59:11.973565 containerd[1806]: time="2025-05-13T23:59:11.973552167Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:11.974277 containerd[1806]: time="2025-05-13T23:59:11.974237765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:11.974866 containerd[1806]: time="2025-05-13T23:59:11.974824393Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 950.406868ms" May 13 23:59:11.974866 containerd[1806]: time="2025-05-13T23:59:11.974842139Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 23:59:11.984254 containerd[1806]: time="2025-05-13T23:59:11.984210281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:59:12.501171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835702559.mount: Deactivated successfully. 
May 13 23:59:13.007139 containerd[1806]: time="2025-05-13T23:59:13.007114574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:13.007350 containerd[1806]: time="2025-05-13T23:59:13.007325906Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 23:59:13.007740 containerd[1806]: time="2025-05-13T23:59:13.007730134Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:13.008888 containerd[1806]: time="2025-05-13T23:59:13.008876915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:13.009569 containerd[1806]: time="2025-05-13T23:59:13.009516327Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.02528293s" May 13 23:59:13.009569 containerd[1806]: time="2025-05-13T23:59:13.009540132Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 23:59:13.018973 containerd[1806]: time="2025-05-13T23:59:13.018947653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 23:59:13.421593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:59:13.422549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 23:59:13.501421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160718413.mount: Deactivated successfully. May 13 23:59:13.610145 containerd[1806]: time="2025-05-13T23:59:13.610095276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:13.643257 containerd[1806]: time="2025-05-13T23:59:13.643214673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 13 23:59:13.663952 containerd[1806]: time="2025-05-13T23:59:13.663918083Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:13.664670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:13.665380 containerd[1806]: time="2025-05-13T23:59:13.665366870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:13.665799 containerd[1806]: time="2025-05-13T23:59:13.665785203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 646.81202ms" May 13 23:59:13.665839 containerd[1806]: time="2025-05-13T23:59:13.665802610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 23:59:13.667168 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:13.675766 containerd[1806]: 
time="2025-05-13T23:59:13.675711614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 23:59:13.695284 kubelet[2557]: E0513 23:59:13.695206 2557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:13.696807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:13.696908 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:13.697107 systemd[1]: kubelet.service: Consumed 108ms CPU time, 109.4M memory peak. May 13 23:59:14.157443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036224230.mount: Deactivated successfully. May 13 23:59:15.268571 containerd[1806]: time="2025-05-13T23:59:15.268538488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:15.268842 containerd[1806]: time="2025-05-13T23:59:15.268744814Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 13 23:59:15.269177 containerd[1806]: time="2025-05-13T23:59:15.269163851Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:15.270528 containerd[1806]: time="2025-05-13T23:59:15.270514528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:15.271137 containerd[1806]: time="2025-05-13T23:59:15.271122377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id 
\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.595389795s" May 13 23:59:15.271175 containerd[1806]: time="2025-05-13T23:59:15.271139542Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 23:59:17.285187 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:17.285315 systemd[1]: kubelet.service: Consumed 108ms CPU time, 109.4M memory peak. May 13 23:59:17.286663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:17.313048 systemd[1]: Reload requested from client PID 2769 ('systemctl') (unit session-11.scope)... May 13 23:59:17.313055 systemd[1]: Reloading... May 13 23:59:17.363961 zram_generator::config[2815]: No configuration found. May 13 23:59:17.435835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:59:17.518737 systemd[1]: Reloading finished in 205 ms. May 13 23:59:17.557520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:17.559384 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:17.559678 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:59:17.559785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:17.559805 systemd[1]: kubelet.service: Consumed 51ms CPU time, 83.5M memory peak. May 13 23:59:17.560636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 23:59:17.769300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:17.771704 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:59:17.805914 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:17.805914 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:59:17.805914 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:17.806157 kubelet[2886]: I0513 23:59:17.805913 2886 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:59:18.234809 kubelet[2886]: I0513 23:59:18.234762 2886 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:59:18.234809 kubelet[2886]: I0513 23:59:18.234777 2886 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:59:18.234896 kubelet[2886]: I0513 23:59:18.234890 2886 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:59:18.245216 kubelet[2886]: I0513 23:59:18.245167 2886 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:59:18.246310 kubelet[2886]: E0513 23:59:18.246278 2886 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post "https://145.40.90.165:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.260599 kubelet[2886]: I0513 23:59:18.260563 2886 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:59:18.260752 kubelet[2886]: I0513 23:59:18.260694 2886 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:59:18.260837 kubelet[2886]: I0513 23:59:18.260708 2886 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-b3bb28caaa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Expe
rimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:59:18.260918 kubelet[2886]: I0513 23:59:18.260844 2886 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:59:18.260918 kubelet[2886]: I0513 23:59:18.260849 2886 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:59:18.260918 kubelet[2886]: I0513 23:59:18.260903 2886 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:18.261678 kubelet[2886]: I0513 23:59:18.261643 2886 kubelet.go:400] "Attempting to sync node with API server" May 13 23:59:18.261678 kubelet[2886]: I0513 23:59:18.261652 2886 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:59:18.261678 kubelet[2886]: I0513 23:59:18.261662 2886 kubelet.go:312] "Adding apiserver pod source" May 13 23:59:18.261678 kubelet[2886]: I0513 23:59:18.261670 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:59:18.264276 kubelet[2886]: W0513 23:59:18.264229 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.90.165:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.264310 kubelet[2886]: W0513 23:59:18.264248 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b3bb28caaa&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.264310 kubelet[2886]: E0513 23:59:18.264280 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.165:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: 
connect: connection refused May 13 23:59:18.264310 kubelet[2886]: E0513 23:59:18.264294 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b3bb28caaa&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.265019 kubelet[2886]: I0513 23:59:18.265006 2886 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:59:18.266222 kubelet[2886]: I0513 23:59:18.266169 2886 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:59:18.266222 kubelet[2886]: W0513 23:59:18.266217 2886 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:59:18.266550 kubelet[2886]: I0513 23:59:18.266518 2886 server.go:1264] "Started kubelet" May 13 23:59:18.268991 kubelet[2886]: I0513 23:59:18.266567 2886 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:59:18.269160 kubelet[2886]: I0513 23:59:18.268860 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:59:18.269559 kubelet[2886]: I0513 23:59:18.269299 2886 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:59:18.273459 kubelet[2886]: E0513 23:59:18.273429 2886 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:59:18.273459 kubelet[2886]: I0513 23:59:18.273460 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:59:18.273553 kubelet[2886]: I0513 23:59:18.273518 2886 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:59:18.273553 kubelet[2886]: E0513 23:59:18.273520 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:18.273553 kubelet[2886]: I0513 23:59:18.273543 2886 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:59:18.273634 kubelet[2886]: I0513 23:59:18.273571 2886 reconciler.go:26] "Reconciler: start to sync state" May 13 23:59:18.273708 kubelet[2886]: I0513 23:59:18.273697 2886 server.go:455] "Adding debug handlers to kubelet server" May 13 23:59:18.273741 kubelet[2886]: E0513 23:59:18.273704 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b3bb28caaa?timeout=10s\": dial tcp 145.40.90.165:6443: connect: connection refused" interval="200ms" May 13 23:59:18.273778 kubelet[2886]: W0513 23:59:18.273733 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.273778 kubelet[2886]: E0513 23:59:18.273762 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.165:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.273843 kubelet[2886]: I0513 23:59:18.273813 2886 factory.go:221] Registration 
of the systemd container factory successfully May 13 23:59:18.273874 kubelet[2886]: I0513 23:59:18.273860 2886 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:59:18.274269 kubelet[2886]: E0513 23:59:18.274213 2886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.165:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.165:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-b3bb28caaa.183f3ba486df31a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-b3bb28caaa,UID:ci-4284.0.0-n-b3bb28caaa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-b3bb28caaa,},FirstTimestamp:2025-05-13 23:59:18.266491303 +0000 UTC m=+0.492975634,LastTimestamp:2025-05-13 23:59:18.266491303 +0000 UTC m=+0.492975634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-b3bb28caaa,}" May 13 23:59:18.274269 kubelet[2886]: I0513 23:59:18.274264 2886 factory.go:221] Registration of the containerd container factory successfully May 13 23:59:18.280794 kubelet[2886]: I0513 23:59:18.280779 2886 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:59:18.280794 kubelet[2886]: I0513 23:59:18.280791 2886 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:59:18.280889 kubelet[2886]: I0513 23:59:18.280806 2886 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:18.281388 kubelet[2886]: I0513 23:59:18.281372 2886 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 13 23:59:18.281669 kubelet[2886]: I0513 23:59:18.281661 2886 policy_none.go:49] "None policy: Start" May 13 23:59:18.281903 kubelet[2886]: I0513 23:59:18.281894 2886 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:59:18.281930 kubelet[2886]: I0513 23:59:18.281909 2886 state_mem.go:35] "Initializing new in-memory state store" May 13 23:59:18.281978 kubelet[2886]: I0513 23:59:18.281969 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:59:18.282019 kubelet[2886]: I0513 23:59:18.281985 2886 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:59:18.282019 kubelet[2886]: I0513 23:59:18.281995 2886 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:59:18.282055 kubelet[2886]: E0513 23:59:18.282016 2886 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:59:18.282247 kubelet[2886]: W0513 23:59:18.282225 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.282273 kubelet[2886]: E0513 23:59:18.282256 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.165:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:18.286355 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:59:18.300643 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 13 23:59:18.302463 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:59:18.322067 kubelet[2886]: I0513 23:59:18.322014 2886 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:59:18.322287 kubelet[2886]: I0513 23:59:18.322213 2886 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:59:18.322358 kubelet[2886]: I0513 23:59:18.322335 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:59:18.323049 kubelet[2886]: E0513 23:59:18.323035 2886 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:18.377864 kubelet[2886]: I0513 23:59:18.377804 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.378625 kubelet[2886]: E0513 23:59:18.378558 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.165:6443/api/v1/nodes\": dial tcp 145.40.90.165:6443: connect: connection refused" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.382874 kubelet[2886]: I0513 23:59:18.382811 2886 topology_manager.go:215] "Topology Admit Handler" podUID="ad5351f21afef10fe1082c7d0772729d" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.386485 kubelet[2886]: I0513 23:59:18.386432 2886 topology_manager.go:215] "Topology Admit Handler" podUID="d4592f8c43154697080346e9fcffd1f8" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.389868 kubelet[2886]: I0513 23:59:18.389810 2886 topology_manager.go:215] "Topology Admit Handler" podUID="932cfcbef2b73e71699ca04c6ccd45a0" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.404312 systemd[1]: 
Created slice kubepods-burstable-podad5351f21afef10fe1082c7d0772729d.slice - libcontainer container kubepods-burstable-podad5351f21afef10fe1082c7d0772729d.slice. May 13 23:59:18.438327 systemd[1]: Created slice kubepods-burstable-podd4592f8c43154697080346e9fcffd1f8.slice - libcontainer container kubepods-burstable-podd4592f8c43154697080346e9fcffd1f8.slice. May 13 23:59:18.460880 systemd[1]: Created slice kubepods-burstable-pod932cfcbef2b73e71699ca04c6ccd45a0.slice - libcontainer container kubepods-burstable-pod932cfcbef2b73e71699ca04c6ccd45a0.slice. May 13 23:59:18.474404 kubelet[2886]: I0513 23:59:18.474321 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad5351f21afef10fe1082c7d0772729d-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" (UID: \"ad5351f21afef10fe1082c7d0772729d\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.474703 kubelet[2886]: E0513 23:59:18.474600 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b3bb28caaa?timeout=10s\": dial tcp 145.40.90.165:6443: connect: connection refused" interval="400ms" May 13 23:59:18.575464 kubelet[2886]: I0513 23:59:18.575202 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad5351f21afef10fe1082c7d0772729d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" (UID: \"ad5351f21afef10fe1082c7d0772729d\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.575464 kubelet[2886]: I0513 23:59:18.575378 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.575998 kubelet[2886]: I0513 23:59:18.575476 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.575998 kubelet[2886]: I0513 23:59:18.575659 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.575998 kubelet[2886]: I0513 23:59:18.575770 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.575998 kubelet[2886]: I0513 23:59:18.575867 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/932cfcbef2b73e71699ca04c6ccd45a0-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-b3bb28caaa\" (UID: \"932cfcbef2b73e71699ca04c6ccd45a0\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.576519 
kubelet[2886]: I0513 23:59:18.576057 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad5351f21afef10fe1082c7d0772729d-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" (UID: \"ad5351f21afef10fe1082c7d0772729d\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.576519 kubelet[2886]: I0513 23:59:18.576164 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.583302 kubelet[2886]: I0513 23:59:18.583251 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.583943 kubelet[2886]: E0513 23:59:18.583868 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.165:6443/api/v1/nodes\": dial tcp 145.40.90.165:6443: connect: connection refused" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.734127 containerd[1806]: time="2025-05-13T23:59:18.734038762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-b3bb28caaa,Uid:ad5351f21afef10fe1082c7d0772729d,Namespace:kube-system,Attempt:0,}" May 13 23:59:18.754976 containerd[1806]: time="2025-05-13T23:59:18.754950262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-b3bb28caaa,Uid:d4592f8c43154697080346e9fcffd1f8,Namespace:kube-system,Attempt:0,}" May 13 23:59:18.765867 containerd[1806]: time="2025-05-13T23:59:18.765807193Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-b3bb28caaa,Uid:932cfcbef2b73e71699ca04c6ccd45a0,Namespace:kube-system,Attempt:0,}" May 13 23:59:18.875881 kubelet[2886]: E0513 23:59:18.875740 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.165:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b3bb28caaa?timeout=10s\": dial tcp 145.40.90.165:6443: connect: connection refused" interval="800ms" May 13 23:59:18.985598 kubelet[2886]: I0513 23:59:18.985562 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:18.985833 kubelet[2886]: E0513 23:59:18.985787 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.165:6443/api/v1/nodes\": dial tcp 145.40.90.165:6443: connect: connection refused" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:19.218059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563392091.mount: Deactivated successfully. 
May 13 23:59:19.219742 containerd[1806]: time="2025-05-13T23:59:19.219695388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:19.219875 containerd[1806]: time="2025-05-13T23:59:19.219855637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:59:19.220293 containerd[1806]: time="2025-05-13T23:59:19.220226806Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:19.221264 containerd[1806]: time="2025-05-13T23:59:19.221228197Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:19.221504 containerd[1806]: time="2025-05-13T23:59:19.221465838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:59:19.221794 containerd[1806]: time="2025-05-13T23:59:19.221741958Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:19.222077 containerd[1806]: time="2025-05-13T23:59:19.221994881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:59:19.223259 containerd[1806]: time="2025-05-13T23:59:19.223213184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.197662ms" May 13 23:59:19.223524 containerd[1806]: time="2025-05-13T23:59:19.223481591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:59:19.224338 containerd[1806]: time="2025-05-13T23:59:19.224298239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 456.654585ms" May 13 23:59:19.225393 containerd[1806]: time="2025-05-13T23:59:19.225357027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.540897ms" May 13 23:59:19.231853 containerd[1806]: time="2025-05-13T23:59:19.231808264Z" level=info msg="connecting to shim fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b" address="unix:///run/containerd/s/f09185e746b85ccd96220d8930354fef84ffe8ada41cf280388b3a845c284e7f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:19.231989 containerd[1806]: time="2025-05-13T23:59:19.231890626Z" level=info msg="connecting to shim 0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326" address="unix:///run/containerd/s/5830cf23f4bf861da655f2028f86d39c1eb078aba4b2391ab09b512bb25fdd86" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:19.233841 containerd[1806]: 
time="2025-05-13T23:59:19.233800201Z" level=info msg="connecting to shim 2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c" address="unix:///run/containerd/s/e3f2af4ee43c7089f7c26fb83be91e0e48c548419f5b92bb0e88e4404de14b07" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:19.238255 kubelet[2886]: W0513 23:59:19.238192 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b3bb28caaa&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:19.238334 kubelet[2886]: E0513 23:59:19.238264 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.90.165:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b3bb28caaa&limit=500&resourceVersion=0": dial tcp 145.40.90.165:6443: connect: connection refused May 13 23:59:19.259229 systemd[1]: Started cri-containerd-0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326.scope - libcontainer container 0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326. May 13 23:59:19.260145 systemd[1]: Started cri-containerd-2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c.scope - libcontainer container 2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c. May 13 23:59:19.260988 systemd[1]: Started cri-containerd-fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b.scope - libcontainer container fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b. 
May 13 23:59:19.286072 containerd[1806]: time="2025-05-13T23:59:19.286036839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-b3bb28caaa,Uid:932cfcbef2b73e71699ca04c6ccd45a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326\"" May 13 23:59:19.286317 containerd[1806]: time="2025-05-13T23:59:19.286212403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-b3bb28caaa,Uid:d4592f8c43154697080346e9fcffd1f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c\"" May 13 23:59:19.287085 containerd[1806]: time="2025-05-13T23:59:19.287070114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-b3bb28caaa,Uid:ad5351f21afef10fe1082c7d0772729d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b\"" May 13 23:59:19.288219 containerd[1806]: time="2025-05-13T23:59:19.288208949Z" level=info msg="CreateContainer within sandbox \"0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:59:19.288245 containerd[1806]: time="2025-05-13T23:59:19.288230700Z" level=info msg="CreateContainer within sandbox \"2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:59:19.288318 containerd[1806]: time="2025-05-13T23:59:19.288210674Z" level=info msg="CreateContainer within sandbox \"fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:59:19.293540 containerd[1806]: time="2025-05-13T23:59:19.293524283Z" level=info msg="Container 558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341: CDI devices 
from CRI Config.CDIDevices: []" May 13 23:59:19.294578 containerd[1806]: time="2025-05-13T23:59:19.294565107Z" level=info msg="Container f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:19.295157 containerd[1806]: time="2025-05-13T23:59:19.295143859Z" level=info msg="Container d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:19.296935 containerd[1806]: time="2025-05-13T23:59:19.296920755Z" level=info msg="CreateContainer within sandbox \"0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341\"" May 13 23:59:19.297402 containerd[1806]: time="2025-05-13T23:59:19.297376098Z" level=info msg="StartContainer for \"558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341\"" May 13 23:59:19.298119 containerd[1806]: time="2025-05-13T23:59:19.298084263Z" level=info msg="connecting to shim 558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341" address="unix:///run/containerd/s/5830cf23f4bf861da655f2028f86d39c1eb078aba4b2391ab09b512bb25fdd86" protocol=ttrpc version=3 May 13 23:59:19.298240 containerd[1806]: time="2025-05-13T23:59:19.298228573Z" level=info msg="CreateContainer within sandbox \"fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010\"" May 13 23:59:19.298406 containerd[1806]: time="2025-05-13T23:59:19.298376990Z" level=info msg="StartContainer for \"f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010\"" May 13 23:59:19.298957 containerd[1806]: time="2025-05-13T23:59:19.298945420Z" level=info msg="connecting to shim f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010" 
address="unix:///run/containerd/s/f09185e746b85ccd96220d8930354fef84ffe8ada41cf280388b3a845c284e7f" protocol=ttrpc version=3 May 13 23:59:19.299595 containerd[1806]: time="2025-05-13T23:59:19.299581351Z" level=info msg="CreateContainer within sandbox \"2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19\"" May 13 23:59:19.299749 containerd[1806]: time="2025-05-13T23:59:19.299737888Z" level=info msg="StartContainer for \"d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19\"" May 13 23:59:19.300327 containerd[1806]: time="2025-05-13T23:59:19.300299426Z" level=info msg="connecting to shim d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19" address="unix:///run/containerd/s/e3f2af4ee43c7089f7c26fb83be91e0e48c548419f5b92bb0e88e4404de14b07" protocol=ttrpc version=3 May 13 23:59:19.318247 systemd[1]: Started cri-containerd-558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341.scope - libcontainer container 558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341. May 13 23:59:19.318847 systemd[1]: Started cri-containerd-f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010.scope - libcontainer container f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010. May 13 23:59:19.320404 systemd[1]: Started cri-containerd-d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19.scope - libcontainer container d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19. 
May 13 23:59:19.345704 containerd[1806]: time="2025-05-13T23:59:19.345677043Z" level=info msg="StartContainer for \"f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010\" returns successfully" May 13 23:59:19.345827 containerd[1806]: time="2025-05-13T23:59:19.345802589Z" level=info msg="StartContainer for \"558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341\" returns successfully" May 13 23:59:19.347110 containerd[1806]: time="2025-05-13T23:59:19.347093263Z" level=info msg="StartContainer for \"d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19\" returns successfully" May 13 23:59:19.789176 kubelet[2886]: I0513 23:59:19.789161 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:19.862388 kubelet[2886]: E0513 23:59:19.862367 2886 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-b3bb28caaa\" not found" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:19.965089 kubelet[2886]: I0513 23:59:19.965037 2886 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:19.968850 kubelet[2886]: E0513 23:59:19.968835 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.069450 kubelet[2886]: E0513 23:59:20.069394 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.170285 kubelet[2886]: E0513 23:59:20.170266 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.270387 kubelet[2886]: E0513 23:59:20.270327 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.371538 kubelet[2886]: E0513 23:59:20.371303 2886 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.472316 kubelet[2886]: E0513 23:59:20.472234 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.573237 kubelet[2886]: E0513 23:59:20.573130 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.674304 kubelet[2886]: E0513 23:59:20.674198 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.775153 kubelet[2886]: E0513 23:59:20.775049 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.876147 kubelet[2886]: E0513 23:59:20.876047 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:20.977216 kubelet[2886]: E0513 23:59:20.977018 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:21.078224 kubelet[2886]: E0513 23:59:21.078135 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:21.179175 kubelet[2886]: E0513 23:59:21.179091 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:21.280281 kubelet[2886]: E0513 23:59:21.280075 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:21.380741 kubelet[2886]: E0513 23:59:21.380674 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not 
found" May 13 23:59:21.481725 kubelet[2886]: E0513 23:59:21.481654 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:22.098411 kubelet[2886]: W0513 23:59:22.098303 2886 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:22.210453 systemd[1]: Reload requested from client PID 3208 ('systemctl') (unit session-11.scope)... May 13 23:59:22.210462 systemd[1]: Reloading... May 13 23:59:22.255943 zram_generator::config[3254]: No configuration found. May 13 23:59:22.263309 kubelet[2886]: I0513 23:59:22.263262 2886 apiserver.go:52] "Watching apiserver" May 13 23:59:22.273989 kubelet[2886]: I0513 23:59:22.273921 2886 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:59:22.301013 kubelet[2886]: W0513 23:59:22.300997 2886 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:22.330103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:59:22.421762 systemd[1]: Reloading finished in 211 ms. May 13 23:59:22.444901 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:22.452926 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:59:22.453079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:22.453106 systemd[1]: kubelet.service: Consumed 895ms CPU time, 128.4M memory peak. May 13 23:59:22.454352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 23:59:22.725349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:22.727476 (kubelet)[3318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:59:22.748866 kubelet[3318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:22.748866 kubelet[3318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:59:22.748866 kubelet[3318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:22.748866 kubelet[3318]: I0513 23:59:22.748864 3318 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:59:22.751428 kubelet[3318]: I0513 23:59:22.751381 3318 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:59:22.751428 kubelet[3318]: I0513 23:59:22.751392 3318 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:59:22.751530 kubelet[3318]: I0513 23:59:22.751486 3318 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:59:22.752224 kubelet[3318]: I0513 23:59:22.752189 3318 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 23:59:22.753487 kubelet[3318]: I0513 23:59:22.753447 3318 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:59:22.762699 kubelet[3318]: I0513 23:59:22.762688 3318 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:59:22.762815 kubelet[3318]: I0513 23:59:22.762798 3318 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:59:22.762911 kubelet[3318]: I0513 23:59:22.762815 3318 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-b3bb28caaa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experiment
alMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:59:22.762971 kubelet[3318]: I0513 23:59:22.762920 3318 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:59:22.762971 kubelet[3318]: I0513 23:59:22.762927 3318 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:59:22.762971 kubelet[3318]: I0513 23:59:22.762959 3318 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:22.763033 kubelet[3318]: I0513 23:59:22.763011 3318 kubelet.go:400] "Attempting to sync node with API server" May 13 23:59:22.763033 kubelet[3318]: I0513 23:59:22.763017 3318 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:59:22.763033 kubelet[3318]: I0513 23:59:22.763028 3318 kubelet.go:312] "Adding apiserver pod source" May 13 23:59:22.763092 kubelet[3318]: I0513 23:59:22.763037 3318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:59:22.763406 kubelet[3318]: I0513 23:59:22.763386 3318 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:59:22.763574 kubelet[3318]: I0513 23:59:22.763506 3318 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:59:22.763827 kubelet[3318]: I0513 23:59:22.763820 3318 server.go:1264] "Started kubelet" May 13 23:59:22.763882 kubelet[3318]: I0513 23:59:22.763861 3318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:59:22.763927 kubelet[3318]: I0513 23:59:22.763874 3318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:59:22.764031 kubelet[3318]: I0513 23:59:22.764023 3318 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
May 13 23:59:22.765053 kubelet[3318]: I0513 23:59:22.765039 3318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:59:22.765124 kubelet[3318]: I0513 23:59:22.765085 3318 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:59:22.765124 kubelet[3318]: E0513 23:59:22.765086 3318 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b3bb28caaa\" not found" May 13 23:59:22.765232 kubelet[3318]: I0513 23:59:22.765190 3318 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:59:22.765773 kubelet[3318]: I0513 23:59:22.765762 3318 reconciler.go:26] "Reconciler: start to sync state" May 13 23:59:22.765814 kubelet[3318]: I0513 23:59:22.765795 3318 factory.go:221] Registration of the systemd container factory successfully May 13 23:59:22.766141 kubelet[3318]: I0513 23:59:22.766132 3318 server.go:455] "Adding debug handlers to kubelet server" May 13 23:59:22.766267 kubelet[3318]: E0513 23:59:22.766254 3318 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:59:22.766403 kubelet[3318]: I0513 23:59:22.766374 3318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:59:22.766995 kubelet[3318]: I0513 23:59:22.766982 3318 factory.go:221] Registration of the containerd container factory successfully May 13 23:59:22.771059 kubelet[3318]: I0513 23:59:22.771035 3318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:59:22.771679 kubelet[3318]: I0513 23:59:22.771667 3318 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:59:22.771713 kubelet[3318]: I0513 23:59:22.771690 3318 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:59:22.771713 kubelet[3318]: I0513 23:59:22.771704 3318 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:59:22.771763 kubelet[3318]: E0513 23:59:22.771737 3318 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:59:22.781129 kubelet[3318]: I0513 23:59:22.781088 3318 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:59:22.781129 kubelet[3318]: I0513 23:59:22.781096 3318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:59:22.781129 kubelet[3318]: I0513 23:59:22.781106 3318 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:22.781240 kubelet[3318]: I0513 23:59:22.781192 3318 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:59:22.781240 kubelet[3318]: I0513 23:59:22.781198 3318 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:59:22.781240 kubelet[3318]: I0513 23:59:22.781209 3318 policy_none.go:49] "None policy: Start" May 13 23:59:22.781507 kubelet[3318]: I0513 23:59:22.781459 3318 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:59:22.781507 kubelet[3318]: I0513 23:59:22.781469 3318 state_mem.go:35] "Initializing new in-memory state store" May 13 23:59:22.781556 kubelet[3318]: I0513 23:59:22.781532 3318 state_mem.go:75] "Updated machine memory state" May 13 23:59:22.783337 kubelet[3318]: I0513 23:59:22.783300 3318 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:59:22.783424 kubelet[3318]: I0513 23:59:22.783380 3318 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:59:22.783456 kubelet[3318]: I0513 23:59:22.783433 3318 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:59:22.871338 kubelet[3318]: I0513 23:59:22.871272 3318 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.872018 kubelet[3318]: I0513 23:59:22.871880 3318 topology_manager.go:215] "Topology Admit Handler" podUID="ad5351f21afef10fe1082c7d0772729d" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.872250 kubelet[3318]: I0513 23:59:22.872153 3318 topology_manager.go:215] "Topology Admit Handler" podUID="d4592f8c43154697080346e9fcffd1f8" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.872379 kubelet[3318]: I0513 23:59:22.872309 3318 topology_manager.go:215] "Topology Admit Handler" podUID="932cfcbef2b73e71699ca04c6ccd45a0" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.879637 kubelet[3318]: W0513 23:59:22.879571 3318 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:22.879637 kubelet[3318]: W0513 23:59:22.879618 3318 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:22.880012 kubelet[3318]: E0513 23:59:22.879764 3318 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.880576 kubelet[3318]: W0513 23:59:22.880527 3318 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:22.880695 kubelet[3318]: E0513 23:59:22.880669 3318 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.881064 kubelet[3318]: I0513 23:59:22.881020 3318 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.881224 kubelet[3318]: I0513 23:59:22.881183 3318 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966309 kubelet[3318]: I0513 23:59:22.966181 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966542 kubelet[3318]: I0513 23:59:22.966313 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966542 kubelet[3318]: I0513 23:59:22.966414 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/932cfcbef2b73e71699ca04c6ccd45a0-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-b3bb28caaa\" (UID: \"932cfcbef2b73e71699ca04c6ccd45a0\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966542 kubelet[3318]: I0513 23:59:22.966496 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ad5351f21afef10fe1082c7d0772729d-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" (UID: \"ad5351f21afef10fe1082c7d0772729d\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966980 kubelet[3318]: I0513 23:59:22.966560 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad5351f21afef10fe1082c7d0772729d-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" (UID: \"ad5351f21afef10fe1082c7d0772729d\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966980 kubelet[3318]: I0513 23:59:22.966685 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad5351f21afef10fe1082c7d0772729d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" (UID: \"ad5351f21afef10fe1082c7d0772729d\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966980 kubelet[3318]: I0513 23:59:22.966790 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.966980 kubelet[3318]: I0513 23:59:22.966867 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:22.967343 kubelet[3318]: I0513 
23:59:22.966967 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4592f8c43154697080346e9fcffd1f8-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" (UID: \"d4592f8c43154697080346e9fcffd1f8\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:23.764036 kubelet[3318]: I0513 23:59:23.764014 3318 apiserver.go:52] "Watching apiserver" May 13 23:59:23.765543 kubelet[3318]: I0513 23:59:23.765506 3318 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:59:23.778289 kubelet[3318]: W0513 23:59:23.778274 3318 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:23.778349 kubelet[3318]: W0513 23:59:23.778308 3318 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:23.778349 kubelet[3318]: E0513 23:59:23.778332 3318 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-n-b3bb28caaa\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:23.778399 kubelet[3318]: E0513 23:59:23.778306 3318 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4284.0.0-n-b3bb28caaa\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:23.778555 kubelet[3318]: W0513 23:59:23.778548 3318 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:23.778603 kubelet[3318]: E0513 23:59:23.778587 3318 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284.0.0-n-b3bb28caaa\" 
already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" May 13 23:59:23.786206 kubelet[3318]: I0513 23:59:23.786157 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-b3bb28caaa" podStartSLOduration=1.786132822 podStartE2EDuration="1.786132822s" podCreationTimestamp="2025-05-13 23:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:23.786123602 +0000 UTC m=+1.056868718" watchObservedRunningTime="2025-05-13 23:59:23.786132822 +0000 UTC m=+1.056877939" May 13 23:59:23.794472 kubelet[3318]: I0513 23:59:23.794390 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b3bb28caaa" podStartSLOduration=1.794362058 podStartE2EDuration="1.794362058s" podCreationTimestamp="2025-05-13 23:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:23.79060736 +0000 UTC m=+1.061352470" watchObservedRunningTime="2025-05-13 23:59:23.794362058 +0000 UTC m=+1.065107165" May 13 23:59:23.794472 kubelet[3318]: I0513 23:59:23.794468 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-b3bb28caaa" podStartSLOduration=1.794465738 podStartE2EDuration="1.794465738s" podCreationTimestamp="2025-05-13 23:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:23.794310919 +0000 UTC m=+1.065056029" watchObservedRunningTime="2025-05-13 23:59:23.794465738 +0000 UTC m=+1.065210845" May 13 23:59:27.039204 sudo[2105]: pam_unix(sudo:session): session closed for user root May 13 23:59:27.039875 sshd[2104]: Connection closed by 139.178.68.195 port 35674 May 13 
23:59:27.040061 sshd-session[2101]: pam_unix(sshd:session): session closed for user core May 13 23:59:27.041694 systemd[1]: sshd@8-145.40.90.165:22-139.178.68.195:35674.service: Deactivated successfully. May 13 23:59:27.042686 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:59:27.042777 systemd[1]: session-11.scope: Consumed 3.650s CPU time, 248M memory peak. May 13 23:59:27.043815 systemd-logind[1796]: Session 11 logged out. Waiting for processes to exit. May 13 23:59:27.044541 systemd-logind[1796]: Removed session 11. May 13 23:59:34.689821 update_engine[1801]: I20250513 23:59:34.689678 1801 update_attempter.cc:509] Updating boot flags... May 13 23:59:34.728945 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (3485) May 13 23:59:34.754943 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (3485) May 13 23:59:36.907066 kubelet[3318]: I0513 23:59:36.906997 3318 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:59:36.907290 kubelet[3318]: I0513 23:59:36.907267 3318 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:59:36.907312 containerd[1806]: time="2025-05-13T23:59:36.907165519Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:59:37.975387 kubelet[3318]: I0513 23:59:37.975306 3318 topology_manager.go:215] "Topology Admit Handler" podUID="b8d16a5e-d83e-4f63-9481-0ae8dcbb9226" podNamespace="kube-system" podName="kube-proxy-h2vxr" May 13 23:59:37.991788 systemd[1]: Created slice kubepods-besteffort-podb8d16a5e_d83e_4f63_9481_0ae8dcbb9226.slice - libcontainer container kubepods-besteffort-podb8d16a5e_d83e_4f63_9481_0ae8dcbb9226.slice. 
May 13 23:59:38.075548 kubelet[3318]: I0513 23:59:38.075482 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8d16a5e-d83e-4f63-9481-0ae8dcbb9226-kube-proxy\") pod \"kube-proxy-h2vxr\" (UID: \"b8d16a5e-d83e-4f63-9481-0ae8dcbb9226\") " pod="kube-system/kube-proxy-h2vxr" May 13 23:59:38.075860 kubelet[3318]: I0513 23:59:38.075570 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8d16a5e-d83e-4f63-9481-0ae8dcbb9226-xtables-lock\") pod \"kube-proxy-h2vxr\" (UID: \"b8d16a5e-d83e-4f63-9481-0ae8dcbb9226\") " pod="kube-system/kube-proxy-h2vxr" May 13 23:59:38.075860 kubelet[3318]: I0513 23:59:38.075640 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvl9\" (UniqueName: \"kubernetes.io/projected/b8d16a5e-d83e-4f63-9481-0ae8dcbb9226-kube-api-access-xmvl9\") pod \"kube-proxy-h2vxr\" (UID: \"b8d16a5e-d83e-4f63-9481-0ae8dcbb9226\") " pod="kube-system/kube-proxy-h2vxr" May 13 23:59:38.075860 kubelet[3318]: I0513 23:59:38.075701 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8d16a5e-d83e-4f63-9481-0ae8dcbb9226-lib-modules\") pod \"kube-proxy-h2vxr\" (UID: \"b8d16a5e-d83e-4f63-9481-0ae8dcbb9226\") " pod="kube-system/kube-proxy-h2vxr" May 13 23:59:38.154124 kubelet[3318]: I0513 23:59:38.154062 3318 topology_manager.go:215] "Topology Admit Handler" podUID="1f1d8cfe-0e0b-43e2-b789-346dec6558e3" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-zwqrq" May 13 23:59:38.165716 systemd[1]: Created slice kubepods-besteffort-pod1f1d8cfe_0e0b_43e2_b789_346dec6558e3.slice - libcontainer container kubepods-besteffort-pod1f1d8cfe_0e0b_43e2_b789_346dec6558e3.slice. 
May 13 23:59:38.176881 kubelet[3318]: I0513 23:59:38.176828 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9csl\" (UniqueName: \"kubernetes.io/projected/1f1d8cfe-0e0b-43e2-b789-346dec6558e3-kube-api-access-l9csl\") pod \"tigera-operator-797db67f8-zwqrq\" (UID: \"1f1d8cfe-0e0b-43e2-b789-346dec6558e3\") " pod="tigera-operator/tigera-operator-797db67f8-zwqrq" May 13 23:59:38.177182 kubelet[3318]: I0513 23:59:38.177030 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f1d8cfe-0e0b-43e2-b789-346dec6558e3-var-lib-calico\") pod \"tigera-operator-797db67f8-zwqrq\" (UID: \"1f1d8cfe-0e0b-43e2-b789-346dec6558e3\") " pod="tigera-operator/tigera-operator-797db67f8-zwqrq" May 13 23:59:38.311572 containerd[1806]: time="2025-05-13T23:59:38.311343783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2vxr,Uid:b8d16a5e-d83e-4f63-9481-0ae8dcbb9226,Namespace:kube-system,Attempt:0,}" May 13 23:59:38.319966 containerd[1806]: time="2025-05-13T23:59:38.319909842Z" level=info msg="connecting to shim 84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87" address="unix:///run/containerd/s/344e1a69264e8485bdc2b21e1bea71ea878098a51b40519a458b302ffe1f07f2" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:38.342235 systemd[1]: Started cri-containerd-84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87.scope - libcontainer container 84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87. 
May 13 23:59:38.353885 containerd[1806]: time="2025-05-13T23:59:38.353866458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2vxr,Uid:b8d16a5e-d83e-4f63-9481-0ae8dcbb9226,Namespace:kube-system,Attempt:0,} returns sandbox id \"84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87\"" May 13 23:59:38.355293 containerd[1806]: time="2025-05-13T23:59:38.355254856Z" level=info msg="CreateContainer within sandbox \"84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:59:38.359899 containerd[1806]: time="2025-05-13T23:59:38.359858177Z" level=info msg="Container 080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:38.363289 containerd[1806]: time="2025-05-13T23:59:38.363248133Z" level=info msg="CreateContainer within sandbox \"84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc\"" May 13 23:59:38.363600 containerd[1806]: time="2025-05-13T23:59:38.363555312Z" level=info msg="StartContainer for \"080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc\"" May 13 23:59:38.364424 containerd[1806]: time="2025-05-13T23:59:38.364411353Z" level=info msg="connecting to shim 080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc" address="unix:///run/containerd/s/344e1a69264e8485bdc2b21e1bea71ea878098a51b40519a458b302ffe1f07f2" protocol=ttrpc version=3 May 13 23:59:38.388326 systemd[1]: Started cri-containerd-080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc.scope - libcontainer container 080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc. 
May 13 23:59:38.448656 containerd[1806]: time="2025-05-13T23:59:38.448623051Z" level=info msg="StartContainer for \"080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc\" returns successfully" May 13 23:59:38.470508 containerd[1806]: time="2025-05-13T23:59:38.470474217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zwqrq,Uid:1f1d8cfe-0e0b-43e2-b789-346dec6558e3,Namespace:tigera-operator,Attempt:0,}" May 13 23:59:38.477703 containerd[1806]: time="2025-05-13T23:59:38.477651909Z" level=info msg="connecting to shim c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4" address="unix:///run/containerd/s/2337313b12d673c3db8eb2a898104b7811577113b890b6064bc183867652d81f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:38.494174 systemd[1]: Started cri-containerd-c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4.scope - libcontainer container c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4. May 13 23:59:38.530754 containerd[1806]: time="2025-05-13T23:59:38.530699259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-zwqrq,Uid:1f1d8cfe-0e0b-43e2-b789-346dec6558e3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4\"" May 13 23:59:38.531551 containerd[1806]: time="2025-05-13T23:59:38.531518214Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:59:40.258805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473804956.mount: Deactivated successfully. 
May 13 23:59:40.512694 containerd[1806]: time="2025-05-13T23:59:40.512633782Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:40.512914 containerd[1806]: time="2025-05-13T23:59:40.512792956Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 23:59:40.513147 containerd[1806]: time="2025-05-13T23:59:40.513134551Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:40.514066 containerd[1806]: time="2025-05-13T23:59:40.514034798Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:40.514709 containerd[1806]: time="2025-05-13T23:59:40.514695420Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.983158844s" May 13 23:59:40.514733 containerd[1806]: time="2025-05-13T23:59:40.514712874Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 23:59:40.515727 containerd[1806]: time="2025-05-13T23:59:40.515687206Z" level=info msg="CreateContainer within sandbox \"c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:59:40.518411 containerd[1806]: time="2025-05-13T23:59:40.518370909Z" level=info msg="Container 
ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:40.520757 containerd[1806]: time="2025-05-13T23:59:40.520718034Z" level=info msg="CreateContainer within sandbox \"c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660\"" May 13 23:59:40.521008 containerd[1806]: time="2025-05-13T23:59:40.520947503Z" level=info msg="StartContainer for \"ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660\"" May 13 23:59:40.521371 containerd[1806]: time="2025-05-13T23:59:40.521331713Z" level=info msg="connecting to shim ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660" address="unix:///run/containerd/s/2337313b12d673c3db8eb2a898104b7811577113b890b6064bc183867652d81f" protocol=ttrpc version=3 May 13 23:59:40.553099 systemd[1]: Started cri-containerd-ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660.scope - libcontainer container ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660. 
May 13 23:59:40.566249 containerd[1806]: time="2025-05-13T23:59:40.566229002Z" level=info msg="StartContainer for \"ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660\" returns successfully" May 13 23:59:40.842203 kubelet[3318]: I0513 23:59:40.841928 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h2vxr" podStartSLOduration=3.841890042 podStartE2EDuration="3.841890042s" podCreationTimestamp="2025-05-13 23:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:38.812835845 +0000 UTC m=+16.083580959" watchObservedRunningTime="2025-05-13 23:59:40.841890042 +0000 UTC m=+18.112635208" May 13 23:59:40.843522 kubelet[3318]: I0513 23:59:40.842181 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-zwqrq" podStartSLOduration=0.858334416 podStartE2EDuration="2.842160267s" podCreationTimestamp="2025-05-13 23:59:38 +0000 UTC" firstStartedPulling="2025-05-13 23:59:38.531318013 +0000 UTC m=+15.802063123" lastFinishedPulling="2025-05-13 23:59:40.515143863 +0000 UTC m=+17.785888974" observedRunningTime="2025-05-13 23:59:40.84215449 +0000 UTC m=+18.112899674" watchObservedRunningTime="2025-05-13 23:59:40.842160267 +0000 UTC m=+18.112905436" May 13 23:59:43.365432 kubelet[3318]: I0513 23:59:43.363039 3318 topology_manager.go:215] "Topology Admit Handler" podUID="aba479de-1a4c-40c8-beea-512d2a1f65e2" podNamespace="calico-system" podName="calico-typha-646db7d88f-jt4xj" May 13 23:59:43.380574 systemd[1]: Created slice kubepods-besteffort-podaba479de_1a4c_40c8_beea_512d2a1f65e2.slice - libcontainer container kubepods-besteffort-podaba479de_1a4c_40c8_beea_512d2a1f65e2.slice. 
May 13 23:59:43.397677 kubelet[3318]: I0513 23:59:43.397654 3318 topology_manager.go:215] "Topology Admit Handler" podUID="c20e3d18-f741-49ec-8c7a-0cdefb89145d" podNamespace="calico-system" podName="calico-node-zxsmg" May 13 23:59:43.401331 systemd[1]: Created slice kubepods-besteffort-podc20e3d18_f741_49ec_8c7a_0cdefb89145d.slice - libcontainer container kubepods-besteffort-podc20e3d18_f741_49ec_8c7a_0cdefb89145d.slice. May 13 23:59:43.415050 kubelet[3318]: I0513 23:59:43.415027 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-xtables-lock\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415050 kubelet[3318]: I0513 23:59:43.415050 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-policysync\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415145 kubelet[3318]: I0513 23:59:43.415060 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c20e3d18-f741-49ec-8c7a-0cdefb89145d-tigera-ca-bundle\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415145 kubelet[3318]: I0513 23:59:43.415069 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-var-lib-calico\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415145 
kubelet[3318]: I0513 23:59:43.415077 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-cni-bin-dir\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415145 kubelet[3318]: I0513 23:59:43.415086 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aba479de-1a4c-40c8-beea-512d2a1f65e2-typha-certs\") pod \"calico-typha-646db7d88f-jt4xj\" (UID: \"aba479de-1a4c-40c8-beea-512d2a1f65e2\") " pod="calico-system/calico-typha-646db7d88f-jt4xj" May 13 23:59:43.415145 kubelet[3318]: I0513 23:59:43.415095 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c20e3d18-f741-49ec-8c7a-0cdefb89145d-node-certs\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415230 kubelet[3318]: I0513 23:59:43.415103 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-cni-net-dir\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415230 kubelet[3318]: I0513 23:59:43.415112 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aba479de-1a4c-40c8-beea-512d2a1f65e2-tigera-ca-bundle\") pod \"calico-typha-646db7d88f-jt4xj\" (UID: \"aba479de-1a4c-40c8-beea-512d2a1f65e2\") " pod="calico-system/calico-typha-646db7d88f-jt4xj" May 13 23:59:43.415230 kubelet[3318]: I0513 23:59:43.415122 3318 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-var-run-calico\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415230 kubelet[3318]: I0513 23:59:43.415131 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-cni-log-dir\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415230 kubelet[3318]: I0513 23:59:43.415139 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chjv\" (UniqueName: \"kubernetes.io/projected/c20e3d18-f741-49ec-8c7a-0cdefb89145d-kube-api-access-2chjv\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415315 kubelet[3318]: I0513 23:59:43.415161 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdr5\" (UniqueName: \"kubernetes.io/projected/aba479de-1a4c-40c8-beea-512d2a1f65e2-kube-api-access-xhdr5\") pod \"calico-typha-646db7d88f-jt4xj\" (UID: \"aba479de-1a4c-40c8-beea-512d2a1f65e2\") " pod="calico-system/calico-typha-646db7d88f-jt4xj" May 13 23:59:43.415315 kubelet[3318]: I0513 23:59:43.415178 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-lib-modules\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.415315 kubelet[3318]: I0513 23:59:43.415188 3318 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c20e3d18-f741-49ec-8c7a-0cdefb89145d-flexvol-driver-host\") pod \"calico-node-zxsmg\" (UID: \"c20e3d18-f741-49ec-8c7a-0cdefb89145d\") " pod="calico-system/calico-node-zxsmg" May 13 23:59:43.519044 kubelet[3318]: E0513 23:59:43.518899 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.519044 kubelet[3318]: W0513 23:59:43.518984 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.519044 kubelet[3318]: E0513 23:59:43.519043 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.519776 kubelet[3318]: E0513 23:59:43.519687 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.519776 kubelet[3318]: W0513 23:59:43.519723 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.519776 kubelet[3318]: E0513 23:59:43.519760 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.523757 kubelet[3318]: E0513 23:59:43.523670 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.523757 kubelet[3318]: W0513 23:59:43.523717 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.523757 kubelet[3318]: E0513 23:59:43.523768 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.524477 kubelet[3318]: E0513 23:59:43.524424 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.524681 kubelet[3318]: W0513 23:59:43.524471 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.524681 kubelet[3318]: E0513 23:59:43.524535 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.544951 kubelet[3318]: I0513 23:59:43.544889 3318 topology_manager.go:215] "Topology Admit Handler" podUID="40b59446-8721-42bb-b5e6-453736118f74" podNamespace="calico-system" podName="csi-node-driver-2pp6m" May 13 23:59:43.545474 kubelet[3318]: E0513 23:59:43.545430 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2pp6m" podUID="40b59446-8721-42bb-b5e6-453736118f74" May 13 23:59:43.546762 kubelet[3318]: E0513 23:59:43.546728 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.546762 kubelet[3318]: W0513 23:59:43.546758 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.547159 kubelet[3318]: E0513 23:59:43.546810 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.547855 kubelet[3318]: E0513 23:59:43.547828 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.547855 kubelet[3318]: W0513 23:59:43.547854 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.548052 kubelet[3318]: E0513 23:59:43.547890 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.607235 kubelet[3318]: E0513 23:59:43.607209 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.607235 kubelet[3318]: W0513 23:59:43.607232 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.607401 kubelet[3318]: E0513 23:59:43.607251 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.607475 kubelet[3318]: E0513 23:59:43.607461 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.607513 kubelet[3318]: W0513 23:59:43.607475 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.607513 kubelet[3318]: E0513 23:59:43.607487 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.607730 kubelet[3318]: E0513 23:59:43.607718 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.607772 kubelet[3318]: W0513 23:59:43.607731 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.607772 kubelet[3318]: E0513 23:59:43.607743 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.607965 kubelet[3318]: E0513 23:59:43.607954 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.607965 kubelet[3318]: W0513 23:59:43.607964 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.608047 kubelet[3318]: E0513 23:59:43.607974 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.608207 kubelet[3318]: E0513 23:59:43.608195 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.608248 kubelet[3318]: W0513 23:59:43.608208 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.608248 kubelet[3318]: E0513 23:59:43.608220 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.608478 kubelet[3318]: E0513 23:59:43.608441 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.608478 kubelet[3318]: W0513 23:59:43.608454 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.608478 kubelet[3318]: E0513 23:59:43.608466 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.608675 kubelet[3318]: E0513 23:59:43.608663 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.608675 kubelet[3318]: W0513 23:59:43.608675 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.608754 kubelet[3318]: E0513 23:59:43.608684 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.608840 kubelet[3318]: E0513 23:59:43.608829 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.608840 kubelet[3318]: W0513 23:59:43.608839 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.608938 kubelet[3318]: E0513 23:59:43.608848 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.609150 kubelet[3318]: E0513 23:59:43.609133 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.609150 kubelet[3318]: W0513 23:59:43.609147 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.609273 kubelet[3318]: E0513 23:59:43.609165 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.609351 kubelet[3318]: E0513 23:59:43.609337 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.609351 kubelet[3318]: W0513 23:59:43.609348 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.609460 kubelet[3318]: E0513 23:59:43.609362 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.609554 kubelet[3318]: E0513 23:59:43.609542 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.609554 kubelet[3318]: W0513 23:59:43.609552 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.609669 kubelet[3318]: E0513 23:59:43.609566 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.609769 kubelet[3318]: E0513 23:59:43.609756 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.609769 kubelet[3318]: W0513 23:59:43.609767 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.609889 kubelet[3318]: E0513 23:59:43.609782 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.609987 kubelet[3318]: E0513 23:59:43.609973 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.609987 kubelet[3318]: W0513 23:59:43.609984 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.610094 kubelet[3318]: E0513 23:59:43.609997 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.610184 kubelet[3318]: E0513 23:59:43.610172 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.610184 kubelet[3318]: W0513 23:59:43.610182 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.610301 kubelet[3318]: E0513 23:59:43.610203 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.610380 kubelet[3318]: E0513 23:59:43.610368 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.610438 kubelet[3318]: W0513 23:59:43.610380 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.610438 kubelet[3318]: E0513 23:59:43.610393 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.610595 kubelet[3318]: E0513 23:59:43.610583 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.610595 kubelet[3318]: W0513 23:59:43.610593 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.610712 kubelet[3318]: E0513 23:59:43.610606 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.610819 kubelet[3318]: E0513 23:59:43.610806 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.610819 kubelet[3318]: W0513 23:59:43.610817 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.610947 kubelet[3318]: E0513 23:59:43.610830 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.611028 kubelet[3318]: E0513 23:59:43.611016 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.611094 kubelet[3318]: W0513 23:59:43.611027 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.611094 kubelet[3318]: E0513 23:59:43.611041 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.611222 kubelet[3318]: E0513 23:59:43.611211 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.611222 kubelet[3318]: W0513 23:59:43.611221 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.611336 kubelet[3318]: E0513 23:59:43.611236 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.611461 kubelet[3318]: E0513 23:59:43.611448 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.611461 kubelet[3318]: W0513 23:59:43.611459 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.611565 kubelet[3318]: E0513 23:59:43.611472 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.616774 kubelet[3318]: E0513 23:59:43.616707 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.616774 kubelet[3318]: W0513 23:59:43.616722 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.616774 kubelet[3318]: E0513 23:59:43.616735 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.616774 kubelet[3318]: I0513 23:59:43.616761 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40b59446-8721-42bb-b5e6-453736118f74-kubelet-dir\") pod \"csi-node-driver-2pp6m\" (UID: \"40b59446-8721-42bb-b5e6-453736118f74\") " pod="calico-system/csi-node-driver-2pp6m" May 13 23:59:43.616975 kubelet[3318]: E0513 23:59:43.616959 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.616975 kubelet[3318]: W0513 23:59:43.616971 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.617109 kubelet[3318]: E0513 23:59:43.616984 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.617109 kubelet[3318]: I0513 23:59:43.617006 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40b59446-8721-42bb-b5e6-453736118f74-registration-dir\") pod \"csi-node-driver-2pp6m\" (UID: \"40b59446-8721-42bb-b5e6-453736118f74\") " pod="calico-system/csi-node-driver-2pp6m" May 13 23:59:43.617210 kubelet[3318]: E0513 23:59:43.617185 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.617210 kubelet[3318]: W0513 23:59:43.617195 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.617210 kubelet[3318]: E0513 23:59:43.617207 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.617354 kubelet[3318]: I0513 23:59:43.617231 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40b59446-8721-42bb-b5e6-453736118f74-varrun\") pod \"csi-node-driver-2pp6m\" (UID: \"40b59446-8721-42bb-b5e6-453736118f74\") " pod="calico-system/csi-node-driver-2pp6m" May 13 23:59:43.617466 kubelet[3318]: E0513 23:59:43.617451 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.617466 kubelet[3318]: W0513 23:59:43.617464 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.617578 kubelet[3318]: E0513 23:59:43.617479 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.617668 kubelet[3318]: E0513 23:59:43.617652 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.617668 kubelet[3318]: W0513 23:59:43.617667 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.617762 kubelet[3318]: E0513 23:59:43.617686 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.617914 kubelet[3318]: E0513 23:59:43.617901 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.617914 kubelet[3318]: W0513 23:59:43.617912 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.618035 kubelet[3318]: E0513 23:59:43.617924 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.618130 kubelet[3318]: E0513 23:59:43.618117 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.618130 kubelet[3318]: W0513 23:59:43.618128 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.618239 kubelet[3318]: E0513 23:59:43.618140 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.618321 kubelet[3318]: E0513 23:59:43.618308 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.618321 kubelet[3318]: W0513 23:59:43.618318 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.618438 kubelet[3318]: E0513 23:59:43.618330 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.618438 kubelet[3318]: I0513 23:59:43.618349 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40b59446-8721-42bb-b5e6-453736118f74-socket-dir\") pod \"csi-node-driver-2pp6m\" (UID: \"40b59446-8721-42bb-b5e6-453736118f74\") " pod="calico-system/csi-node-driver-2pp6m" May 13 23:59:43.618533 kubelet[3318]: E0513 23:59:43.618521 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.618593 kubelet[3318]: W0513 23:59:43.618535 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.618593 kubelet[3318]: E0513 23:59:43.618561 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.618665 kubelet[3318]: I0513 23:59:43.618604 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68nm\" (UniqueName: \"kubernetes.io/projected/40b59446-8721-42bb-b5e6-453736118f74-kube-api-access-f68nm\") pod \"csi-node-driver-2pp6m\" (UID: \"40b59446-8721-42bb-b5e6-453736118f74\") " pod="calico-system/csi-node-driver-2pp6m" May 13 23:59:43.618718 kubelet[3318]: E0513 23:59:43.618693 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.618718 kubelet[3318]: W0513 23:59:43.618702 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.618817 kubelet[3318]: E0513 23:59:43.618721 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.618881 kubelet[3318]: E0513 23:59:43.618860 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.618881 kubelet[3318]: W0513 23:59:43.618869 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.618881 kubelet[3318]: E0513 23:59:43.618880 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.619072 kubelet[3318]: E0513 23:59:43.619059 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.619132 kubelet[3318]: W0513 23:59:43.619072 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.619132 kubelet[3318]: E0513 23:59:43.619090 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.619248 kubelet[3318]: E0513 23:59:43.619237 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.619248 kubelet[3318]: W0513 23:59:43.619246 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.619329 kubelet[3318]: E0513 23:59:43.619256 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.619427 kubelet[3318]: E0513 23:59:43.619415 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.619467 kubelet[3318]: W0513 23:59:43.619429 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.619467 kubelet[3318]: E0513 23:59:43.619444 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.619655 kubelet[3318]: E0513 23:59:43.619644 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.619698 kubelet[3318]: W0513 23:59:43.619656 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.619698 kubelet[3318]: E0513 23:59:43.619671 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.684421 containerd[1806]: time="2025-05-13T23:59:43.684305895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646db7d88f-jt4xj,Uid:aba479de-1a4c-40c8-beea-512d2a1f65e2,Namespace:calico-system,Attempt:0,}" May 13 23:59:43.692555 containerd[1806]: time="2025-05-13T23:59:43.692532911Z" level=info msg="connecting to shim 5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10" address="unix:///run/containerd/s/130c00fe215b3a466b790fd6e00915799f04fb5d167240e5ec313be8e43c859d" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:43.702726 containerd[1806]: time="2025-05-13T23:59:43.702698282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxsmg,Uid:c20e3d18-f741-49ec-8c7a-0cdefb89145d,Namespace:calico-system,Attempt:0,}" May 13 23:59:43.709788 containerd[1806]: time="2025-05-13T23:59:43.709766957Z" level=info msg="connecting to shim ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8" address="unix:///run/containerd/s/96a63e0ee360359213bc4134065da56a172addb47bc3f35081f98115d140c44f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:43.714065 systemd[1]: Started cri-containerd-5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10.scope - libcontainer container 5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10. May 13 23:59:43.717255 systemd[1]: Started cri-containerd-ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8.scope - libcontainer container ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8. 
May 13 23:59:43.719890 kubelet[3318]: E0513 23:59:43.719873 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.719890 kubelet[3318]: W0513 23:59:43.719886 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.719994 kubelet[3318]: E0513 23:59:43.719899 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.720058 kubelet[3318]: E0513 23:59:43.720050 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720058 kubelet[3318]: W0513 23:59:43.720055 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720156 kubelet[3318]: E0513 23:59:43.720062 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.720156 kubelet[3318]: E0513 23:59:43.720143 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720156 kubelet[3318]: W0513 23:59:43.720147 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720156 kubelet[3318]: E0513 23:59:43.720152 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.720281 kubelet[3318]: E0513 23:59:43.720236 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720281 kubelet[3318]: W0513 23:59:43.720242 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720281 kubelet[3318]: E0513 23:59:43.720248 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.720372 kubelet[3318]: E0513 23:59:43.720340 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720372 kubelet[3318]: W0513 23:59:43.720347 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720372 kubelet[3318]: E0513 23:59:43.720356 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.720457 kubelet[3318]: E0513 23:59:43.720427 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720457 kubelet[3318]: W0513 23:59:43.720431 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720457 kubelet[3318]: E0513 23:59:43.720437 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.720548 kubelet[3318]: E0513 23:59:43.720503 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720548 kubelet[3318]: W0513 23:59:43.720509 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720548 kubelet[3318]: E0513 23:59:43.720518 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.720636 kubelet[3318]: E0513 23:59:43.720627 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720636 kubelet[3318]: W0513 23:59:43.720632 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720694 kubelet[3318]: E0513 23:59:43.720639 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.720746 kubelet[3318]: E0513 23:59:43.720737 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720746 kubelet[3318]: W0513 23:59:43.720744 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720809 kubelet[3318]: E0513 23:59:43.720752 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.720842 kubelet[3318]: E0513 23:59:43.720838 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720876 kubelet[3318]: W0513 23:59:43.720844 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720876 kubelet[3318]: E0513 23:59:43.720853 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.720941 kubelet[3318]: E0513 23:59:43.720938 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.720977 kubelet[3318]: W0513 23:59:43.720944 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.720977 kubelet[3318]: E0513 23:59:43.720953 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:59:43.728039 kubelet[3318]: E0513 23:59:43.728024 3318 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:59:43.728039 kubelet[3318]: W0513 23:59:43.728036 3318 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:59:43.728138 kubelet[3318]: E0513 23:59:43.728051 3318 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:59:43.730461 containerd[1806]: time="2025-05-13T23:59:43.730443047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxsmg,Uid:c20e3d18-f741-49ec-8c7a-0cdefb89145d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8\"" May 13 23:59:43.731214 containerd[1806]: time="2025-05-13T23:59:43.731203538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:59:43.739992 containerd[1806]: time="2025-05-13T23:59:43.739940372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646db7d88f-jt4xj,Uid:aba479de-1a4c-40c8-beea-512d2a1f65e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10\"" May 13 23:59:45.164531 containerd[1806]: time="2025-05-13T23:59:45.164472099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:45.164733 containerd[1806]: time="2025-05-13T23:59:45.164674074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 23:59:45.165044 
containerd[1806]: time="2025-05-13T23:59:45.165005944Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:45.165762 containerd[1806]: time="2025-05-13T23:59:45.165752043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:45.166189 containerd[1806]: time="2025-05-13T23:59:45.166153609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.434934825s" May 13 23:59:45.166189 containerd[1806]: time="2025-05-13T23:59:45.166169006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 23:59:45.166721 containerd[1806]: time="2025-05-13T23:59:45.166712424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:59:45.167290 containerd[1806]: time="2025-05-13T23:59:45.167277893Z" level=info msg="CreateContainer within sandbox \"ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:59:45.170662 containerd[1806]: time="2025-05-13T23:59:45.170618346Z" level=info msg="Container 2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:45.174164 containerd[1806]: time="2025-05-13T23:59:45.174152368Z" level=info 
msg="CreateContainer within sandbox \"ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256\"" May 13 23:59:45.174425 containerd[1806]: time="2025-05-13T23:59:45.174379296Z" level=info msg="StartContainer for \"2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256\"" May 13 23:59:45.175246 containerd[1806]: time="2025-05-13T23:59:45.175200632Z" level=info msg="connecting to shim 2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256" address="unix:///run/containerd/s/96a63e0ee360359213bc4134065da56a172addb47bc3f35081f98115d140c44f" protocol=ttrpc version=3 May 13 23:59:45.196122 systemd[1]: Started cri-containerd-2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256.scope - libcontainer container 2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256. May 13 23:59:45.216232 containerd[1806]: time="2025-05-13T23:59:45.216208679Z" level=info msg="StartContainer for \"2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256\" returns successfully" May 13 23:59:45.220600 systemd[1]: cri-containerd-2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256.scope: Deactivated successfully. 
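The kubelet driver-call failures logged at 23:59:43 above all follow the same mechanism: kubelet execs the FlexVolume driver binary (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the argument `init` and unmarshals its stdout as JSON, so a missing executable yields empty output and the paired "executable file not found in $PATH" and "unexpected end of JSON input" errors. A minimal Python sketch of that probe logic (kubelet's real implementation is Go in driver-call.go; the function name here is illustrative):

```python
import json
import subprocess

def probe_flexvolume_driver(driver_path: str) -> dict:
    """Sketch of kubelet's FlexVolume driver call: exec `<driver> init`
    and parse stdout as JSON.

    A missing executable leaves stdout empty, which then fails JSON
    parsing -- the analogue of Go's "unexpected end of JSON input".
    """
    try:
        out = subprocess.run([driver_path, "init"],
                             capture_output=True, text=True,
                             check=False).stdout
    except FileNotFoundError:
        # kubelet logs: executable file not found in $PATH, output: ""
        out = ""
    try:
        # a working driver prints e.g. {"status": "Success", ...}
        return json.loads(out)
    except json.JSONDecodeError:
        raise RuntimeError(
            f"Failed to unmarshal output for command: init, output: {out!r}")
```

This is a sketch under the assumption that the driver speaks the FlexVolume call convention (JSON status object on stdout); it is not kubelet's code, only an illustration of why an absent `uds` binary produces both log messages at once.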
May 13 23:59:45.222111 containerd[1806]: time="2025-05-13T23:59:45.222090938Z" level=info msg="received exit event container_id:\"2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256\" id:\"2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256\" pid:4000 exited_at:{seconds:1747180785 nanos:221838463}" May 13 23:59:45.222160 containerd[1806]: time="2025-05-13T23:59:45.222142838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256\" id:\"2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256\" pid:4000 exited_at:{seconds:1747180785 nanos:221838463}" May 13 23:59:45.234917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256-rootfs.mount: Deactivated successfully. May 13 23:59:45.773031 kubelet[3318]: E0513 23:59:45.772894 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2pp6m" podUID="40b59446-8721-42bb-b5e6-453736118f74" May 13 23:59:46.868247 containerd[1806]: time="2025-05-13T23:59:46.868225329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:46.868469 containerd[1806]: time="2025-05-13T23:59:46.868446553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 23:59:46.868830 containerd[1806]: time="2025-05-13T23:59:46.868819485Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:46.869641 containerd[1806]: time="2025-05-13T23:59:46.869631946Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:46.870011 containerd[1806]: time="2025-05-13T23:59:46.870000418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.703274418s" May 13 23:59:46.870035 containerd[1806]: time="2025-05-13T23:59:46.870015509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 23:59:46.870449 containerd[1806]: time="2025-05-13T23:59:46.870408666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:59:46.873430 containerd[1806]: time="2025-05-13T23:59:46.873411010Z" level=info msg="CreateContainer within sandbox \"5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:59:46.876920 containerd[1806]: time="2025-05-13T23:59:46.876875678Z" level=info msg="Container 37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:46.879780 containerd[1806]: time="2025-05-13T23:59:46.879735920Z" level=info msg="CreateContainer within sandbox \"5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522\"" May 13 23:59:46.879991 containerd[1806]: time="2025-05-13T23:59:46.879978236Z" level=info msg="StartContainer for 
\"37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522\"" May 13 23:59:46.880533 containerd[1806]: time="2025-05-13T23:59:46.880493331Z" level=info msg="connecting to shim 37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522" address="unix:///run/containerd/s/130c00fe215b3a466b790fd6e00915799f04fb5d167240e5ec313be8e43c859d" protocol=ttrpc version=3 May 13 23:59:46.899121 systemd[1]: Started cri-containerd-37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522.scope - libcontainer container 37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522. May 13 23:59:46.928039 containerd[1806]: time="2025-05-13T23:59:46.928017398Z" level=info msg="StartContainer for \"37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522\" returns successfully" May 13 23:59:47.773039 kubelet[3318]: E0513 23:59:47.772886 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2pp6m" podUID="40b59446-8721-42bb-b5e6-453736118f74" May 13 23:59:47.847075 kubelet[3318]: I0513 23:59:47.847031 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-646db7d88f-jt4xj" podStartSLOduration=1.717064435 podStartE2EDuration="4.847013329s" podCreationTimestamp="2025-05-13 23:59:43 +0000 UTC" firstStartedPulling="2025-05-13 23:59:43.740414481 +0000 UTC m=+21.011159592" lastFinishedPulling="2025-05-13 23:59:46.870363376 +0000 UTC m=+24.141108486" observedRunningTime="2025-05-13 23:59:47.846952863 +0000 UTC m=+25.117697995" watchObservedRunningTime="2025-05-13 23:59:47.847013329 +0000 UTC m=+25.117758441" May 13 23:59:48.841612 kubelet[3318]: I0513 23:59:48.841595 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:59:49.406436 containerd[1806]: 
time="2025-05-13T23:59:49.406409834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:49.406703 containerd[1806]: time="2025-05-13T23:59:49.406624301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 23:59:49.406922 containerd[1806]: time="2025-05-13T23:59:49.406909027Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:49.407942 containerd[1806]: time="2025-05-13T23:59:49.407898888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:49.408327 containerd[1806]: time="2025-05-13T23:59:49.408286147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 2.537864474s" May 13 23:59:49.408327 containerd[1806]: time="2025-05-13T23:59:49.408302325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 23:59:49.409296 containerd[1806]: time="2025-05-13T23:59:49.409277669Z" level=info msg="CreateContainer within sandbox \"ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:59:49.412025 containerd[1806]: time="2025-05-13T23:59:49.411983617Z" level=info msg="Container 
7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:49.415808 containerd[1806]: time="2025-05-13T23:59:49.415760726Z" level=info msg="CreateContainer within sandbox \"ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e\"" May 13 23:59:49.416003 containerd[1806]: time="2025-05-13T23:59:49.415990793Z" level=info msg="StartContainer for \"7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e\"" May 13 23:59:49.416763 containerd[1806]: time="2025-05-13T23:59:49.416723277Z" level=info msg="connecting to shim 7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e" address="unix:///run/containerd/s/96a63e0ee360359213bc4134065da56a172addb47bc3f35081f98115d140c44f" protocol=ttrpc version=3 May 13 23:59:49.442114 systemd[1]: Started cri-containerd-7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e.scope - libcontainer container 7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e. May 13 23:59:49.462650 containerd[1806]: time="2025-05-13T23:59:49.462598731Z" level=info msg="StartContainer for \"7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e\" returns successfully" May 13 23:59:49.772683 kubelet[3318]: E0513 23:59:49.772604 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2pp6m" podUID="40b59446-8721-42bb-b5e6-453736118f74" May 13 23:59:49.988414 systemd[1]: cri-containerd-7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e.scope: Deactivated successfully. 
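The pod_startup_latency_tracker entry at 23:59:47.847 above reports two durations for calico-typha-646db7d88f-jt4xj: podStartE2EDuration (creation to observed running, 4.847013329s) and podStartSLOduration (the same span minus image-pull time, 1.717064435s). A small Python sketch of that relation, using the m=+ monotonic offsets from the log entry (the function is illustrative, not kubelet's implementation):

```python
def pod_start_slo(e2e: float, first_pull: float, last_pull: float) -> float:
    """SLO-relevant startup latency: end-to-end duration minus the
    time spent pulling images (lastFinishedPulling - firstStartedPulling)."""
    return e2e - (last_pull - first_pull)

# Values from the tracker entry for calico-typha-646db7d88f-jt4xj:
e2e = 4.847013329          # podStartE2EDuration
first_pull = 21.011159592  # firstStartedPulling, m=+ offset
last_pull = 24.141108486   # lastFinishedPulling, m=+ offset

slo = pod_start_slo(e2e, first_pull, last_pull)  # ~1.717064435s
```

The arithmetic checks out against the log: 4.847013329 - (24.141108486 - 21.011159592) = 1.717064435, matching the reported podStartSLOduration.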
May 13 23:59:49.988569 systemd[1]: cri-containerd-7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e.scope: Consumed 299ms CPU time, 174.5M memory peak, 154M written to disk. May 13 23:59:49.988965 containerd[1806]: time="2025-05-13T23:59:49.988946364Z" level=info msg="received exit event container_id:\"7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e\" id:\"7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e\" pid:4110 exited_at:{seconds:1747180789 nanos:988822618}" May 13 23:59:49.989017 containerd[1806]: time="2025-05-13T23:59:49.988987786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e\" id:\"7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e\" pid:4110 exited_at:{seconds:1747180789 nanos:988822618}" May 13 23:59:49.999040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e-rootfs.mount: Deactivated successfully. 
May 13 23:59:50.068906 kubelet[3318]: I0513 23:59:50.068732 3318 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 23:59:50.103431 kubelet[3318]: I0513 23:59:50.103198 3318 topology_manager.go:215] "Topology Admit Handler" podUID="b161d0e4-6b7f-4a21-940e-8067e2d429f0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jglx5" May 13 23:59:50.104683 kubelet[3318]: I0513 23:59:50.104517 3318 topology_manager.go:215] "Topology Admit Handler" podUID="eeb06b7d-a287-4a77-ba64-d7b5972b0c22" podNamespace="calico-system" podName="calico-kube-controllers-57dd79b879-2b2bb" May 13 23:59:50.105855 kubelet[3318]: I0513 23:59:50.105812 3318 topology_manager.go:215] "Topology Admit Handler" podUID="87368ed2-27ae-4001-a327-916530deb705" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qzcw7" May 13 23:59:50.107265 kubelet[3318]: I0513 23:59:50.107194 3318 topology_manager.go:215] "Topology Admit Handler" podUID="a73156dd-b6ff-4af1-a5b7-3bd9608545c3" podNamespace="calico-apiserver" podName="calico-apiserver-5789dbfcdc-4lt4z" May 13 23:59:50.108157 kubelet[3318]: I0513 23:59:50.108099 3318 topology_manager.go:215] "Topology Admit Handler" podUID="cd8d5765-d693-4561-9e05-07c50e8eabdf" podNamespace="calico-apiserver" podName="calico-apiserver-5789dbfcdc-p49jv" May 13 23:59:50.121864 systemd[1]: Created slice kubepods-burstable-podb161d0e4_6b7f_4a21_940e_8067e2d429f0.slice - libcontainer container kubepods-burstable-podb161d0e4_6b7f_4a21_940e_8067e2d429f0.slice. May 13 23:59:50.135311 systemd[1]: Created slice kubepods-besteffort-podeeb06b7d_a287_4a77_ba64_d7b5972b0c22.slice - libcontainer container kubepods-besteffort-podeeb06b7d_a287_4a77_ba64_d7b5972b0c22.slice. May 13 23:59:50.144120 systemd[1]: Created slice kubepods-burstable-pod87368ed2_27ae_4001_a327_916530deb705.slice - libcontainer container kubepods-burstable-pod87368ed2_27ae_4001_a327_916530deb705.slice. 
May 13 23:59:50.150582 systemd[1]: Created slice kubepods-besteffort-poda73156dd_b6ff_4af1_a5b7_3bd9608545c3.slice - libcontainer container kubepods-besteffort-poda73156dd_b6ff_4af1_a5b7_3bd9608545c3.slice. May 13 23:59:50.154268 systemd[1]: Created slice kubepods-besteffort-podcd8d5765_d693_4561_9e05_07c50e8eabdf.slice - libcontainer container kubepods-besteffort-podcd8d5765_d693_4561_9e05_07c50e8eabdf.slice. May 13 23:59:50.263104 kubelet[3318]: I0513 23:59:50.263032 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a73156dd-b6ff-4af1-a5b7-3bd9608545c3-calico-apiserver-certs\") pod \"calico-apiserver-5789dbfcdc-4lt4z\" (UID: \"a73156dd-b6ff-4af1-a5b7-3bd9608545c3\") " pod="calico-apiserver/calico-apiserver-5789dbfcdc-4lt4z" May 13 23:59:50.263317 kubelet[3318]: I0513 23:59:50.263110 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5gs\" (UniqueName: \"kubernetes.io/projected/cd8d5765-d693-4561-9e05-07c50e8eabdf-kube-api-access-9j5gs\") pod \"calico-apiserver-5789dbfcdc-p49jv\" (UID: \"cd8d5765-d693-4561-9e05-07c50e8eabdf\") " pod="calico-apiserver/calico-apiserver-5789dbfcdc-p49jv" May 13 23:59:50.263317 kubelet[3318]: I0513 23:59:50.263163 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87368ed2-27ae-4001-a327-916530deb705-config-volume\") pod \"coredns-7db6d8ff4d-qzcw7\" (UID: \"87368ed2-27ae-4001-a327-916530deb705\") " pod="kube-system/coredns-7db6d8ff4d-qzcw7" May 13 23:59:50.263317 kubelet[3318]: I0513 23:59:50.263195 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfwg\" (UniqueName: \"kubernetes.io/projected/b161d0e4-6b7f-4a21-940e-8067e2d429f0-kube-api-access-wnfwg\") pod 
\"coredns-7db6d8ff4d-jglx5\" (UID: \"b161d0e4-6b7f-4a21-940e-8067e2d429f0\") " pod="kube-system/coredns-7db6d8ff4d-jglx5" May 13 23:59:50.263317 kubelet[3318]: I0513 23:59:50.263230 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jq4p\" (UniqueName: \"kubernetes.io/projected/eeb06b7d-a287-4a77-ba64-d7b5972b0c22-kube-api-access-7jq4p\") pod \"calico-kube-controllers-57dd79b879-2b2bb\" (UID: \"eeb06b7d-a287-4a77-ba64-d7b5972b0c22\") " pod="calico-system/calico-kube-controllers-57dd79b879-2b2bb" May 13 23:59:50.263317 kubelet[3318]: I0513 23:59:50.263284 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klzvg\" (UniqueName: \"kubernetes.io/projected/a73156dd-b6ff-4af1-a5b7-3bd9608545c3-kube-api-access-klzvg\") pod \"calico-apiserver-5789dbfcdc-4lt4z\" (UID: \"a73156dd-b6ff-4af1-a5b7-3bd9608545c3\") " pod="calico-apiserver/calico-apiserver-5789dbfcdc-4lt4z" May 13 23:59:50.263518 kubelet[3318]: I0513 23:59:50.263318 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd8d5765-d693-4561-9e05-07c50e8eabdf-calico-apiserver-certs\") pod \"calico-apiserver-5789dbfcdc-p49jv\" (UID: \"cd8d5765-d693-4561-9e05-07c50e8eabdf\") " pod="calico-apiserver/calico-apiserver-5789dbfcdc-p49jv" May 13 23:59:50.263518 kubelet[3318]: I0513 23:59:50.263352 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeb06b7d-a287-4a77-ba64-d7b5972b0c22-tigera-ca-bundle\") pod \"calico-kube-controllers-57dd79b879-2b2bb\" (UID: \"eeb06b7d-a287-4a77-ba64-d7b5972b0c22\") " pod="calico-system/calico-kube-controllers-57dd79b879-2b2bb" May 13 23:59:50.263518 kubelet[3318]: I0513 23:59:50.263375 3318 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b161d0e4-6b7f-4a21-940e-8067e2d429f0-config-volume\") pod \"coredns-7db6d8ff4d-jglx5\" (UID: \"b161d0e4-6b7f-4a21-940e-8067e2d429f0\") " pod="kube-system/coredns-7db6d8ff4d-jglx5" May 13 23:59:50.263518 kubelet[3318]: I0513 23:59:50.263401 3318 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsdm\" (UniqueName: \"kubernetes.io/projected/87368ed2-27ae-4001-a327-916530deb705-kube-api-access-jnsdm\") pod \"coredns-7db6d8ff4d-qzcw7\" (UID: \"87368ed2-27ae-4001-a327-916530deb705\") " pod="kube-system/coredns-7db6d8ff4d-qzcw7" May 13 23:59:50.429845 containerd[1806]: time="2025-05-13T23:59:50.429743561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jglx5,Uid:b161d0e4-6b7f-4a21-940e-8067e2d429f0,Namespace:kube-system,Attempt:0,}" May 13 23:59:50.440131 containerd[1806]: time="2025-05-13T23:59:50.440018612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57dd79b879-2b2bb,Uid:eeb06b7d-a287-4a77-ba64-d7b5972b0c22,Namespace:calico-system,Attempt:0,}" May 13 23:59:50.448256 containerd[1806]: time="2025-05-13T23:59:50.448154980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzcw7,Uid:87368ed2-27ae-4001-a327-916530deb705,Namespace:kube-system,Attempt:0,}" May 13 23:59:50.474292 containerd[1806]: time="2025-05-13T23:59:50.474182744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-4lt4z,Uid:a73156dd-b6ff-4af1-a5b7-3bd9608545c3,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:50.474292 containerd[1806]: time="2025-05-13T23:59:50.474275342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-p49jv,Uid:cd8d5765-d693-4561-9e05-07c50e8eabdf,Namespace:calico-apiserver,Attempt:0,}" May 13 23:59:50.710070 
containerd[1806]: time="2025-05-13T23:59:50.709980437Z" level=error msg="Failed to destroy network for sandbox \"56a80e8000ab2cd1e384cf95d49cc2d48e222f186c5a1ab1d47ff364e4bb07ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710367 containerd[1806]: time="2025-05-13T23:59:50.710347642Z" level=error msg="Failed to destroy network for sandbox \"29c5c16b8cd9d52f1910b33e148536e8a45f69a2210a77dcd1e6fddfe070feb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710451 containerd[1806]: time="2025-05-13T23:59:50.710433417Z" level=error msg="Failed to destroy network for sandbox \"c7d2f9491ba110cc49981f387892350bc0028f74ab71a16e02ee3d4388da0ad7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710479 containerd[1806]: time="2025-05-13T23:59:50.710446106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57dd79b879-2b2bb,Uid:eeb06b7d-a287-4a77-ba64-d7b5972b0c22,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a80e8000ab2cd1e384cf95d49cc2d48e222f186c5a1ab1d47ff364e4bb07ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710527 containerd[1806]: time="2025-05-13T23:59:50.710507511Z" level=error msg="Failed to destroy network for sandbox \"860c81cd0bcac664462f454325bb214faf03bc28cd5e12655131598ce93853a7\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710636 kubelet[3318]: E0513 23:59:50.710600 3318 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a80e8000ab2cd1e384cf95d49cc2d48e222f186c5a1ab1d47ff364e4bb07ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710670 kubelet[3318]: E0513 23:59:50.710643 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a80e8000ab2cd1e384cf95d49cc2d48e222f186c5a1ab1d47ff364e4bb07ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57dd79b879-2b2bb" May 13 23:59:50.710670 kubelet[3318]: E0513 23:59:50.710656 3318 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a80e8000ab2cd1e384cf95d49cc2d48e222f186c5a1ab1d47ff364e4bb07ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57dd79b879-2b2bb" May 13 23:59:50.710744 kubelet[3318]: E0513 23:59:50.710682 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57dd79b879-2b2bb_calico-system(eeb06b7d-a287-4a77-ba64-d7b5972b0c22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-57dd79b879-2b2bb_calico-system(eeb06b7d-a287-4a77-ba64-d7b5972b0c22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56a80e8000ab2cd1e384cf95d49cc2d48e222f186c5a1ab1d47ff364e4bb07ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57dd79b879-2b2bb" podUID="eeb06b7d-a287-4a77-ba64-d7b5972b0c22" May 13 23:59:50.710807 containerd[1806]: time="2025-05-13T23:59:50.710765104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-4lt4z,Uid:a73156dd-b6ff-4af1-a5b7-3bd9608545c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29c5c16b8cd9d52f1910b33e148536e8a45f69a2210a77dcd1e6fddfe070feb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710884 kubelet[3318]: E0513 23:59:50.710869 3318 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29c5c16b8cd9d52f1910b33e148536e8a45f69a2210a77dcd1e6fddfe070feb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.710920 kubelet[3318]: E0513 23:59:50.710891 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29c5c16b8cd9d52f1910b33e148536e8a45f69a2210a77dcd1e6fddfe070feb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789dbfcdc-4lt4z" May 13 23:59:50.710920 kubelet[3318]: E0513 23:59:50.710902 3318 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29c5c16b8cd9d52f1910b33e148536e8a45f69a2210a77dcd1e6fddfe070feb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789dbfcdc-4lt4z" May 13 23:59:50.710981 kubelet[3318]: E0513 23:59:50.710924 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5789dbfcdc-4lt4z_calico-apiserver(a73156dd-b6ff-4af1-a5b7-3bd9608545c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5789dbfcdc-4lt4z_calico-apiserver(a73156dd-b6ff-4af1-a5b7-3bd9608545c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29c5c16b8cd9d52f1910b33e148536e8a45f69a2210a77dcd1e6fddfe070feb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5789dbfcdc-4lt4z" podUID="a73156dd-b6ff-4af1-a5b7-3bd9608545c3" May 13 23:59:50.711043 containerd[1806]: time="2025-05-13T23:59:50.711029721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzcw7,Uid:87368ed2-27ae-4001-a327-916530deb705,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d2f9491ba110cc49981f387892350bc0028f74ab71a16e02ee3d4388da0ad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 13 23:59:50.711134 kubelet[3318]: E0513 23:59:50.711121 3318 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d2f9491ba110cc49981f387892350bc0028f74ab71a16e02ee3d4388da0ad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.711159 kubelet[3318]: E0513 23:59:50.711140 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d2f9491ba110cc49981f387892350bc0028f74ab71a16e02ee3d4388da0ad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzcw7" May 13 23:59:50.711159 kubelet[3318]: E0513 23:59:50.711150 3318 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d2f9491ba110cc49981f387892350bc0028f74ab71a16e02ee3d4388da0ad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzcw7" May 13 23:59:50.711204 kubelet[3318]: E0513 23:59:50.711167 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qzcw7_kube-system(87368ed2-27ae-4001-a327-916530deb705)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qzcw7_kube-system(87368ed2-27ae-4001-a327-916530deb705)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7d2f9491ba110cc49981f387892350bc0028f74ab71a16e02ee3d4388da0ad7\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qzcw7" podUID="87368ed2-27ae-4001-a327-916530deb705" May 13 23:59:50.711316 containerd[1806]: time="2025-05-13T23:59:50.711299321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jglx5,Uid:b161d0e4-6b7f-4a21-940e-8067e2d429f0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"860c81cd0bcac664462f454325bb214faf03bc28cd5e12655131598ce93853a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.711415 kubelet[3318]: E0513 23:59:50.711380 3318 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860c81cd0bcac664462f454325bb214faf03bc28cd5e12655131598ce93853a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.711415 kubelet[3318]: E0513 23:59:50.711396 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860c81cd0bcac664462f454325bb214faf03bc28cd5e12655131598ce93853a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jglx5" May 13 23:59:50.711415 kubelet[3318]: E0513 23:59:50.711405 3318 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"860c81cd0bcac664462f454325bb214faf03bc28cd5e12655131598ce93853a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jglx5" May 13 23:59:50.711480 kubelet[3318]: E0513 23:59:50.711420 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jglx5_kube-system(b161d0e4-6b7f-4a21-940e-8067e2d429f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jglx5_kube-system(b161d0e4-6b7f-4a21-940e-8067e2d429f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"860c81cd0bcac664462f454325bb214faf03bc28cd5e12655131598ce93853a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jglx5" podUID="b161d0e4-6b7f-4a21-940e-8067e2d429f0" May 13 23:59:50.712691 containerd[1806]: time="2025-05-13T23:59:50.712654108Z" level=error msg="Failed to destroy network for sandbox \"19abf7586f192c46b5b8ce59ac65921bc132b963f489680d796527d8877a1071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.713035 containerd[1806]: time="2025-05-13T23:59:50.713001200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-p49jv,Uid:cd8d5765-d693-4561-9e05-07c50e8eabdf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19abf7586f192c46b5b8ce59ac65921bc132b963f489680d796527d8877a1071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.713200 kubelet[3318]: E0513 23:59:50.713123 3318 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19abf7586f192c46b5b8ce59ac65921bc132b963f489680d796527d8877a1071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:50.713200 kubelet[3318]: E0513 23:59:50.713143 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19abf7586f192c46b5b8ce59ac65921bc132b963f489680d796527d8877a1071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789dbfcdc-p49jv" May 13 23:59:50.713200 kubelet[3318]: E0513 23:59:50.713166 3318 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19abf7586f192c46b5b8ce59ac65921bc132b963f489680d796527d8877a1071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5789dbfcdc-p49jv" May 13 23:59:50.713298 kubelet[3318]: E0513 23:59:50.713182 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5789dbfcdc-p49jv_calico-apiserver(cd8d5765-d693-4561-9e05-07c50e8eabdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5789dbfcdc-p49jv_calico-apiserver(cd8d5765-d693-4561-9e05-07c50e8eabdf)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"19abf7586f192c46b5b8ce59ac65921bc132b963f489680d796527d8877a1071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5789dbfcdc-p49jv" podUID="cd8d5765-d693-4561-9e05-07c50e8eabdf" May 13 23:59:50.865438 containerd[1806]: time="2025-05-13T23:59:50.865362989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:59:51.399456 kubelet[3318]: I0513 23:59:51.399340 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:59:51.414385 systemd[1]: run-netns-cni\x2d8faf1b9a\x2dfc7a\x2d03a5\x2d1f59\x2d66303d4e39c3.mount: Deactivated successfully. May 13 23:59:51.414437 systemd[1]: run-netns-cni\x2ddd0d110e\x2d70dc\x2d34ef\x2dad8c\x2da7f1f102b331.mount: Deactivated successfully. May 13 23:59:51.414475 systemd[1]: run-netns-cni\x2dc87603ca\x2d9e51\x2d5020\x2dceb8\x2d2d3f5fcf59ec.mount: Deactivated successfully. May 13 23:59:51.414508 systemd[1]: run-netns-cni\x2dd672b6ef\x2d24ec\x2dc18e\x2d141f\x2df4d2c393f189.mount: Deactivated successfully. May 13 23:59:51.414540 systemd[1]: run-netns-cni\x2df3aedbed\x2d8e19\x2d424e\x2d820b\x2d8db7a0d889e3.mount: Deactivated successfully. May 13 23:59:51.787419 systemd[1]: Created slice kubepods-besteffort-pod40b59446_8721_42bb_b5e6_453736118f74.slice - libcontainer container kubepods-besteffort-pod40b59446_8721_42bb_b5e6_453736118f74.slice. 
May 13 23:59:51.792963 containerd[1806]: time="2025-05-13T23:59:51.792849452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2pp6m,Uid:40b59446-8721-42bb-b5e6-453736118f74,Namespace:calico-system,Attempt:0,}" May 13 23:59:51.819424 containerd[1806]: time="2025-05-13T23:59:51.819373194Z" level=error msg="Failed to destroy network for sandbox \"bdb544ddb07e40a935735acf022515b22bd47da7cbfbf78e77c548af36941fe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.819871 containerd[1806]: time="2025-05-13T23:59:51.819850633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2pp6m,Uid:40b59446-8721-42bb-b5e6-453736118f74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb544ddb07e40a935735acf022515b22bd47da7cbfbf78e77c548af36941fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.820097 kubelet[3318]: E0513 23:59:51.820035 3318 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb544ddb07e40a935735acf022515b22bd47da7cbfbf78e77c548af36941fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:59:51.820097 kubelet[3318]: E0513 23:59:51.820076 3318 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb544ddb07e40a935735acf022515b22bd47da7cbfbf78e77c548af36941fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2pp6m" May 13 23:59:51.820097 kubelet[3318]: E0513 23:59:51.820091 3318 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb544ddb07e40a935735acf022515b22bd47da7cbfbf78e77c548af36941fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2pp6m" May 13 23:59:51.820183 kubelet[3318]: E0513 23:59:51.820118 3318 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2pp6m_calico-system(40b59446-8721-42bb-b5e6-453736118f74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2pp6m_calico-system(40b59446-8721-42bb-b5e6-453736118f74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdb544ddb07e40a935735acf022515b22bd47da7cbfbf78e77c548af36941fe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2pp6m" podUID="40b59446-8721-42bb-b5e6-453736118f74" May 13 23:59:51.820726 systemd[1]: run-netns-cni\x2d1a014693\x2da3e5\x2dd6d9\x2de090\x2d5ab625921e76.mount: Deactivated successfully. May 13 23:59:53.983360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001073327.mount: Deactivated successfully. 
May 13 23:59:54.004212 containerd[1806]: time="2025-05-13T23:59:54.004189793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:54.004443 containerd[1806]: time="2025-05-13T23:59:54.004418917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 23:59:54.004723 containerd[1806]: time="2025-05-13T23:59:54.004708929Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:54.005477 containerd[1806]: time="2025-05-13T23:59:54.005464874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:54.005814 containerd[1806]: time="2025-05-13T23:59:54.005805377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 3.140378614s" May 13 23:59:54.005838 containerd[1806]: time="2025-05-13T23:59:54.005818220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 23:59:54.009483 containerd[1806]: time="2025-05-13T23:59:54.009466565Z" level=info msg="CreateContainer within sandbox \"ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:59:54.013919 containerd[1806]: time="2025-05-13T23:59:54.013904239Z" level=info msg="Container 
b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:54.018349 containerd[1806]: time="2025-05-13T23:59:54.018336722Z" level=info msg="CreateContainer within sandbox \"ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\"" May 13 23:59:54.018609 containerd[1806]: time="2025-05-13T23:59:54.018596224Z" level=info msg="StartContainer for \"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\"" May 13 23:59:54.019402 containerd[1806]: time="2025-05-13T23:59:54.019389994Z" level=info msg="connecting to shim b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d" address="unix:///run/containerd/s/96a63e0ee360359213bc4134065da56a172addb47bc3f35081f98115d140c44f" protocol=ttrpc version=3 May 13 23:59:54.034233 systemd[1]: Started cri-containerd-b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d.scope - libcontainer container b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d. May 13 23:59:54.056279 containerd[1806]: time="2025-05-13T23:59:54.056248996Z" level=info msg="StartContainer for \"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" returns successfully" May 13 23:59:54.130092 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:59:54.130141 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 23:59:54.893839 kubelet[3318]: I0513 23:59:54.893788 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zxsmg" podStartSLOduration=1.6187091150000001 podStartE2EDuration="11.893777976s" podCreationTimestamp="2025-05-13 23:59:43 +0000 UTC" firstStartedPulling="2025-05-13 23:59:43.731065933 +0000 UTC m=+21.001811044" lastFinishedPulling="2025-05-13 23:59:54.006134794 +0000 UTC m=+31.276879905" observedRunningTime="2025-05-13 23:59:54.893659145 +0000 UTC m=+32.164404268" watchObservedRunningTime="2025-05-13 23:59:54.893777976 +0000 UTC m=+32.164523084" May 13 23:59:55.366986 kernel: bpftool[4707]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:59:55.513750 systemd-networkd[1727]: vxlan.calico: Link UP May 13 23:59:55.513753 systemd-networkd[1727]: vxlan.calico: Gained carrier May 13 23:59:55.881668 kubelet[3318]: I0513 23:59:55.881603 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:59:56.632215 systemd-networkd[1727]: vxlan.calico: Gained IPv6LL May 14 00:00:00.133838 kubelet[3318]: I0514 00:00:00.133751 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:00:00.190215 containerd[1806]: time="2025-05-14T00:00:00.190188059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"9ff9d86c2aaf0baa71af50d1dba0da3f6f0bdcab17f084adc4632fe5da573a61\" pid:4824 exited_at:{seconds:1747180800 nanos:189968093}" May 14 00:00:00.230985 containerd[1806]: time="2025-05-14T00:00:00.230926061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"9c2f98dba28d2896b27a6989579e8ff1b55e95cb20c8a87b1f7013b57ef26f61\" pid:4853 exited_at:{seconds:1747180800 nanos:230767156}" May 14 00:00:01.774337 containerd[1806]: time="2025-05-14T00:00:01.774229595Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jglx5,Uid:b161d0e4-6b7f-4a21-940e-8067e2d429f0,Namespace:kube-system,Attempt:0,}" May 14 00:00:01.783277 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 14 00:00:01.801634 systemd[1]: logrotate.service: Deactivated successfully. May 14 00:00:01.834539 systemd-networkd[1727]: cali07a016ab1e8: Link UP May 14 00:00:01.834662 systemd-networkd[1727]: cali07a016ab1e8: Gained carrier May 14 00:00:01.840725 containerd[1806]: 2025-05-14 00:00:01.797 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0 coredns-7db6d8ff4d- kube-system b161d0e4-6b7f-4a21-940e-8067e2d429f0 654 0 2025-05-13 23:59:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-b3bb28caaa coredns-7db6d8ff4d-jglx5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali07a016ab1e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-" May 14 00:00:01.840725 containerd[1806]: 2025-05-14 00:00:01.797 [INFO][4879] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" May 14 00:00:01.840725 containerd[1806]: 2025-05-14 00:00:01.812 [INFO][4901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" 
HandleID="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.817 [INFO][4901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" HandleID="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000296300), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-b3bb28caaa", "pod":"coredns-7db6d8ff4d-jglx5", "timestamp":"2025-05-14 00:00:01.812298438 +0000 UTC"}, Hostname:"ci-4284.0.0-n-b3bb28caaa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.817 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.817 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.817 [INFO][4901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-b3bb28caaa' May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.819 [INFO][4901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.821 [INFO][4901] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.823 [INFO][4901] ipam/ipam.go 489: Trying affinity for 192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.825 [INFO][4901] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.840905 containerd[1806]: 2025-05-14 00:00:01.826 [INFO][4901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.841147 containerd[1806]: 2025-05-14 00:00:01.826 [INFO][4901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.841147 containerd[1806]: 2025-05-14 00:00:01.826 [INFO][4901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181 May 14 00:00:01.841147 containerd[1806]: 2025-05-14 00:00:01.829 [INFO][4901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.841147 containerd[1806]: 2025-05-14 00:00:01.831 [INFO][4901] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.129/26] block=192.168.97.128/26 handle="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.841147 containerd[1806]: 2025-05-14 00:00:01.831 [INFO][4901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.129/26] handle="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:01.841147 containerd[1806]: 2025-05-14 00:00:01.831 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:00:01.841147 containerd[1806]: 2025-05-14 00:00:01.831 [INFO][4901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.129/26] IPv6=[] ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" HandleID="k8s-pod-network.b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" May 14 00:00:01.841284 containerd[1806]: 2025-05-14 00:00:01.833 [INFO][4879] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b161d0e4-6b7f-4a21-940e-8067e2d429f0", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"", Pod:"coredns-7db6d8ff4d-jglx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07a016ab1e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:01.841284 containerd[1806]: 2025-05-14 00:00:01.833 [INFO][4879] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.129/32] ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" May 14 00:00:01.841284 containerd[1806]: 2025-05-14 00:00:01.833 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07a016ab1e8 ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" May 14 00:00:01.841284 containerd[1806]: 2025-05-14 00:00:01.834 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" May 14 00:00:01.841284 containerd[1806]: 2025-05-14 00:00:01.834 [INFO][4879] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b161d0e4-6b7f-4a21-940e-8067e2d429f0", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181", Pod:"coredns-7db6d8ff4d-jglx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07a016ab1e8", MAC:"9e:18:a8:9b:67:21", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:01.841284 containerd[1806]: 2025-05-14 00:00:01.839 [INFO][4879] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jglx5" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--jglx5-eth0" May 14 00:00:01.849364 containerd[1806]: time="2025-05-14T00:00:01.849339665Z" level=info msg="connecting to shim b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181" address="unix:///run/containerd/s/dc703ea0f6d30762826049f89358f9f1194e270f1ff8b5dfe86321262eb32b8f" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:01.875209 systemd[1]: Started cri-containerd-b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181.scope - libcontainer container b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181. 
May 14 00:00:01.912292 containerd[1806]: time="2025-05-14T00:00:01.912239921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jglx5,Uid:b161d0e4-6b7f-4a21-940e-8067e2d429f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181\"" May 14 00:00:01.913534 containerd[1806]: time="2025-05-14T00:00:01.913497955Z" level=info msg="CreateContainer within sandbox \"b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:00:01.916639 containerd[1806]: time="2025-05-14T00:00:01.916626431Z" level=info msg="Container dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:01.918759 containerd[1806]: time="2025-05-14T00:00:01.918711921Z" level=info msg="CreateContainer within sandbox \"b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3\"" May 14 00:00:01.919006 containerd[1806]: time="2025-05-14T00:00:01.918976913Z" level=info msg="StartContainer for \"dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3\"" May 14 00:00:01.919433 containerd[1806]: time="2025-05-14T00:00:01.919391557Z" level=info msg="connecting to shim dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3" address="unix:///run/containerd/s/dc703ea0f6d30762826049f89358f9f1194e270f1ff8b5dfe86321262eb32b8f" protocol=ttrpc version=3 May 14 00:00:01.937228 systemd[1]: Started cri-containerd-dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3.scope - libcontainer container dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3. 
May 14 00:00:01.950304 containerd[1806]: time="2025-05-14T00:00:01.950281524Z" level=info msg="StartContainer for \"dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3\" returns successfully" May 14 00:00:02.772751 containerd[1806]: time="2025-05-14T00:00:02.772726620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2pp6m,Uid:40b59446-8721-42bb-b5e6-453736118f74,Namespace:calico-system,Attempt:0,}" May 14 00:00:02.772838 containerd[1806]: time="2025-05-14T00:00:02.772726760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57dd79b879-2b2bb,Uid:eeb06b7d-a287-4a77-ba64-d7b5972b0c22,Namespace:calico-system,Attempt:0,}" May 14 00:00:02.834476 systemd-networkd[1727]: cali37ef81f2a99: Link UP May 14 00:00:02.834604 systemd-networkd[1727]: cali37ef81f2a99: Gained carrier May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.793 [INFO][5023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0 csi-node-driver- calico-system 40b59446-8721-42bb-b5e6-453736118f74 593 0 2025-05-13 23:59:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-n-b3bb28caaa csi-node-driver-2pp6m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali37ef81f2a99 [] []}} ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.793 [INFO][5023] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.809 [INFO][5070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" HandleID="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.815 [INFO][5070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" HandleID="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051480), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-b3bb28caaa", "pod":"csi-node-driver-2pp6m", "timestamp":"2025-05-14 00:00:02.809299675 +0000 UTC"}, Hostname:"ci-4284.0.0-n-b3bb28caaa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.815 [INFO][5070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.816 [INFO][5070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.816 [INFO][5070] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-b3bb28caaa' May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.817 [INFO][5070] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.820 [INFO][5070] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.823 [INFO][5070] ipam/ipam.go 489: Trying affinity for 192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.824 [INFO][5070] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.826 [INFO][5070] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.826 [INFO][5070] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.827 [INFO][5070] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02 May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.829 [INFO][5070] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.832 [INFO][5070] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.130/26] block=192.168.97.128/26 handle="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.832 [INFO][5070] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.130/26] handle="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.832 [INFO][5070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:00:02.839928 containerd[1806]: 2025-05-14 00:00:02.832 [INFO][5070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.130/26] IPv6=[] ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" HandleID="k8s-pod-network.5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" May 14 00:00:02.840480 containerd[1806]: 2025-05-14 00:00:02.833 [INFO][5023] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40b59446-8721-42bb-b5e6-453736118f74", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"", Pod:"csi-node-driver-2pp6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37ef81f2a99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:02.840480 containerd[1806]: 2025-05-14 00:00:02.833 [INFO][5023] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.130/32] ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" May 14 00:00:02.840480 containerd[1806]: 2025-05-14 00:00:02.833 [INFO][5023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37ef81f2a99 ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" May 14 00:00:02.840480 containerd[1806]: 2025-05-14 00:00:02.834 [INFO][5023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" May 14 00:00:02.840480 containerd[1806]: 2025-05-14 00:00:02.834 
[INFO][5023] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40b59446-8721-42bb-b5e6-453736118f74", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02", Pod:"csi-node-driver-2pp6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37ef81f2a99", MAC:"4e:74:7f:9a:98:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:02.840480 containerd[1806]: 2025-05-14 00:00:02.839 [INFO][5023] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" Namespace="calico-system" Pod="csi-node-driver-2pp6m" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-csi--node--driver--2pp6m-eth0" May 14 00:00:02.849511 containerd[1806]: time="2025-05-14T00:00:02.849480833Z" level=info msg="connecting to shim 5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02" address="unix:///run/containerd/s/3517730195673814e27ca23788bb31582e8c52ce4776be0ec989c08a931b1f8b" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:02.849645 systemd-networkd[1727]: calia87e56e60b8: Link UP May 14 00:00:02.849770 systemd-networkd[1727]: calia87e56e60b8: Gained carrier May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.793 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0 calico-kube-controllers-57dd79b879- calico-system eeb06b7d-a287-4a77-ba64-d7b5972b0c22 660 0 2025-05-13 23:59:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57dd79b879 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-b3bb28caaa calico-kube-controllers-57dd79b879-2b2bb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia87e56e60b8 [] []}} ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.793 [INFO][5028] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.809 [INFO][5068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" HandleID="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.816 [INFO][5068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" HandleID="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-b3bb28caaa", "pod":"calico-kube-controllers-57dd79b879-2b2bb", "timestamp":"2025-05-14 00:00:02.809381629 +0000 UTC"}, Hostname:"ci-4284.0.0-n-b3bb28caaa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.816 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.832 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.832 [INFO][5068] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-b3bb28caaa' May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.833 [INFO][5068] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.836 [INFO][5068] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.839 [INFO][5068] ipam/ipam.go 489: Trying affinity for 192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.840 [INFO][5068] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.841 [INFO][5068] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.841 [INFO][5068] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.842 [INFO][5068] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249 May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.844 [INFO][5068] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.847 [INFO][5068] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.131/26] block=192.168.97.128/26 handle="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.847 [INFO][5068] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.131/26] handle="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.847 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:00:02.855831 containerd[1806]: 2025-05-14 00:00:02.847 [INFO][5068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.131/26] IPv6=[] ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" HandleID="k8s-pod-network.21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" May 14 00:00:02.856297 containerd[1806]: 2025-05-14 00:00:02.848 [INFO][5028] cni-plugin/k8s.go 386: Populated endpoint ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0", GenerateName:"calico-kube-controllers-57dd79b879-", Namespace:"calico-system", SelfLink:"", UID:"eeb06b7d-a287-4a77-ba64-d7b5972b0c22", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57dd79b879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"", Pod:"calico-kube-controllers-57dd79b879-2b2bb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia87e56e60b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:02.856297 containerd[1806]: 2025-05-14 00:00:02.848 [INFO][5028] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.131/32] ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" May 14 00:00:02.856297 containerd[1806]: 2025-05-14 00:00:02.848 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia87e56e60b8 ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" May 14 00:00:02.856297 containerd[1806]: 2025-05-14 00:00:02.849 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" May 14 00:00:02.856297 containerd[1806]: 2025-05-14 00:00:02.849 [INFO][5028] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0", GenerateName:"calico-kube-controllers-57dd79b879-", Namespace:"calico-system", SelfLink:"", UID:"eeb06b7d-a287-4a77-ba64-d7b5972b0c22", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57dd79b879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249", Pod:"calico-kube-controllers-57dd79b879-2b2bb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia87e56e60b8", MAC:"fa:42:2e:ef:9f:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:02.856297 containerd[1806]: 2025-05-14 00:00:02.855 [INFO][5028] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" Namespace="calico-system" Pod="calico-kube-controllers-57dd79b879-2b2bb" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--kube--controllers--57dd79b879--2b2bb-eth0" May 14 00:00:02.864491 containerd[1806]: time="2025-05-14T00:00:02.864437115Z" level=info msg="connecting to shim 21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249" address="unix:///run/containerd/s/4810604a308b6b6b631362c99c6c19b69855fce3c87382f094df422e528ba32c" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:02.866082 systemd[1]: Started cri-containerd-5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02.scope - libcontainer container 5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02. May 14 00:00:02.873004 systemd[1]: Started cri-containerd-21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249.scope - libcontainer container 21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249. 
May 14 00:00:02.877837 containerd[1806]: time="2025-05-14T00:00:02.877764753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2pp6m,Uid:40b59446-8721-42bb-b5e6-453736118f74,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02\"" May 14 00:00:02.878513 containerd[1806]: time="2025-05-14T00:00:02.878471156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 00:00:02.899098 containerd[1806]: time="2025-05-14T00:00:02.899075223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57dd79b879-2b2bb,Uid:eeb06b7d-a287-4a77-ba64-d7b5972b0c22,Namespace:calico-system,Attempt:0,} returns sandbox id \"21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249\"" May 14 00:00:02.921413 kubelet[3318]: I0514 00:00:02.921374 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jglx5" podStartSLOduration=24.921361596 podStartE2EDuration="24.921361596s" podCreationTimestamp="2025-05-13 23:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:02.921085011 +0000 UTC m=+40.191830130" watchObservedRunningTime="2025-05-14 00:00:02.921361596 +0000 UTC m=+40.192106704" May 14 00:00:03.773823 containerd[1806]: time="2025-05-14T00:00:03.773737252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-4lt4z,Uid:a73156dd-b6ff-4af1-a5b7-3bd9608545c3,Namespace:calico-apiserver,Attempt:0,}" May 14 00:00:03.828510 systemd-networkd[1727]: cali0eaa20cb3f0: Link UP May 14 00:00:03.828634 systemd-networkd[1727]: cali0eaa20cb3f0: Gained carrier May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.791 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0 calico-apiserver-5789dbfcdc- calico-apiserver a73156dd-b6ff-4af1-a5b7-3bd9608545c3 658 0 2025-05-13 23:59:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5789dbfcdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-b3bb28caaa calico-apiserver-5789dbfcdc-4lt4z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0eaa20cb3f0 [] []}} ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.791 [INFO][5224] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.806 [INFO][5249] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" HandleID="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.811 [INFO][5249] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" HandleID="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" 
Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-b3bb28caaa", "pod":"calico-apiserver-5789dbfcdc-4lt4z", "timestamp":"2025-05-14 00:00:03.806795235 +0000 UTC"}, Hostname:"ci-4284.0.0-n-b3bb28caaa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.811 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.811 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.811 [INFO][5249] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-b3bb28caaa' May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.812 [INFO][5249] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.814 [INFO][5249] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.818 [INFO][5249] ipam/ipam.go 489: Trying affinity for 192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.819 [INFO][5249] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.820 [INFO][5249] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" 
May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.821 [INFO][5249] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.822 [INFO][5249] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45 May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.824 [INFO][5249] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.826 [INFO][5249] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.97.132/26] block=192.168.97.128/26 handle="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.826 [INFO][5249] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.132/26] handle="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.826 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:00:03.833504 containerd[1806]: 2025-05-14 00:00:03.826 [INFO][5249] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.132/26] IPv6=[] ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" HandleID="k8s-pod-network.f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" May 14 00:00:03.833895 containerd[1806]: 2025-05-14 00:00:03.827 [INFO][5224] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0", GenerateName:"calico-apiserver-5789dbfcdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73156dd-b6ff-4af1-a5b7-3bd9608545c3", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789dbfcdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"", Pod:"calico-apiserver-5789dbfcdc-4lt4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0eaa20cb3f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:03.833895 containerd[1806]: 2025-05-14 00:00:03.827 [INFO][5224] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.132/32] ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" May 14 00:00:03.833895 containerd[1806]: 2025-05-14 00:00:03.827 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0eaa20cb3f0 ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" May 14 00:00:03.833895 containerd[1806]: 2025-05-14 00:00:03.828 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" May 14 00:00:03.833895 containerd[1806]: 2025-05-14 00:00:03.828 [INFO][5224] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0", GenerateName:"calico-apiserver-5789dbfcdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a73156dd-b6ff-4af1-a5b7-3bd9608545c3", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789dbfcdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45", Pod:"calico-apiserver-5789dbfcdc-4lt4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0eaa20cb3f0", MAC:"4e:ff:02:06:8f:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:03.833895 containerd[1806]: 2025-05-14 00:00:03.832 [INFO][5224] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-4lt4z" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--4lt4z-eth0" May 14 00:00:03.843160 containerd[1806]: time="2025-05-14T00:00:03.843133115Z" level=info msg="connecting to shim 
f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45" address="unix:///run/containerd/s/29e52c90b688f5ba6ba0483854f8bee3ed1ac7eb8818e0a546a1ee5f1c96aaf8" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:03.859113 systemd[1]: Started cri-containerd-f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45.scope - libcontainer container f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45. May 14 00:00:03.865016 systemd-networkd[1727]: cali07a016ab1e8: Gained IPv6LL May 14 00:00:03.891959 containerd[1806]: time="2025-05-14T00:00:03.891928569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-4lt4z,Uid:a73156dd-b6ff-4af1-a5b7-3bd9608545c3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45\"" May 14 00:00:03.928025 systemd-networkd[1727]: cali37ef81f2a99: Gained IPv6LL May 14 00:00:04.184138 systemd-networkd[1727]: calia87e56e60b8: Gained IPv6LL May 14 00:00:04.952229 systemd-networkd[1727]: cali0eaa20cb3f0: Gained IPv6LL May 14 00:00:05.773314 containerd[1806]: time="2025-05-14T00:00:05.773223488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-p49jv,Uid:cd8d5765-d693-4561-9e05-07c50e8eabdf,Namespace:calico-apiserver,Attempt:0,}" May 14 00:00:05.773314 containerd[1806]: time="2025-05-14T00:00:05.773283995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzcw7,Uid:87368ed2-27ae-4001-a327-916530deb705,Namespace:kube-system,Attempt:0,}" May 14 00:00:05.833577 systemd-networkd[1727]: cali61560d150eb: Link UP May 14 00:00:05.833709 systemd-networkd[1727]: cali61560d150eb: Gained carrier May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.792 [INFO][5333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0 
calico-apiserver-5789dbfcdc- calico-apiserver cd8d5765-d693-4561-9e05-07c50e8eabdf 659 0 2025-05-13 23:59:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5789dbfcdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-b3bb28caaa calico-apiserver-5789dbfcdc-p49jv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali61560d150eb [] []}} ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.792 [INFO][5333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.809 [INFO][5376] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" HandleID="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.815 [INFO][5376] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" HandleID="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00029a1f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-b3bb28caaa", "pod":"calico-apiserver-5789dbfcdc-p49jv", "timestamp":"2025-05-14 00:00:05.809696867 +0000 UTC"}, Hostname:"ci-4284.0.0-n-b3bb28caaa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.815 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.815 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.815 [INFO][5376] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-b3bb28caaa' May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.816 [INFO][5376] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.819 [INFO][5376] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.822 [INFO][5376] ipam/ipam.go 489: Trying affinity for 192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.823 [INFO][5376] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.824 [INFO][5376] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.825 [INFO][5376] ipam/ipam.go 1180: Attempting to assign 1 addresses 
from block block=192.168.97.128/26 handle="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.826 [INFO][5376] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.828 [INFO][5376] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.831 [INFO][5376] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.97.133/26] block=192.168.97.128/26 handle="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.831 [INFO][5376] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.133/26] handle="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.831 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:00:05.838073 containerd[1806]: 2025-05-14 00:00:05.831 [INFO][5376] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.133/26] IPv6=[] ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" HandleID="k8s-pod-network.c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" May 14 00:00:05.838483 containerd[1806]: 2025-05-14 00:00:05.832 [INFO][5333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0", GenerateName:"calico-apiserver-5789dbfcdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd8d5765-d693-4561-9e05-07c50e8eabdf", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789dbfcdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"", Pod:"calico-apiserver-5789dbfcdc-p49jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61560d150eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:05.838483 containerd[1806]: 2025-05-14 00:00:05.832 [INFO][5333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.133/32] ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" May 14 00:00:05.838483 containerd[1806]: 2025-05-14 00:00:05.832 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61560d150eb ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" May 14 00:00:05.838483 containerd[1806]: 2025-05-14 00:00:05.833 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" May 14 00:00:05.838483 containerd[1806]: 2025-05-14 00:00:05.833 [INFO][5333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0", GenerateName:"calico-apiserver-5789dbfcdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd8d5765-d693-4561-9e05-07c50e8eabdf", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5789dbfcdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e", Pod:"calico-apiserver-5789dbfcdc-p49jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61560d150eb", MAC:"22:9e:98:eb:23:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:00:05.838483 containerd[1806]: 2025-05-14 00:00:05.837 [INFO][5333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" Namespace="calico-apiserver" Pod="calico-apiserver-5789dbfcdc-p49jv" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-calico--apiserver--5789dbfcdc--p49jv-eth0" May 14 00:00:05.847568 containerd[1806]: time="2025-05-14T00:00:05.847541800Z" level=info msg="connecting to shim 
c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e" address="unix:///run/containerd/s/486ab5a09d31386364e4d07b0f22cdc0bc806e1dde6a0e92afe809ba22aec1a7" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:05.848758 systemd-networkd[1727]: cali10dbbb4462c: Link UP May 14 00:00:05.848897 systemd-networkd[1727]: cali10dbbb4462c: Gained carrier May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.792 [INFO][5326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0 coredns-7db6d8ff4d- kube-system 87368ed2-27ae-4001-a327-916530deb705 657 0 2025-05-13 23:59:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-b3bb28caaa coredns-7db6d8ff4d-qzcw7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10dbbb4462c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.792 [INFO][5326] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.809 [INFO][5378] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" HandleID="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" 
Workload="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.815 [INFO][5378] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" HandleID="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000228ce0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-b3bb28caaa", "pod":"coredns-7db6d8ff4d-qzcw7", "timestamp":"2025-05-14 00:00:05.809697474 +0000 UTC"}, Hostname:"ci-4284.0.0-n-b3bb28caaa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.815 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.831 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.831 [INFO][5378] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-b3bb28caaa' May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.832 [INFO][5378] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.835 [INFO][5378] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.837 [INFO][5378] ipam/ipam.go 489: Trying affinity for 192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.838 [INFO][5378] ipam/ipam.go 155: Attempting to load block cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.839 [INFO][5378] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.128/26 host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.840 [INFO][5378] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.128/26 handle="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.841 [INFO][5378] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3 May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.843 [INFO][5378] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.97.128/26 handle="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" host="ci-4284.0.0-n-b3bb28caaa" May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.847 [INFO][5378] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.97.134/26] block=192.168.97.128/26 handle="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" host="ci-4284.0.0-n-b3bb28caaa"
May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.847 [INFO][5378] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.134/26] handle="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" host="ci-4284.0.0-n-b3bb28caaa"
May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.847 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 00:00:05.854003 containerd[1806]: 2025-05-14 00:00:05.847 [INFO][5378] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.97.134/26] IPv6=[] ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" HandleID="k8s-pod-network.1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Workload="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0"
May 14 00:00:05.854414 containerd[1806]: 2025-05-14 00:00:05.847 [INFO][5326] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"87368ed2-27ae-4001-a327-916530deb705", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"", Pod:"coredns-7db6d8ff4d-qzcw7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10dbbb4462c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 00:00:05.854414 containerd[1806]: 2025-05-14 00:00:05.848 [INFO][5326] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.97.134/32] ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0"
May 14 00:00:05.854414 containerd[1806]: 2025-05-14 00:00:05.848 [INFO][5326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10dbbb4462c ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0"
May 14 00:00:05.854414 containerd[1806]: 2025-05-14 00:00:05.848 [INFO][5326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0"
May 14 00:00:05.854414 containerd[1806]: 2025-05-14 00:00:05.848 [INFO][5326] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"87368ed2-27ae-4001-a327-916530deb705", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-b3bb28caaa", ContainerID:"1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3", Pod:"coredns-7db6d8ff4d-qzcw7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10dbbb4462c", MAC:"2e:ca:2a:c2:fe:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 00:00:05.854414 containerd[1806]: 2025-05-14 00:00:05.853 [INFO][5326] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzcw7" WorkloadEndpoint="ci--4284.0.0--n--b3bb28caaa-k8s-coredns--7db6d8ff4d--qzcw7-eth0"
May 14 00:00:05.862063 systemd[1]: Started cri-containerd-c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e.scope - libcontainer container c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e.
May 14 00:00:05.862732 containerd[1806]: time="2025-05-14T00:00:05.862709269Z" level=info msg="connecting to shim 1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3" address="unix:///run/containerd/s/1306e41e90019a4c992ed0e177e09add351549fed473924b732108218c4eb610" namespace=k8s.io protocol=ttrpc version=3
May 14 00:00:05.870609 systemd[1]: Started cri-containerd-1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3.scope - libcontainer container 1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3.
May 14 00:00:05.888862 containerd[1806]: time="2025-05-14T00:00:05.888841505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5789dbfcdc-p49jv,Uid:cd8d5765-d693-4561-9e05-07c50e8eabdf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e\""
May 14 00:00:05.895695 containerd[1806]: time="2025-05-14T00:00:05.895666566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzcw7,Uid:87368ed2-27ae-4001-a327-916530deb705,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3\""
May 14 00:00:05.896871 containerd[1806]: time="2025-05-14T00:00:05.896856961Z" level=info msg="CreateContainer within sandbox \"1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 00:00:05.900053 containerd[1806]: time="2025-05-14T00:00:05.900005338Z" level=info msg="Container 19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:05.902110 containerd[1806]: time="2025-05-14T00:00:05.902069627Z" level=info msg="CreateContainer within sandbox \"1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047\""
May 14 00:00:05.902329 containerd[1806]: time="2025-05-14T00:00:05.902289288Z" level=info msg="StartContainer for \"19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047\""
May 14 00:00:05.902718 containerd[1806]: time="2025-05-14T00:00:05.902675206Z" level=info msg="connecting to shim 19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047" address="unix:///run/containerd/s/1306e41e90019a4c992ed0e177e09add351549fed473924b732108218c4eb610" protocol=ttrpc version=3
May 14 00:00:05.919134 systemd[1]: Started cri-containerd-19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047.scope - libcontainer container 19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047.
May 14 00:00:05.931941 containerd[1806]: time="2025-05-14T00:00:05.931878486Z" level=info msg="StartContainer for \"19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047\" returns successfully"
May 14 00:00:06.935550 kubelet[3318]: I0514 00:00:06.935419 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qzcw7" podStartSLOduration=28.935380551 podStartE2EDuration="28.935380551s" podCreationTimestamp="2025-05-13 23:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:06.934322807 +0000 UTC m=+44.205068037" watchObservedRunningTime="2025-05-14 00:00:06.935380551 +0000 UTC m=+44.206125716"
May 14 00:00:07.128419 systemd-networkd[1727]: cali61560d150eb: Gained IPv6LL
May 14 00:00:07.448258 systemd-networkd[1727]: cali10dbbb4462c: Gained IPv6LL
May 14 00:00:12.024796 containerd[1806]: time="2025-05-14T00:00:12.024769888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:12.025108 containerd[1806]: time="2025-05-14T00:00:12.024942035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 14 00:00:12.025263 containerd[1806]: time="2025-05-14T00:00:12.025246100Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:12.026151 containerd[1806]: time="2025-05-14T00:00:12.026137821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:12.026556 containerd[1806]: time="2025-05-14T00:00:12.026538026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 9.148049023s"
May 14 00:00:12.026589 containerd[1806]: time="2025-05-14T00:00:12.026560487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 14 00:00:12.027511 containerd[1806]: time="2025-05-14T00:00:12.027499886Z" level=info msg="CreateContainer within sandbox \"5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 14 00:00:12.027627 containerd[1806]: time="2025-05-14T00:00:12.027616958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 14 00:00:12.031198 containerd[1806]: time="2025-05-14T00:00:12.031158154Z" level=info msg="Container f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:12.034276 containerd[1806]: time="2025-05-14T00:00:12.034235523Z" level=info msg="CreateContainer within sandbox \"5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740\""
May 14 00:00:12.034499 containerd[1806]: time="2025-05-14T00:00:12.034457627Z" level=info msg="StartContainer for \"f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740\""
May 14 00:00:12.035230 containerd[1806]: time="2025-05-14T00:00:12.035191254Z" level=info msg="connecting to shim f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740" address="unix:///run/containerd/s/3517730195673814e27ca23788bb31582e8c52ce4776be0ec989c08a931b1f8b" protocol=ttrpc version=3
May 14 00:00:12.054069 systemd[1]: Started cri-containerd-f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740.scope - libcontainer container f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740.
May 14 00:00:12.073358 containerd[1806]: time="2025-05-14T00:00:12.073336375Z" level=info msg="StartContainer for \"f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740\" returns successfully"
May 14 00:00:19.758437 containerd[1806]: time="2025-05-14T00:00:19.758355083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:19.758695 containerd[1806]: time="2025-05-14T00:00:19.758590687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 14 00:00:19.758974 containerd[1806]: time="2025-05-14T00:00:19.758948423Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:19.759769 containerd[1806]: time="2025-05-14T00:00:19.759723474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:19.760191 containerd[1806]: time="2025-05-14T00:00:19.760148340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 7.73251586s"
May 14 00:00:19.760191 containerd[1806]: time="2025-05-14T00:00:19.760166919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 14 00:00:19.760627 containerd[1806]: time="2025-05-14T00:00:19.760586343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 14 00:00:19.763467 containerd[1806]: time="2025-05-14T00:00:19.763449308Z" level=info msg="CreateContainer within sandbox \"21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 14 00:00:19.766264 containerd[1806]: time="2025-05-14T00:00:19.766248757Z" level=info msg="Container a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:19.769046 containerd[1806]: time="2025-05-14T00:00:19.768992640Z" level=info msg="CreateContainer within sandbox \"21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\""
May 14 00:00:19.769242 containerd[1806]: time="2025-05-14T00:00:19.769194495Z" level=info msg="StartContainer for \"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\""
May 14 00:00:19.769752 containerd[1806]: time="2025-05-14T00:00:19.769705340Z" level=info msg="connecting to shim a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076" address="unix:///run/containerd/s/4810604a308b6b6b631362c99c6c19b69855fce3c87382f094df422e528ba32c" protocol=ttrpc version=3
May 14 00:00:19.783224 systemd[1]: Started cri-containerd-a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076.scope - libcontainer container a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076.
May 14 00:00:19.810710 containerd[1806]: time="2025-05-14T00:00:19.810690701Z" level=info msg="StartContainer for \"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" returns successfully"
May 14 00:00:19.974661 kubelet[3318]: I0514 00:00:19.974602 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57dd79b879-2b2bb" podStartSLOduration=20.113593316 podStartE2EDuration="36.974582823s" podCreationTimestamp="2025-05-13 23:59:43 +0000 UTC" firstStartedPulling="2025-05-14 00:00:02.899537926 +0000 UTC m=+40.170283041" lastFinishedPulling="2025-05-14 00:00:19.760527437 +0000 UTC m=+57.031272548" observedRunningTime="2025-05-14 00:00:19.974296758 +0000 UTC m=+57.245041899" watchObservedRunningTime="2025-05-14 00:00:19.974582823 +0000 UTC m=+57.245327940"
May 14 00:00:20.009381 containerd[1806]: time="2025-05-14T00:00:20.009305499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"00ecb39dbd9a49331a6cc8bb731c2ec041e639739aab2f97a1d4861e4650dcb7\" pid:5692 exited_at:{seconds:1747180820 nanos:9125641}"
May 14 00:00:27.540286 containerd[1806]: time="2025-05-14T00:00:27.540260407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:27.540599 containerd[1806]: time="2025-05-14T00:00:27.540485281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437"
May 14 00:00:27.540751 containerd[1806]: time="2025-05-14T00:00:27.540738285Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:27.541679 containerd[1806]: time="2025-05-14T00:00:27.541665293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:27.542060 containerd[1806]: time="2025-05-14T00:00:27.542046493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 7.781445216s"
May 14 00:00:27.542104 containerd[1806]: time="2025-05-14T00:00:27.542062596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 14 00:00:27.542567 containerd[1806]: time="2025-05-14T00:00:27.542556012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 14 00:00:27.543105 containerd[1806]: time="2025-05-14T00:00:27.543093646Z" level=info msg="CreateContainer within sandbox \"f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 14 00:00:27.545613 containerd[1806]: time="2025-05-14T00:00:27.545575746Z" level=info msg="Container dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:27.548463 containerd[1806]: time="2025-05-14T00:00:27.548417517Z" level=info msg="CreateContainer within sandbox \"f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390\""
May 14 00:00:27.548632 containerd[1806]: time="2025-05-14T00:00:27.548621167Z" level=info msg="StartContainer for \"dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390\""
May 14 00:00:27.549148 containerd[1806]: time="2025-05-14T00:00:27.549135390Z" level=info msg="connecting to shim dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390" address="unix:///run/containerd/s/29e52c90b688f5ba6ba0483854f8bee3ed1ac7eb8818e0a546a1ee5f1c96aaf8" protocol=ttrpc version=3
May 14 00:00:27.574197 systemd[1]: Started cri-containerd-dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390.scope - libcontainer container dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390.
May 14 00:00:27.619119 containerd[1806]: time="2025-05-14T00:00:27.619053911Z" level=info msg="StartContainer for \"dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390\" returns successfully"
May 14 00:00:27.997833 kubelet[3318]: I0514 00:00:27.997798 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5789dbfcdc-4lt4z" podStartSLOduration=21.347838361 podStartE2EDuration="44.997785815s" podCreationTimestamp="2025-05-13 23:59:43 +0000 UTC" firstStartedPulling="2025-05-14 00:00:03.892555414 +0000 UTC m=+41.163300524" lastFinishedPulling="2025-05-14 00:00:27.542502868 +0000 UTC m=+64.813247978" observedRunningTime="2025-05-14 00:00:27.997454207 +0000 UTC m=+65.268199327" watchObservedRunningTime="2025-05-14 00:00:27.997785815 +0000 UTC m=+65.268530923"
May 14 00:00:28.993825 kubelet[3318]: I0514 00:00:28.993731 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:00:29.891658 containerd[1806]: time="2025-05-14T00:00:29.891632982Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:29.891920 containerd[1806]: time="2025-05-14T00:00:29.891805889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 14 00:00:29.893256 containerd[1806]: time="2025-05-14T00:00:29.893226803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.350642872s"
May 14 00:00:29.893331 containerd[1806]: time="2025-05-14T00:00:29.893259952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 14 00:00:29.893908 containerd[1806]: time="2025-05-14T00:00:29.893897136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 14 00:00:29.894674 containerd[1806]: time="2025-05-14T00:00:29.894617619Z" level=info msg="CreateContainer within sandbox \"c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 14 00:00:29.897727 containerd[1806]: time="2025-05-14T00:00:29.897690739Z" level=info msg="Container d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:29.900863 containerd[1806]: time="2025-05-14T00:00:29.900824683Z" level=info msg="CreateContainer within sandbox \"c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27\""
May 14 00:00:29.901129 containerd[1806]: time="2025-05-14T00:00:29.901102658Z" level=info msg="StartContainer for \"d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27\""
May 14 00:00:29.901672 containerd[1806]: time="2025-05-14T00:00:29.901659167Z" level=info msg="connecting to shim d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27" address="unix:///run/containerd/s/486ab5a09d31386364e4d07b0f22cdc0bc806e1dde6a0e92afe809ba22aec1a7" protocol=ttrpc version=3
May 14 00:00:29.919098 systemd[1]: Started cri-containerd-d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27.scope - libcontainer container d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27.
May 14 00:00:29.952853 containerd[1806]: time="2025-05-14T00:00:29.952825322Z" level=info msg="StartContainer for \"d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27\" returns successfully"
May 14 00:00:30.171396 containerd[1806]: time="2025-05-14T00:00:30.171378284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"9d9d3d5a72540f6322a643397a7e71ad6750cf6d9a177b2761b289e4470cf101\" pid:5825 exited_at:{seconds:1747180830 nanos:171082967}"
May 14 00:00:30.998731 kubelet[3318]: I0514 00:00:30.998615 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:00:32.620146 containerd[1806]: time="2025-05-14T00:00:32.620092783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:32.620385 containerd[1806]: time="2025-05-14T00:00:32.620244436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 14 00:00:32.620691 containerd[1806]: time="2025-05-14T00:00:32.620645932Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:32.621498 containerd[1806]: time="2025-05-14T00:00:32.621486599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:32.621996 containerd[1806]: time="2025-05-14T00:00:32.621865211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.727951418s"
May 14 00:00:32.622043 containerd[1806]: time="2025-05-14T00:00:32.622003155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 14 00:00:32.624222 containerd[1806]: time="2025-05-14T00:00:32.624200432Z" level=info msg="CreateContainer within sandbox \"5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 14 00:00:32.627045 containerd[1806]: time="2025-05-14T00:00:32.627032308Z" level=info msg="Container 1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:32.630723 containerd[1806]: time="2025-05-14T00:00:32.630675798Z" level=info msg="CreateContainer within sandbox \"5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38\""
May 14 00:00:32.630918 containerd[1806]: time="2025-05-14T00:00:32.630905757Z" level=info msg="StartContainer for \"1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38\""
May 14 00:00:32.631692 containerd[1806]: time="2025-05-14T00:00:32.631649124Z" level=info msg="connecting to shim 1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38" address="unix:///run/containerd/s/3517730195673814e27ca23788bb31582e8c52ce4776be0ec989c08a931b1f8b" protocol=ttrpc version=3
May 14 00:00:32.653205 systemd[1]: Started cri-containerd-1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38.scope - libcontainer container 1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38.
May 14 00:00:32.672138 containerd[1806]: time="2025-05-14T00:00:32.672082398Z" level=info msg="StartContainer for \"1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38\" returns successfully"
May 14 00:00:32.827471 kubelet[3318]: I0514 00:00:32.827403 3318 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 14 00:00:32.828547 kubelet[3318]: I0514 00:00:32.827503 3318 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 14 00:00:33.035592 kubelet[3318]: I0514 00:00:33.035472 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2pp6m" podStartSLOduration=20.291159644 podStartE2EDuration="50.035429896s" podCreationTimestamp="2025-05-13 23:59:43 +0000 UTC" firstStartedPulling="2025-05-14 00:00:02.878344768 +0000 UTC m=+40.149089879" lastFinishedPulling="2025-05-14 00:00:32.62261502 +0000 UTC m=+69.893360131" observedRunningTime="2025-05-14 00:00:33.035369605 +0000 UTC m=+70.306114863" watchObservedRunningTime="2025-05-14 00:00:33.035429896 +0000 UTC m=+70.306175061"
May 14 00:00:33.036553 kubelet[3318]: I0514 00:00:33.036467 3318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5789dbfcdc-p49jv" podStartSLOduration=26.032045275 podStartE2EDuration="50.036448543s" podCreationTimestamp="2025-05-13 23:59:43 +0000 UTC" firstStartedPulling="2025-05-14 00:00:05.889439767 +0000 UTC m=+43.160184877" lastFinishedPulling="2025-05-14 00:00:29.893843035 +0000 UTC m=+67.164588145" observedRunningTime="2025-05-14 00:00:30.001799159 +0000 UTC m=+67.272544283" watchObservedRunningTime="2025-05-14 00:00:33.036448543 +0000 UTC m=+70.307193707"
May 14 00:00:36.577511 kubelet[3318]: I0514 00:00:36.577403 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:00:38.655003 kubelet[3318]: I0514 00:00:38.654877 3318 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:00:43.477670 containerd[1806]: time="2025-05-14T00:00:43.477637648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"a42b53f23468b873581a024c5a88b6575547eaf0e2c0f3dfc09fe411b75abda7\" pid:5911 exited_at:{seconds:1747180843 nanos:477401202}"
May 14 00:00:55.287418 containerd[1806]: time="2025-05-14T00:00:55.287379097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"4bc91b0326b351b2c892a030a2d277c4c5c2e699e18b65643e55cb8e51da7d2d\" pid:5934 exited_at:{seconds:1747180855 nanos:287148300}"
May 14 00:01:00.193521 containerd[1806]: time="2025-05-14T00:01:00.193464825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"dcc3be1ba0876d269742447fb4d5319aa38d23660e97a5732cf48c0242d4527c\" pid:5957 exited_at:{seconds:1747180860 nanos:193225521}"
May 14 00:01:13.477111 containerd[1806]: time="2025-05-14T00:01:13.477088148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"321289636eba2a3e3df96e38a0e6c1a5ddf7dd97b097cf257bfd41f2ac38ace2\" pid:5989 exited_at:{seconds:1747180873 nanos:476944873}"
May 14 00:01:30.184135 containerd[1806]: time="2025-05-14T00:01:30.184105393Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"41f11b8a894732cead5a87e718f02c1ee0e2228006d45f42c8dfe73d7d1194f9\" pid:6026 exited_at:{seconds:1747180890 nanos:183906656}"
May 14 00:01:43.482188 containerd[1806]: time="2025-05-14T00:01:43.482159909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"aa19a8786d40de914010c00a6e0140efc3570c80debf5946545dd5dd2467408e\" pid:6072 exited_at:{seconds:1747180903 nanos:482007927}"
May 14 00:01:55.287276 containerd[1806]: time="2025-05-14T00:01:55.287200636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"88c4ccb2b5c63512d8c37567872ef0714e592181f069f5e94fb4dd47c0758c22\" pid:6094 exited_at:{seconds:1747180915 nanos:287017150}"
May 14 00:02:00.201315 containerd[1806]: time="2025-05-14T00:02:00.201209814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"e1f7d94f8c58e3fe98c48f9eb759eaab8e37e5d5eb8191b156c94055b3f2d395\" pid:6115 exited_at:{seconds:1747180920 nanos:200901809}"
May 14 00:02:13.485019 containerd[1806]: time="2025-05-14T00:02:13.484992175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"03ee716c6e10aa056b6f8efde45b91fd5ed5cf59d91e65f590c3d18fd167fa23\" pid:6147 exited_at:{seconds:1747180933 nanos:484875192}"
May 14 00:02:30.195394 containerd[1806]: time="2025-05-14T00:02:30.195362881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"f8d5e82a786bf4baac333361ef8a8783c5bf26b42b41e4559a5add3f83854432\" pid:6171 exited_at:{seconds:1747180950 nanos:195162412}"
May 14 00:02:43.476890 containerd[1806]: time="2025-05-14T00:02:43.476835169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"38a3f3f1ec8ffc6a376a0104f53ebbb0d47f6355750030fd25d115e117d74caa\" pid:6209 exited_at:{seconds:1747180963 nanos:476703837}"
May 14 00:02:55.274309 containerd[1806]: time="2025-05-14T00:02:55.274287297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"9c3337eca363f7e03742593e8ab888dbb77d9124f33d9ae099e9a43f654042c5\" pid:6234 exited_at:{seconds:1747180975 nanos:274182137}"
May 14 00:03:00.187149 containerd[1806]: time="2025-05-14T00:03:00.187074723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"353401d2c538536b7aa6000e31e2a9edb55a4d16e7abd7edcf7fbc70a2db27ba\" pid:6255 exited_at:{seconds:1747180980 nanos:186849633}"
May 14 00:03:13.534134 containerd[1806]: time="2025-05-14T00:03:13.534101636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"dd354073dc6848e4bebdffd38a57119929988f3835def0e720a0b6b630bfd5ea\" pid:6306 exited_at:{seconds:1747180993 nanos:533941060}"
May 14 00:03:30.191766 containerd[1806]: time="2025-05-14T00:03:30.191700348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"8d3c37f8688d85ddb654ff740f1d29cd22cefcc7e86eae37663b09d7f225207d\" pid:6330 exited_at:{seconds:1747181010 nanos:191460361}"
May 14 00:03:43.517754 containerd[1806]: time="2025-05-14T00:03:43.517678168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"80f9c13ddc0a831114582145f23d9c3a6e16dfb50ae6ee9d36bc689116b43ac5\" pid:6360 exited_at:{seconds:1747181023 nanos:517482094}"
May 14 00:03:55.282830 containerd[1806]: time="2025-05-14T00:03:55.282801576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"973609c6e1ad0a7a458afdb5074b2f7ebc9d4c6ca4087040c2ffa52dff9d9a8b\" pid:6383 exited_at:{seconds:1747181035 nanos:282638031}"
May 14 00:04:00.198176 containerd[1806]: time="2025-05-14T00:04:00.198110234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"b75df5c1184001f8587d5c986ef4f5d9ecc7d49770eb5941859c4d682358e745\" pid:6406 exited_at:{seconds:1747181040 nanos:197876624}"
May 14 00:04:13.487050 containerd[1806]: time="2025-05-14T00:04:13.486974715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"feeeafaae68af432fd06b545644c2d4c188a39ff45f8822c7cbde68f6a81e6ca\" pid:6441 exited_at:{seconds:1747181053 nanos:486818063}"
May 14 00:04:19.287228 containerd[1806]: time="2025-05-14T00:04:19.287020658Z" level=warning msg="container event discarded" container=0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326 type=CONTAINER_CREATED_EVENT
May 14 00:04:19.287228 containerd[1806]: time="2025-05-14T00:04:19.287173019Z" level=warning msg="container event discarded" container=0be437cb8772b23344aa3595c9beae2270c195a9b66aee4cead6b2f184577326 type=CONTAINER_STARTED_EVENT
May 14 00:04:19.298735 containerd[1806]: time="2025-05-14T00:04:19.298573054Z" level=warning msg="container event discarded" container=2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c type=CONTAINER_CREATED_EVENT
May 14 00:04:19.298735 containerd[1806]: time="2025-05-14T00:04:19.298681827Z" level=warning msg="container event discarded" container=2bbba07ab6b2cb89b399202ec8f423ada0d9baa03f7cffecab802920b9b2fd8c type=CONTAINER_STARTED_EVENT
May 14 00:04:19.298735 containerd[1806]: time="2025-05-14T00:04:19.298711917Z" level=warning msg="container event discarded" container=fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b type=CONTAINER_CREATED_EVENT
May 14 00:04:19.298735 containerd[1806]: time="2025-05-14T00:04:19.298737481Z" level=warning msg="container event discarded" container=fc960b018ebd891e7a4d16a6f49ee173f0a5dd7e6c62d8ce70794cfdbbecfe1b type=CONTAINER_STARTED_EVENT
May 14 00:04:19.299346 containerd[1806]: time="2025-05-14T00:04:19.298758476Z" level=warning msg="container event discarded" container=558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341 type=CONTAINER_CREATED_EVENT
May 14 00:04:19.299346 containerd[1806]: time="2025-05-14T00:04:19.298781299Z" level=warning msg="container event discarded" container=f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010 type=CONTAINER_CREATED_EVENT
May 14 00:04:19.310259 containerd[1806]: time="2025-05-14T00:04:19.310122125Z" level=warning msg="container event discarded" container=d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19 type=CONTAINER_CREATED_EVENT
May 14 00:04:19.355652 containerd[1806]: time="2025-05-14T00:04:19.355542544Z" level=warning msg="container event discarded" container=558bec2cf459d3e0cfba5a861586345f526a96c430e384e3d02248f5a6b75341 type=CONTAINER_STARTED_EVENT
May 14 00:04:19.355652 containerd[1806]: time="2025-05-14T00:04:19.355632384Z" level=warning msg="container event discarded" container=f8d99732ae45541dc00775b3a459ada9c837824ac8c74c20add05e28d06b4010 type=CONTAINER_STARTED_EVENT
May 14 00:04:19.356081 containerd[1806]:
time="2025-05-14T00:04:19.355669927Z" level=warning msg="container event discarded" container=d17502abca66408cd68451d0bc315d6e4f8f6e4a500e54dcfbfce11715917f19 type=CONTAINER_STARTED_EVENT May 14 00:04:30.232146 containerd[1806]: time="2025-05-14T00:04:30.232109182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"08d3426cdedbffe2d373a6960d41c279a178849cfda9c5c6142c26c7d01a2b5d\" pid:6465 exited_at:{seconds:1747181070 nanos:231846279}" May 14 00:04:38.364248 containerd[1806]: time="2025-05-14T00:04:38.364059439Z" level=warning msg="container event discarded" container=84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87 type=CONTAINER_CREATED_EVENT May 14 00:04:38.364248 containerd[1806]: time="2025-05-14T00:04:38.364190830Z" level=warning msg="container event discarded" container=84643cd6b59b865a05ee01ba5a5ed41aa7e8ed34e44afd84aa9fff1a3ca53e87 type=CONTAINER_STARTED_EVENT May 14 00:04:38.364248 containerd[1806]: time="2025-05-14T00:04:38.364219983Z" level=warning msg="container event discarded" container=080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc type=CONTAINER_CREATED_EVENT May 14 00:04:38.457704 containerd[1806]: time="2025-05-14T00:04:38.457562802Z" level=warning msg="container event discarded" container=080f2c4ba27e514062ec1a660e833cd9c44ae547a3f1626a4655b24aed8436cc type=CONTAINER_STARTED_EVENT May 14 00:04:38.541265 containerd[1806]: time="2025-05-14T00:04:38.541082339Z" level=warning msg="container event discarded" container=c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4 type=CONTAINER_CREATED_EVENT May 14 00:04:38.541265 containerd[1806]: time="2025-05-14T00:04:38.541197088Z" level=warning msg="container event discarded" container=c15cea105ae4f2861a4491c7118e8b954a8f74b0bfc3f4b0f2fd29524f16d6a4 type=CONTAINER_STARTED_EVENT May 14 00:04:40.531536 containerd[1806]: time="2025-05-14T00:04:40.531378731Z" level=warning 
msg="container event discarded" container=ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660 type=CONTAINER_CREATED_EVENT May 14 00:04:40.576966 containerd[1806]: time="2025-05-14T00:04:40.576816464Z" level=warning msg="container event discarded" container=ab20acf7fac6ad59c3cc3f23a58c3ba691a15da585508f0103b1551bfffa2660 type=CONTAINER_STARTED_EVENT May 14 00:04:43.479982 containerd[1806]: time="2025-05-14T00:04:43.479959421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"43fc906cc639dba2b481ed22de8a0af02263287d75b801b555f074799e5a5a51\" pid:6502 exited_at:{seconds:1747181083 nanos:479843001}" May 14 00:04:43.741203 containerd[1806]: time="2025-05-14T00:04:43.740881460Z" level=warning msg="container event discarded" container=ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8 type=CONTAINER_CREATED_EVENT May 14 00:04:43.741203 containerd[1806]: time="2025-05-14T00:04:43.741047257Z" level=warning msg="container event discarded" container=ed90bc2507085a6c98ff24a6cbd94cebf8bb2bc18c59c4dfc533e7855fa565f8 type=CONTAINER_STARTED_EVENT May 14 00:04:43.741203 containerd[1806]: time="2025-05-14T00:04:43.741077839Z" level=warning msg="container event discarded" container=5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10 type=CONTAINER_CREATED_EVENT May 14 00:04:43.741203 containerd[1806]: time="2025-05-14T00:04:43.741109079Z" level=warning msg="container event discarded" container=5bf148f7b2ecfe774346457cf66ed763a4247a31029dd0688530d51d4e172c10 type=CONTAINER_STARTED_EVENT May 14 00:04:45.183964 containerd[1806]: time="2025-05-14T00:04:45.183882224Z" level=warning msg="container event discarded" container=2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256 type=CONTAINER_CREATED_EVENT May 14 00:04:45.226269 containerd[1806]: time="2025-05-14T00:04:45.226148867Z" level=warning msg="container event discarded" 
container=2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256 type=CONTAINER_STARTED_EVENT May 14 00:04:45.452994 containerd[1806]: time="2025-05-14T00:04:45.452685447Z" level=warning msg="container event discarded" container=2cd34599b5705c2a871302f618b54262a2afd4e7b8e6288cf199fd550050d256 type=CONTAINER_STOPPED_EVENT May 14 00:04:46.890022 containerd[1806]: time="2025-05-14T00:04:46.889871175Z" level=warning msg="container event discarded" container=37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522 type=CONTAINER_CREATED_EVENT May 14 00:04:46.937411 containerd[1806]: time="2025-05-14T00:04:46.937290067Z" level=warning msg="container event discarded" container=37b9c47333a38635c0a94878b5723ac85e0b83f300e68abff4ca88010cb38522 type=CONTAINER_STARTED_EVENT May 14 00:04:49.425899 containerd[1806]: time="2025-05-14T00:04:49.425745275Z" level=warning msg="container event discarded" container=7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e type=CONTAINER_CREATED_EVENT May 14 00:04:49.472357 containerd[1806]: time="2025-05-14T00:04:49.472204477Z" level=warning msg="container event discarded" container=7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e type=CONTAINER_STARTED_EVENT May 14 00:04:50.694348 containerd[1806]: time="2025-05-14T00:04:50.694225957Z" level=warning msg="container event discarded" container=7daafa709fd9192128cd6e03370c7ebf3ca3b10498d3463ed93ba0de4b24176e type=CONTAINER_STOPPED_EVENT May 14 00:04:54.029030 containerd[1806]: time="2025-05-14T00:04:54.028889331Z" level=warning msg="container event discarded" container=b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d type=CONTAINER_CREATED_EVENT May 14 00:04:54.066631 containerd[1806]: time="2025-05-14T00:04:54.066486722Z" level=warning msg="container event discarded" container=b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d type=CONTAINER_STARTED_EVENT May 14 00:04:55.295004 containerd[1806]: 
time="2025-05-14T00:04:55.294980367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"a492f55e5e3d3f623fd146a74a725e74ea4236df322cc9781978c7b408ffbf2e\" pid:6538 exited_at:{seconds:1747181095 nanos:294871052}" May 14 00:05:00.185438 containerd[1806]: time="2025-05-14T00:05:00.185413628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"e297d1c986d5fb0e8e57a95c872455652e4e4687caa1b8d0e65557f3673de472\" pid:6560 exited_at:{seconds:1747181100 nanos:185221453}" May 14 00:05:01.922555 containerd[1806]: time="2025-05-14T00:05:01.922360668Z" level=warning msg="container event discarded" container=b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181 type=CONTAINER_CREATED_EVENT May 14 00:05:01.922555 containerd[1806]: time="2025-05-14T00:05:01.922509364Z" level=warning msg="container event discarded" container=b784ee0d034bdc6dbfae6ca43d41a0993a80bb2275c37f3c6e76723b4a6e7181 type=CONTAINER_STARTED_EVENT May 14 00:05:01.922555 containerd[1806]: time="2025-05-14T00:05:01.922542411Z" level=warning msg="container event discarded" container=dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3 type=CONTAINER_CREATED_EVENT May 14 00:05:01.961183 containerd[1806]: time="2025-05-14T00:05:01.961009133Z" level=warning msg="container event discarded" container=dc2f74c40fc94616568622967a8836e73235fe2c26f98327a86931edf3377fc3 type=CONTAINER_STARTED_EVENT May 14 00:05:02.888513 containerd[1806]: time="2025-05-14T00:05:02.888476286Z" level=warning msg="container event discarded" container=5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02 type=CONTAINER_CREATED_EVENT May 14 00:05:02.888513 containerd[1806]: time="2025-05-14T00:05:02.888502986Z" level=warning msg="container event discarded" container=5e778c73a46b75fbffd8ca9497d8b11cc85d654e196e45017e257a65424bfc02 
type=CONTAINER_STARTED_EVENT May 14 00:05:02.909919 containerd[1806]: time="2025-05-14T00:05:02.909848612Z" level=warning msg="container event discarded" container=21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249 type=CONTAINER_CREATED_EVENT May 14 00:05:02.909919 containerd[1806]: time="2025-05-14T00:05:02.909909059Z" level=warning msg="container event discarded" container=21bc5aa8792629ffdead3aa3301adc2f8edce72e1c084446e59e4e5081613249 type=CONTAINER_STARTED_EVENT May 14 00:05:03.902332 containerd[1806]: time="2025-05-14T00:05:03.902207336Z" level=warning msg="container event discarded" container=f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45 type=CONTAINER_CREATED_EVENT May 14 00:05:03.902332 containerd[1806]: time="2025-05-14T00:05:03.902310449Z" level=warning msg="container event discarded" container=f223167ec3e2f878e769f76933f42f7133a9a146dfffb1711580aceb6afbca45 type=CONTAINER_STARTED_EVENT May 14 00:05:05.899642 containerd[1806]: time="2025-05-14T00:05:05.899474185Z" level=warning msg="container event discarded" container=c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e type=CONTAINER_CREATED_EVENT May 14 00:05:05.899642 containerd[1806]: time="2025-05-14T00:05:05.899571466Z" level=warning msg="container event discarded" container=c0a1175f3d4bc7f3a84fe4bf61de079d44f28760812d854c8405828fd4936f2e type=CONTAINER_STARTED_EVENT May 14 00:05:05.899642 containerd[1806]: time="2025-05-14T00:05:05.899605042Z" level=warning msg="container event discarded" container=1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3 type=CONTAINER_CREATED_EVENT May 14 00:05:05.899642 containerd[1806]: time="2025-05-14T00:05:05.899635621Z" level=warning msg="container event discarded" container=1fc90ff70684bfc82f9a92bb764bbabcd71d863221ec351c104dc749d72104b3 type=CONTAINER_STARTED_EVENT May 14 00:05:05.912273 containerd[1806]: time="2025-05-14T00:05:05.912134438Z" level=warning msg="container event discarded" 
container=19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047 type=CONTAINER_CREATED_EVENT May 14 00:05:05.941764 containerd[1806]: time="2025-05-14T00:05:05.941578466Z" level=warning msg="container event discarded" container=19a7ade44df259e64a505f318de1a5768e849df082bba86ea9ebb28fefa42047 type=CONTAINER_STARTED_EVENT May 14 00:05:12.044353 containerd[1806]: time="2025-05-14T00:05:12.044287342Z" level=warning msg="container event discarded" container=f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740 type=CONTAINER_CREATED_EVENT May 14 00:05:12.083757 containerd[1806]: time="2025-05-14T00:05:12.083626751Z" level=warning msg="container event discarded" container=f71b9ff74702f8e829d216070c371eaf661fe5707606eef9d27b6ad2a9244740 type=CONTAINER_STARTED_EVENT May 14 00:05:13.497886 containerd[1806]: time="2025-05-14T00:05:13.497865434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"bee31a1cf036759e27e210f52ae0b3426eafe2aa090c88210170c6b83525861b\" pid:6591 exited_at:{seconds:1747181113 nanos:497656814}" May 14 00:05:19.779564 containerd[1806]: time="2025-05-14T00:05:19.779466748Z" level=warning msg="container event discarded" container=a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076 type=CONTAINER_CREATED_EVENT May 14 00:05:19.821241 containerd[1806]: time="2025-05-14T00:05:19.821089407Z" level=warning msg="container event discarded" container=a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076 type=CONTAINER_STARTED_EVENT May 14 00:05:27.558577 containerd[1806]: time="2025-05-14T00:05:27.558404863Z" level=warning msg="container event discarded" container=dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390 type=CONTAINER_CREATED_EVENT May 14 00:05:27.629125 containerd[1806]: time="2025-05-14T00:05:27.628961678Z" level=warning msg="container event discarded" 
container=dd185d8811c9d2dba2ce9770e7dfb6dce32c5b00d00aa0a3768fa63fef17b390 type=CONTAINER_STARTED_EVENT May 14 00:05:29.910814 containerd[1806]: time="2025-05-14T00:05:29.910668168Z" level=warning msg="container event discarded" container=d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27 type=CONTAINER_CREATED_EVENT May 14 00:05:29.963390 containerd[1806]: time="2025-05-14T00:05:29.963246088Z" level=warning msg="container event discarded" container=d23e9c4c324eae09cf6a5c5ad2e0dffd6d23505dc5889f5c17807404b37cde27 type=CONTAINER_STARTED_EVENT May 14 00:05:30.220844 containerd[1806]: time="2025-05-14T00:05:30.220817175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"8007c6ace500c47f0690814ba871ca14e989471e389f8f524bf0fb362b5283b0\" pid:6623 exited_at:{seconds:1747181130 nanos:220564297}" May 14 00:05:32.641091 containerd[1806]: time="2025-05-14T00:05:32.640910854Z" level=warning msg="container event discarded" container=1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38 type=CONTAINER_CREATED_EVENT May 14 00:05:32.682511 containerd[1806]: time="2025-05-14T00:05:32.682381289Z" level=warning msg="container event discarded" container=1e0f313b367b62c3ebcac2fd16e02aac769f12beeeb4767f61c9cd4fa5444f38 type=CONTAINER_STARTED_EVENT May 14 00:05:42.649042 systemd[1]: Started sshd@9-145.40.90.165:22-139.178.68.195:57960.service - OpenSSH per-connection server daemon (139.178.68.195:57960). May 14 00:05:42.746699 sshd[6650]: Accepted publickey for core from 139.178.68.195 port 57960 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:05:42.747486 sshd-session[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:42.750491 systemd-logind[1796]: New session 12 of user core. May 14 00:05:42.764064 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 14 00:05:42.853700 sshd[6652]: Connection closed by 139.178.68.195 port 57960 May 14 00:05:42.853865 sshd-session[6650]: pam_unix(sshd:session): session closed for user core May 14 00:05:42.855481 systemd[1]: sshd@9-145.40.90.165:22-139.178.68.195:57960.service: Deactivated successfully. May 14 00:05:42.856479 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:05:42.857221 systemd-logind[1796]: Session 12 logged out. Waiting for processes to exit. May 14 00:05:42.857789 systemd-logind[1796]: Removed session 12. May 14 00:05:43.517162 containerd[1806]: time="2025-05-14T00:05:43.517131112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"f260f19551723293827a2d55132ab0ee002776aac2544164413e8136022e98d0\" pid:6698 exited_at:{seconds:1747181143 nanos:516937443}" May 14 00:05:47.877216 systemd[1]: Started sshd@10-145.40.90.165:22-139.178.68.195:52032.service - OpenSSH per-connection server daemon (139.178.68.195:52032). May 14 00:05:47.941864 sshd[6709]: Accepted publickey for core from 139.178.68.195 port 52032 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:05:47.942591 sshd-session[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:47.945412 systemd-logind[1796]: New session 13 of user core. May 14 00:05:47.959116 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:05:48.048166 sshd[6711]: Connection closed by 139.178.68.195 port 52032 May 14 00:05:48.048333 sshd-session[6709]: pam_unix(sshd:session): session closed for user core May 14 00:05:48.049866 systemd[1]: sshd@10-145.40.90.165:22-139.178.68.195:52032.service: Deactivated successfully. May 14 00:05:48.050819 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:05:48.051469 systemd-logind[1796]: Session 13 logged out. Waiting for processes to exit. 
May 14 00:05:48.051921 systemd-logind[1796]: Removed session 13. May 14 00:05:53.080321 systemd[1]: Started sshd@11-145.40.90.165:22-139.178.68.195:52034.service - OpenSSH per-connection server daemon (139.178.68.195:52034). May 14 00:05:53.172525 sshd[6737]: Accepted publickey for core from 139.178.68.195 port 52034 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:05:53.173965 sshd-session[6737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:53.179423 systemd-logind[1796]: New session 14 of user core. May 14 00:05:53.199312 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:05:53.296447 sshd[6739]: Connection closed by 139.178.68.195 port 52034 May 14 00:05:53.296621 sshd-session[6737]: pam_unix(sshd:session): session closed for user core May 14 00:05:53.324122 systemd[1]: sshd@11-145.40.90.165:22-139.178.68.195:52034.service: Deactivated successfully. May 14 00:05:53.328336 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:05:53.331816 systemd-logind[1796]: Session 14 logged out. Waiting for processes to exit. May 14 00:05:53.335124 systemd[1]: Started sshd@12-145.40.90.165:22-139.178.68.195:52050.service - OpenSSH per-connection server daemon (139.178.68.195:52050). May 14 00:05:53.337786 systemd-logind[1796]: Removed session 14. May 14 00:05:53.427074 sshd[6763]: Accepted publickey for core from 139.178.68.195 port 52050 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:05:53.427681 sshd-session[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:53.430539 systemd-logind[1796]: New session 15 of user core. May 14 00:05:53.442229 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 14 00:05:53.568699 sshd[6767]: Connection closed by 139.178.68.195 port 52050 May 14 00:05:53.568885 sshd-session[6763]: pam_unix(sshd:session): session closed for user core May 14 00:05:53.584486 systemd[1]: sshd@12-145.40.90.165:22-139.178.68.195:52050.service: Deactivated successfully. May 14 00:05:53.585484 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:05:53.586309 systemd-logind[1796]: Session 15 logged out. Waiting for processes to exit. May 14 00:05:53.587026 systemd[1]: Started sshd@13-145.40.90.165:22-139.178.68.195:49966.service - OpenSSH per-connection server daemon (139.178.68.195:49966). May 14 00:05:53.587669 systemd-logind[1796]: Removed session 15. May 14 00:05:53.630781 sshd[6789]: Accepted publickey for core from 139.178.68.195 port 49966 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:05:53.631513 sshd-session[6789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:53.634550 systemd-logind[1796]: New session 16 of user core. May 14 00:05:53.657140 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 00:05:53.744997 sshd[6792]: Connection closed by 139.178.68.195 port 49966 May 14 00:05:53.745165 sshd-session[6789]: pam_unix(sshd:session): session closed for user core May 14 00:05:53.746819 systemd[1]: sshd@13-145.40.90.165:22-139.178.68.195:49966.service: Deactivated successfully. May 14 00:05:53.747842 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:05:53.748649 systemd-logind[1796]: Session 16 logged out. Waiting for processes to exit. May 14 00:05:53.749424 systemd-logind[1796]: Removed session 16. 
May 14 00:05:55.331887 containerd[1806]: time="2025-05-14T00:05:55.331858022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"bb8ef33a065afb42c074669fbea1aaace8e28c730a90c972b3465fac0dff2a94\" pid:6832 exited_at:{seconds:1747181155 nanos:331684071}" May 14 00:05:58.771400 systemd[1]: Started sshd@14-145.40.90.165:22-139.178.68.195:49978.service - OpenSSH per-connection server daemon (139.178.68.195:49978). May 14 00:05:58.825098 sshd[6843]: Accepted publickey for core from 139.178.68.195 port 49978 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:05:58.825675 sshd-session[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:05:58.828501 systemd-logind[1796]: New session 17 of user core. May 14 00:05:58.843068 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:05:58.933009 sshd[6845]: Connection closed by 139.178.68.195 port 49978 May 14 00:05:58.933197 sshd-session[6843]: pam_unix(sshd:session): session closed for user core May 14 00:05:58.934974 systemd[1]: sshd@14-145.40.90.165:22-139.178.68.195:49978.service: Deactivated successfully. May 14 00:05:58.936029 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:05:58.936830 systemd-logind[1796]: Session 17 logged out. Waiting for processes to exit. May 14 00:05:58.937623 systemd-logind[1796]: Removed session 17. May 14 00:06:00.246001 containerd[1806]: time="2025-05-14T00:06:00.245905116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"a450f0d633ddca4ab30ededfba096954618dfcbe2e7a0a95c5ed60822be48a13\" pid:6882 exited_at:{seconds:1747181160 nanos:245618748}" May 14 00:06:03.958322 systemd[1]: Started sshd@15-145.40.90.165:22-139.178.68.195:44988.service - OpenSSH per-connection server daemon (139.178.68.195:44988). 
May 14 00:06:03.993784 sshd[6900]: Accepted publickey for core from 139.178.68.195 port 44988 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:06:03.994420 sshd-session[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:06:03.997233 systemd-logind[1796]: New session 18 of user core. May 14 00:06:04.019192 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 00:06:04.108574 sshd[6902]: Connection closed by 139.178.68.195 port 44988 May 14 00:06:04.108746 sshd-session[6900]: pam_unix(sshd:session): session closed for user core May 14 00:06:04.110395 systemd[1]: sshd@15-145.40.90.165:22-139.178.68.195:44988.service: Deactivated successfully. May 14 00:06:04.111341 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:06:04.111998 systemd-logind[1796]: Session 18 logged out. Waiting for processes to exit. May 14 00:06:04.112676 systemd-logind[1796]: Removed session 18. May 14 00:06:09.135478 systemd[1]: Started sshd@16-145.40.90.165:22-139.178.68.195:45004.service - OpenSSH per-connection server daemon (139.178.68.195:45004). May 14 00:06:09.200656 sshd[6934]: Accepted publickey for core from 139.178.68.195 port 45004 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:06:09.201468 sshd-session[6934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:06:09.204772 systemd-logind[1796]: New session 19 of user core. May 14 00:06:09.229224 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 00:06:09.318175 sshd[6936]: Connection closed by 139.178.68.195 port 45004 May 14 00:06:09.318332 sshd-session[6934]: pam_unix(sshd:session): session closed for user core May 14 00:06:09.320252 systemd[1]: sshd@16-145.40.90.165:22-139.178.68.195:45004.service: Deactivated successfully. May 14 00:06:09.321164 systemd[1]: session-19.scope: Deactivated successfully. 
May 14 00:06:09.321625 systemd-logind[1796]: Session 19 logged out. Waiting for processes to exit. May 14 00:06:09.322197 systemd-logind[1796]: Removed session 19. May 14 00:06:13.526266 containerd[1806]: time="2025-05-14T00:06:13.526233757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9d6a487e31f151f35d5805ae18fb5f04a00641328e373ee7618ba190a3b8076\" id:\"d301767270ccc40bd9bd293e785b013b2ddf27a32d341daf22c93b8c9aa0f30a\" pid:6973 exited_at:{seconds:1747181173 nanos:526041562}" May 14 00:06:14.336559 systemd[1]: Started sshd@17-145.40.90.165:22-139.178.68.195:48528.service - OpenSSH per-connection server daemon (139.178.68.195:48528). May 14 00:06:14.390978 sshd[6984]: Accepted publickey for core from 139.178.68.195 port 48528 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:06:14.391787 sshd-session[6984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:06:14.395150 systemd-logind[1796]: New session 20 of user core. May 14 00:06:14.411207 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 00:06:14.500955 sshd[6986]: Connection closed by 139.178.68.195 port 48528 May 14 00:06:14.501192 sshd-session[6984]: pam_unix(sshd:session): session closed for user core May 14 00:06:14.519299 systemd[1]: sshd@17-145.40.90.165:22-139.178.68.195:48528.service: Deactivated successfully. May 14 00:06:14.520240 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:06:14.520953 systemd-logind[1796]: Session 20 logged out. Waiting for processes to exit. May 14 00:06:14.521691 systemd[1]: Started sshd@18-145.40.90.165:22-139.178.68.195:48538.service - OpenSSH per-connection server daemon (139.178.68.195:48538). May 14 00:06:14.522249 systemd-logind[1796]: Removed session 20. 
May 14 00:06:14.567229 sshd[7010]: Accepted publickey for core from 139.178.68.195 port 48538 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:06:14.568071 sshd-session[7010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:06:14.571619 systemd-logind[1796]: New session 21 of user core. May 14 00:06:14.590101 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 00:06:14.854872 sshd[7014]: Connection closed by 139.178.68.195 port 48538 May 14 00:06:14.855057 sshd-session[7010]: pam_unix(sshd:session): session closed for user core May 14 00:06:14.868174 systemd[1]: sshd@18-145.40.90.165:22-139.178.68.195:48538.service: Deactivated successfully. May 14 00:06:14.868961 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:06:14.869671 systemd-logind[1796]: Session 21 logged out. Waiting for processes to exit. May 14 00:06:14.870374 systemd[1]: Started sshd@19-145.40.90.165:22-139.178.68.195:48554.service - OpenSSH per-connection server daemon (139.178.68.195:48554). May 14 00:06:14.870857 systemd-logind[1796]: Removed session 21. May 14 00:06:14.906591 sshd[7036]: Accepted publickey for core from 139.178.68.195 port 48554 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:06:14.907264 sshd-session[7036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:06:14.910274 systemd-logind[1796]: New session 22 of user core. May 14 00:06:14.924083 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 00:06:16.098745 sshd[7040]: Connection closed by 139.178.68.195 port 48554 May 14 00:06:16.100007 sshd-session[7036]: pam_unix(sshd:session): session closed for user core May 14 00:06:16.120300 systemd[1]: sshd@19-145.40.90.165:22-139.178.68.195:48554.service: Deactivated successfully. May 14 00:06:16.122121 systemd[1]: session-22.scope: Deactivated successfully. 
May 14 00:06:16.122338 systemd[1]: session-22.scope: Consumed 444ms CPU time, 72M memory peak. May 14 00:06:16.123602 systemd-logind[1796]: Session 22 logged out. Waiting for processes to exit. May 14 00:06:16.124891 systemd[1]: Started sshd@20-145.40.90.165:22-139.178.68.195:48564.service - OpenSSH per-connection server daemon (139.178.68.195:48564). May 14 00:06:16.126020 systemd-logind[1796]: Removed session 22. May 14 00:06:16.181032 sshd[7069]: Accepted publickey for core from 139.178.68.195 port 48564 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:06:16.181681 sshd-session[7069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:06:16.184525 systemd-logind[1796]: New session 23 of user core. May 14 00:06:16.207142 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 00:06:16.414972 sshd[7074]: Connection closed by 139.178.68.195 port 48564 May 14 00:06:16.415225 sshd-session[7069]: pam_unix(sshd:session): session closed for user core May 14 00:06:16.436051 systemd[1]: sshd@20-145.40.90.165:22-139.178.68.195:48564.service: Deactivated successfully. May 14 00:06:16.440269 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:06:16.443532 systemd-logind[1796]: Session 23 logged out. Waiting for processes to exit. May 14 00:06:16.446813 systemd[1]: Started sshd@21-145.40.90.165:22-139.178.68.195:48574.service - OpenSSH per-connection server daemon (139.178.68.195:48574). May 14 00:06:16.449496 systemd-logind[1796]: Removed session 23. May 14 00:06:16.543761 sshd[7096]: Accepted publickey for core from 139.178.68.195 port 48574 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM May 14 00:06:16.544429 sshd-session[7096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:06:16.547439 systemd-logind[1796]: New session 24 of user core. May 14 00:06:16.559386 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 14 00:06:16.695694 sshd[7101]: Connection closed by 139.178.68.195 port 48574
May 14 00:06:16.696056 sshd-session[7096]: pam_unix(sshd:session): session closed for user core
May 14 00:06:16.697630 systemd[1]: sshd@21-145.40.90.165:22-139.178.68.195:48574.service: Deactivated successfully.
May 14 00:06:16.698578 systemd[1]: session-24.scope: Deactivated successfully.
May 14 00:06:16.699242 systemd-logind[1796]: Session 24 logged out. Waiting for processes to exit.
May 14 00:06:16.699742 systemd-logind[1796]: Removed session 24.
May 14 00:06:21.717243 systemd[1]: Started sshd@22-145.40.90.165:22-139.178.68.195:48586.service - OpenSSH per-connection server daemon (139.178.68.195:48586).
May 14 00:06:21.752389 sshd[7131]: Accepted publickey for core from 139.178.68.195 port 48586 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM
May 14 00:06:21.753082 sshd-session[7131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:06:21.755797 systemd-logind[1796]: New session 25 of user core.
May 14 00:06:21.769409 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 00:06:21.864101 sshd[7133]: Connection closed by 139.178.68.195 port 48586
May 14 00:06:21.864272 sshd-session[7131]: pam_unix(sshd:session): session closed for user core
May 14 00:06:21.865853 systemd[1]: sshd@22-145.40.90.165:22-139.178.68.195:48586.service: Deactivated successfully.
May 14 00:06:21.866824 systemd[1]: session-25.scope: Deactivated successfully.
May 14 00:06:21.867467 systemd-logind[1796]: Session 25 logged out. Waiting for processes to exit.
May 14 00:06:21.867967 systemd-logind[1796]: Removed session 25.
May 14 00:06:26.896207 systemd[1]: Started sshd@23-145.40.90.165:22-139.178.68.195:36598.service - OpenSSH per-connection server daemon (139.178.68.195:36598).
May 14 00:06:26.962140 sshd[7175]: Accepted publickey for core from 139.178.68.195 port 36598 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM
May 14 00:06:26.962918 sshd-session[7175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:06:26.965970 systemd-logind[1796]: New session 26 of user core.
May 14 00:06:26.979178 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 00:06:27.064845 sshd[7177]: Connection closed by 139.178.68.195 port 36598
May 14 00:06:27.065030 sshd-session[7175]: pam_unix(sshd:session): session closed for user core
May 14 00:06:27.066777 systemd[1]: sshd@23-145.40.90.165:22-139.178.68.195:36598.service: Deactivated successfully.
May 14 00:06:27.067772 systemd[1]: session-26.scope: Deactivated successfully.
May 14 00:06:27.068514 systemd-logind[1796]: Session 26 logged out. Waiting for processes to exit.
May 14 00:06:27.069309 systemd-logind[1796]: Removed session 26.
May 14 00:06:30.190688 containerd[1806]: time="2025-05-14T00:06:30.190609962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d0cf2bda9c3d438e774f81ddfb918a6c24c685f97c86902a763144a755368d\" id:\"b73e3572fa33bf5406ffeff82ee968d9842fe96c29576f914a73fd3e4bb9e8dc\" pid:7212 exited_at:{seconds:1747181190 nanos:190298058}"
May 14 00:06:32.079184 systemd[1]: Started sshd@24-145.40.90.165:22-139.178.68.195:36602.service - OpenSSH per-connection server daemon (139.178.68.195:36602).
May 14 00:06:32.153648 sshd[7230]: Accepted publickey for core from 139.178.68.195 port 36602 ssh2: RSA SHA256:lF8Scmb/9X6YhuUP1LXeMA2NPjE3qt9EXG087eSJ2EM
May 14 00:06:32.154670 sshd-session[7230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:06:32.158654 systemd-logind[1796]: New session 27 of user core.
May 14 00:06:32.174074 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 00:06:32.261571 sshd[7232]: Connection closed by 139.178.68.195 port 36602
May 14 00:06:32.261757 sshd-session[7230]: pam_unix(sshd:session): session closed for user core
May 14 00:06:32.263837 systemd[1]: sshd@24-145.40.90.165:22-139.178.68.195:36602.service: Deactivated successfully.
May 14 00:06:32.264814 systemd[1]: session-27.scope: Deactivated successfully.
May 14 00:06:32.265331 systemd-logind[1796]: Session 27 logged out. Waiting for processes to exit.
May 14 00:06:32.265841 systemd-logind[1796]: Removed session 27.