Mar 25 02:31:05.495038 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025 Mar 25 02:31:05.495053 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 02:31:05.495060 kernel: BIOS-provided physical RAM map: Mar 25 02:31:05.495065 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Mar 25 02:31:05.495068 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Mar 25 02:31:05.495072 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Mar 25 02:31:05.495077 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Mar 25 02:31:05.495081 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Mar 25 02:31:05.495085 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000062034fff] usable Mar 25 02:31:05.495089 kernel: BIOS-e820: [mem 0x0000000062035000-0x0000000062035fff] ACPI NVS Mar 25 02:31:05.495093 kernel: BIOS-e820: [mem 0x0000000062036000-0x0000000062036fff] reserved Mar 25 02:31:05.495098 kernel: BIOS-e820: [mem 0x0000000062037000-0x000000006c0c4fff] usable Mar 25 02:31:05.495103 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved Mar 25 02:31:05.495107 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable Mar 25 02:31:05.495112 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS Mar 25 02:31:05.495117 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved Mar 25 02:31:05.495122 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable Mar 25 02:31:05.495127 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved Mar 25 02:31:05.495131 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 25 02:31:05.495136 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Mar 25 02:31:05.495140 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Mar 25 02:31:05.495145 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Mar 25 02:31:05.495149 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Mar 25 02:31:05.495154 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable Mar 25 02:31:05.495159 kernel: NX (Execute Disable) protection: active Mar 25 02:31:05.495163 kernel: APIC: Static calls initialized Mar 25 02:31:05.495168 kernel: SMBIOS 3.2.1 present. 
Mar 25 02:31:05.495172 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Mar 25 02:31:05.495178 kernel: tsc: Detected 3400.000 MHz processor Mar 25 02:31:05.495183 kernel: tsc: Detected 3399.906 MHz TSC Mar 25 02:31:05.495187 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 25 02:31:05.495193 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 25 02:31:05.495197 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000 Mar 25 02:31:05.495202 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Mar 25 02:31:05.495207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 25 02:31:05.495212 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000 Mar 25 02:31:05.495216 kernel: Using GB pages for direct mapping Mar 25 02:31:05.495222 kernel: ACPI: Early table checksum verification disabled Mar 25 02:31:05.495227 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Mar 25 02:31:05.495232 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Mar 25 02:31:05.495238 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013) Mar 25 02:31:05.495243 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Mar 25 02:31:05.495248 kernel: ACPI: FACS 0x000000006D762F80 000040 Mar 25 02:31:05.495254 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013) Mar 25 02:31:05.495259 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013) Mar 25 02:31:05.495264 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Mar 25 02:31:05.495269 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Mar 25 02:31:05.495274 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Mar 25 02:31:05.495279 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Mar 25 02:31:05.495284 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Mar 25 02:31:05.495289 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Mar 25 02:31:05.495295 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Mar 25 02:31:05.495300 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Mar 25 02:31:05.495305 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Mar 25 02:31:05.495310 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Mar 25 02:31:05.495315 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Mar 25 02:31:05.495320 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Mar 25 02:31:05.495325 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Mar 25 02:31:05.495330 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Mar 25 02:31:05.495335 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Mar 25 02:31:05.495341 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Mar 25 02:31:05.495346 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Mar 25 02:31:05.495351 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Mar 25 02:31:05.495356 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Mar 25 02:31:05.495361 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Mar 25 02:31:05.495367 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xfca 01072009 AMI 00010013) Mar 25 02:31:05.495372 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Mar 25 02:31:05.495377 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Mar 25 02:31:05.495383 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Mar 25 02:31:05.495388 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Mar 25 02:31:05.495393 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Mar 25 02:31:05.495398 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733] Mar 25 02:31:05.495403 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e] Mar 25 02:31:05.495408 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf] Mar 25 02:31:05.495413 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863] Mar 25 02:31:05.495418 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab] Mar 25 02:31:05.495423 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b] Mar 25 02:31:05.495428 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b] Mar 25 02:31:05.495433 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0] Mar 25 02:31:05.495438 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3] Mar 25 02:31:05.495443 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd] Mar 25 02:31:05.495448 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea] Mar 25 02:31:05.495453 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27] Mar 25 02:31:05.495458 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5] Mar 25 02:31:05.495463 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce] Mar 25 02:31:05.495468 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311] Mar 25 02:31:05.495473 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab] Mar 25 02:31:05.495478 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d] Mar 25 02:31:05.495483 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071] Mar 25 02:31:05.495488 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab] Mar 25 02:31:05.495493 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103] Mar 25 02:31:05.495498 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e] Mar 25 02:31:05.495503 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17] Mar 25 02:31:05.495508 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b] Mar 25 02:31:05.495513 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93] Mar 25 02:31:05.495518 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26] Mar 25 02:31:05.495524 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f] Mar 25 02:31:05.495529 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f] Mar 25 02:31:05.495534 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf] Mar 25 02:31:05.495538 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf] Mar 25 02:31:05.495543 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b] Mar 25 02:31:05.495548 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1] Mar 25 02:31:05.495553 kernel: No NUMA configuration found Mar 25 02:31:05.495558 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff] Mar 25 02:31:05.495568 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff] Mar 25 02:31:05.495597 kernel: Zone ranges: Mar 25 02:31:05.495603 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 25 02:31:05.495608 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 25 
02:31:05.495628 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff] Mar 25 02:31:05.495633 kernel: Movable zone start for each node Mar 25 02:31:05.495651 kernel: Early memory node ranges Mar 25 02:31:05.495656 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Mar 25 02:31:05.495661 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Mar 25 02:31:05.495666 kernel: node 0: [mem 0x0000000040400000-0x0000000062034fff] Mar 25 02:31:05.495672 kernel: node 0: [mem 0x0000000062037000-0x000000006c0c4fff] Mar 25 02:31:05.495677 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff] Mar 25 02:31:05.495682 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff] Mar 25 02:31:05.495687 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff] Mar 25 02:31:05.495696 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff] Mar 25 02:31:05.495702 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 25 02:31:05.495707 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Mar 25 02:31:05.495713 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Mar 25 02:31:05.495719 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Mar 25 02:31:05.495724 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Mar 25 02:31:05.495729 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Mar 25 02:31:05.495735 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges Mar 25 02:31:05.495740 kernel: ACPI: PM-Timer IO Port: 0x1808 Mar 25 02:31:05.495746 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Mar 25 02:31:05.495751 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Mar 25 02:31:05.495756 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Mar 25 02:31:05.495762 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Mar 25 02:31:05.495768 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Mar 25 02:31:05.495773 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Mar 25 02:31:05.495778 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Mar 25 02:31:05.495783 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Mar 25 02:31:05.495789 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Mar 25 02:31:05.495794 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Mar 25 02:31:05.495799 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Mar 25 02:31:05.495804 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Mar 25 02:31:05.495809 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Mar 25 02:31:05.495815 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Mar 25 02:31:05.495821 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Mar 25 02:31:05.495826 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Mar 25 02:31:05.495831 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Mar 25 02:31:05.495837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 25 02:31:05.495842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 25 02:31:05.495847 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 25 02:31:05.495852 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 25 02:31:05.495858 kernel: TSC deadline timer available Mar 25 02:31:05.495863 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Mar 25 02:31:05.495869 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices Mar 25 02:31:05.495875 kernel: 
Booting paravirtualized kernel on bare hardware Mar 25 02:31:05.495880 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 25 02:31:05.495886 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Mar 25 02:31:05.495891 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Mar 25 02:31:05.495897 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Mar 25 02:31:05.495902 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Mar 25 02:31:05.495908 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 02:31:05.495914 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 25 02:31:05.495919 kernel: random: crng init done Mar 25 02:31:05.495925 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Mar 25 02:31:05.495930 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Mar 25 02:31:05.495935 kernel: Fallback order for Node 0: 0 Mar 25 02:31:05.495941 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323 Mar 25 02:31:05.495946 kernel: Policy zone: Normal Mar 25 02:31:05.495951 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 25 02:31:05.495957 kernel: software IO TLB: area num 16. Mar 25 02:31:05.495963 kernel: Memory: 32547216K/33281940K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 734464K reserved, 0K cma-reserved) Mar 25 02:31:05.495968 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Mar 25 02:31:05.495974 kernel: ftrace: allocating 37985 entries in 149 pages Mar 25 02:31:05.495979 kernel: ftrace: allocated 149 pages with 4 groups Mar 25 02:31:05.495984 kernel: Dynamic Preempt: voluntary Mar 25 02:31:05.495990 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 25 02:31:05.495995 kernel: rcu: RCU event tracing is enabled. Mar 25 02:31:05.496001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Mar 25 02:31:05.496007 kernel: Trampoline variant of Tasks RCU enabled. Mar 25 02:31:05.496012 kernel: Rude variant of Tasks RCU enabled. Mar 25 02:31:05.496018 kernel: Tracing variant of Tasks RCU enabled. Mar 25 02:31:05.496023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 25 02:31:05.496028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Mar 25 02:31:05.496033 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Mar 25 02:31:05.496039 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 25 02:31:05.496044 kernel: Console: colour VGA+ 80x25 Mar 25 02:31:05.496049 kernel: printk: console [tty0] enabled Mar 25 02:31:05.496055 kernel: printk: console [ttyS1] enabled Mar 25 02:31:05.496061 kernel: ACPI: Core revision 20230628 Mar 25 02:31:05.496067 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Mar 25 02:31:05.496072 kernel: APIC: Switch to symmetric I/O mode setup Mar 25 02:31:05.496077 kernel: DMAR: Host address width 39 Mar 25 02:31:05.496083 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Mar 25 02:31:05.496088 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Mar 25 02:31:05.496093 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Mar 25 02:31:05.496099 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Mar 25 02:31:05.496104 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Mar 25 02:31:05.496110 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Mar 25 02:31:05.496115 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Mar 25 02:31:05.496121 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Mar 25 02:31:05.496126 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Mar 25 02:31:05.496131 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Mar 25 02:31:05.496137 kernel: x2apic enabled Mar 25 02:31:05.496142 kernel: APIC: Switched APIC routing to: cluster x2apic Mar 25 02:31:05.496147 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 25 02:31:05.496153 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Mar 25 02:31:05.496159 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Mar 25 02:31:05.496164 kernel: CPU0: Thermal monitoring enabled (TM1) Mar 25 02:31:05.496170 kernel: process: using mwait in idle threads Mar 25 02:31:05.496175 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 25 02:31:05.496180 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 25 02:31:05.496186 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 25 02:31:05.496191 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Mar 25 02:31:05.496196 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Mar 25 02:31:05.496202 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Mar 25 02:31:05.496208 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 25 02:31:05.496213 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Mar 25 02:31:05.496219 kernel: RETBleed: Mitigation: Enhanced IBRS Mar 25 02:31:05.496224 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 25 02:31:05.496229 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 25 02:31:05.496235 kernel: TAA: Mitigation: TSX disabled Mar 25 02:31:05.496240 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Mar 25 02:31:05.496245 kernel: SRBDS: Mitigation: Microcode Mar 25 02:31:05.496252 kernel: GDS: Mitigation: Microcode Mar 25 02:31:05.496257 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 25 02:31:05.496262 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 25 02:31:05.496267 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 25 02:31:05.496273 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Mar 25 02:31:05.496278 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Mar 25 02:31:05.496283 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 25 02:31:05.496289 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Mar 25 02:31:05.496294 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Mar 25 02:31:05.496300 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Mar 25 02:31:05.496306 kernel: Freeing SMP alternatives memory: 32K Mar 25 02:31:05.496311 kernel: pid_max: default: 32768 minimum: 301 Mar 25 02:31:05.496316 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 25 02:31:05.496322 kernel: landlock: Up and running. Mar 25 02:31:05.496327 kernel: SELinux: Initializing. Mar 25 02:31:05.496332 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 25 02:31:05.496338 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 25 02:31:05.496343 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Mar 25 02:31:05.496349 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Mar 25 02:31:05.496355 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Mar 25 02:31:05.496360 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Mar 25 02:31:05.496366 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Mar 25 02:31:05.496371 kernel: ... 
version: 4 Mar 25 02:31:05.496376 kernel: ... bit width: 48 Mar 25 02:31:05.496382 kernel: ... generic registers: 4 Mar 25 02:31:05.496387 kernel: ... value mask: 0000ffffffffffff Mar 25 02:31:05.496392 kernel: ... max period: 00007fffffffffff Mar 25 02:31:05.496398 kernel: ... fixed-purpose events: 3 Mar 25 02:31:05.496404 kernel: ... event mask: 000000070000000f Mar 25 02:31:05.496409 kernel: signal: max sigframe size: 2032 Mar 25 02:31:05.496414 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Mar 25 02:31:05.496420 kernel: rcu: Hierarchical SRCU implementation. Mar 25 02:31:05.496425 kernel: rcu: Max phase no-delay instances is 400. Mar 25 02:31:05.496430 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Mar 25 02:31:05.496436 kernel: smp: Bringing up secondary CPUs ... Mar 25 02:31:05.496441 kernel: smpboot: x86: Booting SMP configuration: Mar 25 02:31:05.496447 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Mar 25 02:31:05.496453 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 25 02:31:05.496458 kernel: smp: Brought up 1 node, 16 CPUs Mar 25 02:31:05.496464 kernel: smpboot: Max logical packages: 1 Mar 25 02:31:05.496469 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Mar 25 02:31:05.496474 kernel: devtmpfs: initialized Mar 25 02:31:05.496480 kernel: x86/mm: Memory block size: 128MB Mar 25 02:31:05.496485 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x62035000-0x62035fff] (4096 bytes) Mar 25 02:31:05.496491 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Mar 25 02:31:05.496497 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 25 02:31:05.496502 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Mar 25 02:31:05.496508 kernel: pinctrl core: initialized pinctrl subsystem Mar 25 02:31:05.496513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 25 02:31:05.496518 kernel: audit: initializing netlink subsys (disabled) Mar 25 02:31:05.496524 kernel: audit: type=2000 audit(1742869859.127:1): state=initialized audit_enabled=0 res=1 Mar 25 02:31:05.496529 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 25 02:31:05.496534 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 25 02:31:05.496540 kernel: cpuidle: using governor menu Mar 25 02:31:05.496546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 25 02:31:05.496551 kernel: dca service started, version 1.12.1 Mar 25 02:31:05.496557 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Mar 25 02:31:05.496564 kernel: PCI: Using configuration type 1 for base access Mar 25 02:31:05.496569 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Mar 25 02:31:05.496575 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 25 02:31:05.496606 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 25 02:31:05.496612 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 25 02:31:05.496617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 25 02:31:05.496638 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 25 02:31:05.496657 kernel: ACPI: Added _OSI(Module Device) Mar 25 02:31:05.496662 kernel: ACPI: Added _OSI(Processor Device) Mar 25 02:31:05.496668 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 25 02:31:05.496673 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 25 02:31:05.496678 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Mar 25 02:31:05.496684 kernel: ACPI: Dynamic OEM Table Load: Mar 25 02:31:05.496689 kernel: ACPI: SSDT 0xFFFF8FC1420E0000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Mar 25 02:31:05.496694 kernel: ACPI: Dynamic OEM Table Load: Mar 25 02:31:05.496701 kernel: ACPI: SSDT 0xFFFF8FC1420D8800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Mar 25 02:31:05.496706 kernel: ACPI: Dynamic OEM Table Load: Mar 25 02:31:05.496711 kernel: ACPI: SSDT 0xFFFF8FC141789200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Mar 25 02:31:05.496716 kernel: ACPI: Dynamic OEM Table Load: Mar 25 02:31:05.496722 kernel: ACPI: SSDT 0xFFFF8FC1420DD000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Mar 25 02:31:05.496727 kernel: ACPI: Dynamic OEM Table Load: Mar 25 02:31:05.496732 kernel: ACPI: SSDT 0xFFFF8FC1420EB000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Mar 25 02:31:05.496737 kernel: ACPI: Dynamic OEM Table Load: Mar 25 02:31:05.496742 kernel: ACPI: SSDT 0xFFFF8FC14105C800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Mar 25 02:31:05.496749 kernel: ACPI: _OSC evaluated successfully for all CPUs Mar 25 02:31:05.496754 kernel: ACPI: Interpreter enabled Mar 25 02:31:05.496759 kernel: ACPI: PM: (supports S0 S5) Mar 25 02:31:05.496765 kernel: ACPI: Using IOAPIC for interrupt routing Mar 25 02:31:05.496770 kernel: HEST: Enabling Firmware First mode for corrected errors. Mar 25 02:31:05.496775 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Mar 25 02:31:05.496780 kernel: HEST: Table parsing has been initialized. Mar 25 02:31:05.496786 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Mar 25 02:31:05.496791 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 25 02:31:05.496797 kernel: PCI: Using E820 reservations for host bridge windows Mar 25 02:31:05.496803 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Mar 25 02:31:05.496808 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Mar 25 02:31:05.496814 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Mar 25 02:31:05.496819 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Mar 25 02:31:05.496824 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Mar 25 02:31:05.496830 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Mar 25 02:31:05.496835 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Mar 25 02:31:05.496840 kernel: ACPI: \_TZ_.FN00: New power resource Mar 25 02:31:05.496846 kernel: ACPI: \_TZ_.FN01: New power resource Mar 25 02:31:05.496852 kernel: ACPI: \_TZ_.FN02: New power resource Mar 25 02:31:05.496857 kernel: ACPI: \_TZ_.FN03: New power resource Mar 25 02:31:05.496862 kernel: ACPI: \_TZ_.FN04: New power resource Mar 25 02:31:05.496867 kernel: ACPI: \PIN_: New power resource Mar 25 02:31:05.496873 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Mar 25 02:31:05.496946 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 25 02:31:05.496998 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Mar 25 02:31:05.497048 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Mar 25 02:31:05.497056 kernel: PCI host bridge to bus 0000:00 Mar 25 02:31:05.497108 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 25 02:31:05.497152 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 25 02:31:05.497195 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 25 02:31:05.497237 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Mar 25 02:31:05.497280 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Mar 25 02:31:05.497324 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Mar 25 02:31:05.497383 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Mar 25 02:31:05.497443 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Mar 25 02:31:05.497495 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.497549 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Mar 25 02:31:05.497659 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.497716 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Mar 25 02:31:05.497765 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit] Mar 25 02:31:05.497813 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Mar 25 02:31:05.497861 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Mar 25 02:31:05.497914 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Mar 25 02:31:05.497962 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit] Mar 25 02:31:05.498015 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Mar 25 02:31:05.498066 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit] Mar 25 02:31:05.498119 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Mar 25 02:31:05.498168 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit] Mar 25 02:31:05.498216 kernel: pci 0000:00:14.0: PME# 
supported from D3hot D3cold Mar 25 02:31:05.498271 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Mar 25 02:31:05.498327 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit] Mar 25 02:31:05.498379 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit] Mar 25 02:31:05.498432 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Mar 25 02:31:05.498480 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Mar 25 02:31:05.498533 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Mar 25 02:31:05.498610 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Mar 25 02:31:05.498694 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Mar 25 02:31:05.498745 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit] Mar 25 02:31:05.498794 kernel: pci 0000:00:16.0: PME# supported from D3hot Mar 25 02:31:05.498845 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Mar 25 02:31:05.498894 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit] Mar 25 02:31:05.498943 kernel: pci 0000:00:16.1: PME# supported from D3hot Mar 25 02:31:05.498994 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Mar 25 02:31:05.499044 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit] Mar 25 02:31:05.499094 kernel: pci 0000:00:16.4: PME# supported from D3hot Mar 25 02:31:05.499149 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Mar 25 02:31:05.499197 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff] Mar 25 02:31:05.499246 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff] Mar 25 02:31:05.499298 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Mar 25 02:31:05.499345 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Mar 25 02:31:05.499394 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Mar 25 02:31:05.499442 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff] Mar 25 02:31:05.499505 kernel: pci 0000:00:17.0: PME# supported from D3hot Mar 25 02:31:05.499563 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Mar 25 02:31:05.499655 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.499709 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Mar 25 02:31:05.499760 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.499813 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Mar 25 02:31:05.499863 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.499917 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Mar 25 02:31:05.499969 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.500023 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Mar 25 02:31:05.500073 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.500128 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Mar 25 02:31:05.500178 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Mar 25 02:31:05.500230 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Mar 25 02:31:05.500284 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Mar 25 02:31:05.500335 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit] Mar 25 02:31:05.500385 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Mar 25 02:31:05.500440 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Mar 25 02:31:05.500490 kernel: pci 
0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Mar 25 02:31:05.500540 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 25 02:31:05.500601 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Mar 25 02:31:05.500656 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Mar 25 02:31:05.500706 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref] Mar 25 02:31:05.500758 kernel: pci 0000:02:00.0: PME# supported from D3cold Mar 25 02:31:05.500808 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Mar 25 02:31:05.500859 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Mar 25 02:31:05.500917 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Mar 25 02:31:05.500967 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Mar 25 02:31:05.501021 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref] Mar 25 02:31:05.501071 kernel: pci 0000:02:00.1: PME# supported from D3cold Mar 25 02:31:05.501121 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Mar 25 02:31:05.501172 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Mar 25 02:31:05.501222 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Mar 25 02:31:05.501272 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Mar 25 02:31:05.501321 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Mar 25 02:31:05.501371 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Mar 25 02:31:05.501428 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Mar 25 02:31:05.501480 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Mar 25 02:31:05.501531 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff] Mar 25 02:31:05.501586 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Mar 25 02:31:05.501677 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff] Mar 25 02:31:05.501727 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.501778 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Mar 25 02:31:05.501831 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Mar 25 02:31:05.501881 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Mar 25 02:31:05.501939 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Mar 25 02:31:05.501990 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Mar 25 02:31:05.502041 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Mar 25 02:31:05.502091 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Mar 25 02:31:05.502142 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Mar 25 02:31:05.502196 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Mar 25 02:31:05.502247 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Mar 25 02:31:05.502297 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Mar 25 02:31:05.502346 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Mar 25 02:31:05.502397 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Mar 25 02:31:05.502452 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Mar 25 02:31:05.502504 kernel: pci 0000:07:00.0: enabling Extended Tags Mar 25 02:31:05.502555 kernel: pci 0000:07:00.0: supports D1 D2 Mar 25 02:31:05.502644 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 25 02:31:05.502695 kernel: pci 
0000:00:1c.1: PCI bridge to [bus 07-08] Mar 25 02:31:05.502744 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Mar 25 02:31:05.502794 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Mar 25 02:31:05.502849 kernel: pci_bus 0000:08: extended config space not accessible Mar 25 02:31:05.502909 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Mar 25 02:31:05.502963 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Mar 25 02:31:05.503018 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Mar 25 02:31:05.503074 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Mar 25 02:31:05.503127 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 25 02:31:05.503179 kernel: pci 0000:08:00.0: supports D1 D2 Mar 25 02:31:05.503232 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 25 02:31:05.503283 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Mar 25 02:31:05.503335 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Mar 25 02:31:05.503389 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Mar 25 02:31:05.503398 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Mar 25 02:31:05.503404 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Mar 25 02:31:05.503410 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Mar 25 02:31:05.503416 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Mar 25 02:31:05.503421 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Mar 25 02:31:05.503427 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Mar 25 02:31:05.503433 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Mar 25 02:31:05.503439 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Mar 25 02:31:05.503446 kernel: iommu: Default domain type: Translated Mar 25 02:31:05.503452 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 25 02:31:05.503458 kernel: PCI: Using ACPI for IRQ routing Mar 25 02:31:05.503463 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 25 02:31:05.503469 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Mar 25 02:31:05.503475 kernel: e820: reserve RAM buffer [mem 0x62035000-0x63ffffff] Mar 25 02:31:05.503480 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Mar 25 02:31:05.503486 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Mar 25 02:31:05.503491 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Mar 25 02:31:05.503543 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Mar 25 02:31:05.503638 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Mar 25 02:31:05.503692 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 25 02:31:05.503701 kernel: vgaarb: loaded Mar 25 02:31:05.503707 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Mar 25 02:31:05.503713 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Mar 25 02:31:05.503719 kernel: clocksource: Switched to clocksource tsc-early Mar 25 02:31:05.503724 kernel: VFS: Disk quotas dquot_6.6.0 Mar 25 02:31:05.503730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 25 02:31:05.503738 kernel: pnp: PnP ACPI init Mar 25 02:31:05.503790 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Mar 25 02:31:05.503844 kernel: pnp 00:02: [dma 0 disabled] Mar 25 02:31:05.503892 kernel: pnp 00:03: [dma 0 disabled] Mar 25 02:31:05.503940 kernel: system 00:04: [io 
0x0680-0x069f] has been reserved Mar 25 02:31:05.503986 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Mar 25 02:31:05.504036 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Mar 25 02:31:05.504088 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Mar 25 02:31:05.504134 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Mar 25 02:31:05.504180 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Mar 25 02:31:05.504224 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Mar 25 02:31:05.504270 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Mar 25 02:31:05.504315 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Mar 25 02:31:05.504362 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Mar 25 02:31:05.504407 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Mar 25 02:31:05.504455 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Mar 25 02:31:05.504501 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Mar 25 02:31:05.504545 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Mar 25 02:31:05.504595 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Mar 25 02:31:05.504681 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Mar 25 02:31:05.504729 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Mar 25 02:31:05.504774 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Mar 25 02:31:05.504824 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Mar 25 02:31:05.504833 kernel: pnp: PnP ACPI: found 10 devices Mar 25 02:31:05.504839 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 25 02:31:05.504845 kernel: NET: Registered PF_INET protocol family Mar 25 02:31:05.504851 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 25 02:31:05.504857 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Mar 25 02:31:05.504864 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 25 02:31:05.504870 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 25 02:31:05.504876 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 25 02:31:05.504882 kernel: TCP: Hash tables configured (established 262144 bind 65536) Mar 25 02:31:05.504888 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 25 02:31:05.504894 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 25 02:31:05.504899 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 25 02:31:05.504905 kernel: NET: Registered PF_XDP protocol family Mar 25 02:31:05.504956 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Mar 25 02:31:05.505006 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Mar 25 02:31:05.505057 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Mar 25 02:31:05.505106 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Mar 25 02:31:05.505159 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Mar 25 02:31:05.505213 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Mar 25 02:31:05.505266 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Mar 25 
02:31:05.505319 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Mar 25 02:31:05.505368 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Mar 25 02:31:05.505418 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Mar 25 02:31:05.505468 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Mar 25 02:31:05.505518 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Mar 25 02:31:05.505571 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Mar 25 02:31:05.505623 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Mar 25 02:31:05.505673 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Mar 25 02:31:05.505722 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Mar 25 02:31:05.505772 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Mar 25 02:31:05.505822 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Mar 25 02:31:05.505873 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Mar 25 02:31:05.505924 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Mar 25 02:31:05.505975 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Mar 25 02:31:05.506026 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Mar 25 02:31:05.506079 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Mar 25 02:31:05.506130 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Mar 25 02:31:05.506179 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Mar 25 02:31:05.506225 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Mar 25 02:31:05.506269 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 25 02:31:05.506313 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 25 02:31:05.506357 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 25 02:31:05.506401 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Mar 25 02:31:05.506448 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Mar 25 02:31:05.506497 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Mar 25 02:31:05.506544 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Mar 25 02:31:05.506618 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Mar 25 02:31:05.506680 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Mar 25 02:31:05.506728 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Mar 25 02:31:05.506777 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Mar 25 02:31:05.506827 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Mar 25 02:31:05.506872 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Mar 25 02:31:05.506923 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Mar 25 02:31:05.506970 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Mar 25 02:31:05.506978 kernel: PCI: CLS 64 bytes, default 64 Mar 25 02:31:05.506984 kernel: DMAR: No ATSR found Mar 25 02:31:05.506990 kernel: DMAR: No SATC found Mar 25 02:31:05.506997 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Mar 25 02:31:05.507003 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Mar 25 02:31:05.507009 kernel: DMAR: IOMMU feature nwfs inconsistent Mar 25 02:31:05.507015 kernel: DMAR: IOMMU feature pasid inconsistent Mar 25 02:31:05.507021 kernel: DMAR: IOMMU feature eafs inconsistent Mar 25 02:31:05.507026 kernel: DMAR: IOMMU feature prs inconsistent Mar 25 02:31:05.507032 kernel: DMAR: IOMMU feature nest 
inconsistent Mar 25 02:31:05.507038 kernel: DMAR: IOMMU feature mts inconsistent Mar 25 02:31:05.507044 kernel: DMAR: IOMMU feature sc_support inconsistent Mar 25 02:31:05.507050 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Mar 25 02:31:05.507056 kernel: DMAR: dmar0: Using Queued invalidation Mar 25 02:31:05.507062 kernel: DMAR: dmar1: Using Queued invalidation Mar 25 02:31:05.507113 kernel: pci 0000:00:02.0: Adding to iommu group 0 Mar 25 02:31:05.507165 kernel: pci 0000:00:00.0: Adding to iommu group 1 Mar 25 02:31:05.507216 kernel: pci 0000:00:01.0: Adding to iommu group 2 Mar 25 02:31:05.507266 kernel: pci 0000:00:01.1: Adding to iommu group 2 Mar 25 02:31:05.507316 kernel: pci 0000:00:08.0: Adding to iommu group 3 Mar 25 02:31:05.507369 kernel: pci 0000:00:12.0: Adding to iommu group 4 Mar 25 02:31:05.507418 kernel: pci 0000:00:14.0: Adding to iommu group 5 Mar 25 02:31:05.507470 kernel: pci 0000:00:14.2: Adding to iommu group 5 Mar 25 02:31:05.507518 kernel: pci 0000:00:15.0: Adding to iommu group 6 Mar 25 02:31:05.507572 kernel: pci 0000:00:15.1: Adding to iommu group 6 Mar 25 02:31:05.507658 kernel: pci 0000:00:16.0: Adding to iommu group 7 Mar 25 02:31:05.507708 kernel: pci 0000:00:16.1: Adding to iommu group 7 Mar 25 02:31:05.507758 kernel: pci 0000:00:16.4: Adding to iommu group 7 Mar 25 02:31:05.507807 kernel: pci 0000:00:17.0: Adding to iommu group 8 Mar 25 02:31:05.507860 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Mar 25 02:31:05.507909 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Mar 25 02:31:05.507960 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Mar 25 02:31:05.508009 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Mar 25 02:31:05.508060 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Mar 25 02:31:05.508109 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Mar 25 02:31:05.508158 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Mar 25 02:31:05.508208 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Mar 25 02:31:05.508260 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Mar 25 02:31:05.508311 kernel: pci 0000:02:00.0: Adding to iommu group 2 Mar 25 02:31:05.508362 kernel: pci 0000:02:00.1: Adding to iommu group 2 Mar 25 02:31:05.508413 kernel: pci 0000:04:00.0: Adding to iommu group 16 Mar 25 02:31:05.508464 kernel: pci 0000:05:00.0: Adding to iommu group 17 Mar 25 02:31:05.508514 kernel: pci 0000:07:00.0: Adding to iommu group 18 Mar 25 02:31:05.508570 kernel: pci 0000:08:00.0: Adding to iommu group 18 Mar 25 02:31:05.508579 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Mar 25 02:31:05.508587 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 25 02:31:05.508593 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Mar 25 02:31:05.508599 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Mar 25 02:31:05.508605 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Mar 25 02:31:05.508610 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Mar 25 02:31:05.508616 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Mar 25 02:31:05.508622 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Mar 25 02:31:05.508676 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Mar 25 02:31:05.508687 kernel: Initialise system trusted keyrings Mar 25 02:31:05.508692 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Mar 25 02:31:05.508698 kernel: Key type asymmetric registered Mar 25 
02:31:05.508704 kernel: Asymmetric key parser 'x509' registered Mar 25 02:31:05.508709 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 25 02:31:05.508715 kernel: io scheduler mq-deadline registered Mar 25 02:31:05.508721 kernel: io scheduler kyber registered Mar 25 02:31:05.508727 kernel: io scheduler bfq registered Mar 25 02:31:05.508777 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Mar 25 02:31:05.508830 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Mar 25 02:31:05.508879 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Mar 25 02:31:05.508930 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Mar 25 02:31:05.508980 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Mar 25 02:31:05.509029 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Mar 25 02:31:05.509078 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Mar 25 02:31:05.509134 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Mar 25 02:31:05.509145 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Mar 25 02:31:05.509151 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Mar 25 02:31:05.509157 kernel: pstore: Using crash dump compression: deflate Mar 25 02:31:05.509163 kernel: pstore: Registered erst as persistent store backend Mar 25 02:31:05.509169 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 25 02:31:05.509175 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 25 02:31:05.509181 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 25 02:31:05.509187 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 25 02:31:05.509236 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Mar 25 02:31:05.509246 kernel: i8042: PNP: No PS/2 controller found. Mar 25 02:31:05.509292 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Mar 25 02:31:05.509340 kernel: rtc_cmos rtc_cmos: registered as rtc0 Mar 25 02:31:05.509385 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-03-25T02:31:04 UTC (1742869864) Mar 25 02:31:05.509431 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Mar 25 02:31:05.509440 kernel: intel_pstate: Intel P-state driver initializing Mar 25 02:31:05.509446 kernel: intel_pstate: Disabling energy efficiency optimization Mar 25 02:31:05.509453 kernel: intel_pstate: HWP enabled Mar 25 02:31:05.509459 kernel: NET: Registered PF_INET6 protocol family Mar 25 02:31:05.509465 kernel: Segment Routing with IPv6 Mar 25 02:31:05.509471 kernel: In-situ OAM (IOAM) with IPv6 Mar 25 02:31:05.509476 kernel: NET: Registered PF_PACKET protocol family Mar 25 02:31:05.509482 kernel: Key type dns_resolver registered Mar 25 02:31:05.509488 kernel: microcode: Current revision: 0x00000102 Mar 25 02:31:05.509494 kernel: microcode: Updated early from: 0x000000de Mar 25 02:31:05.509499 kernel: microcode: Microcode Update Driver: v2.2. 
Mar 25 02:31:05.509506 kernel: IPI shorthand broadcast: enabled Mar 25 02:31:05.509512 kernel: sched_clock: Marking stable (2736019077, 1443555148)->(4685789598, -506215373) Mar 25 02:31:05.509518 kernel: registered taskstats version 1 Mar 25 02:31:05.509524 kernel: Loading compiled-in X.509 certificates Mar 25 02:31:05.509529 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386' Mar 25 02:31:05.509535 kernel: Key type .fscrypt registered Mar 25 02:31:05.509541 kernel: Key type fscrypt-provisioning registered Mar 25 02:31:05.509546 kernel: ima: Allocated hash algorithm: sha1 Mar 25 02:31:05.509552 kernel: ima: No architecture policies found Mar 25 02:31:05.509561 kernel: clk: Disabling unused clocks Mar 25 02:31:05.509567 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 25 02:31:05.509573 kernel: Write protecting the kernel read-only data: 40960k Mar 25 02:31:05.509579 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 25 02:31:05.509584 kernel: Run /init as init process Mar 25 02:31:05.509590 kernel: with arguments: Mar 25 02:31:05.509599 kernel: /init Mar 25 02:31:05.509606 kernel: with environment: Mar 25 02:31:05.509637 kernel: HOME=/ Mar 25 02:31:05.509659 kernel: TERM=linux Mar 25 02:31:05.509666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 25 02:31:05.509674 systemd[1]: Successfully made /usr/ read-only. Mar 25 02:31:05.509682 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 02:31:05.509722 systemd[1]: Detected architecture x86-64. Mar 25 02:31:05.509731 systemd[1]: Running in initrd. Mar 25 02:31:05.509740 systemd[1]: No hostname configured, using default hostname. Mar 25 02:31:05.509766 systemd[1]: Hostname set to . Mar 25 02:31:05.509796 systemd[1]: Initializing machine ID from random generator. Mar 25 02:31:05.509817 systemd[1]: Queued start job for default target initrd.target. Mar 25 02:31:05.509823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 02:31:05.509857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 02:31:05.509880 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 25 02:31:05.509906 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 02:31:05.509914 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 25 02:31:05.509945 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 25 02:31:05.509956 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 25 02:31:05.509962 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 25 02:31:05.509969 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 02:31:05.509989 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 02:31:05.509995 systemd[1]: Reached target paths.target - Path Units. 
Mar 25 02:31:05.510001 systemd[1]: Reached target slices.target - Slice Units. Mar 25 02:31:05.510009 systemd[1]: Reached target swap.target - Swaps. Mar 25 02:31:05.510015 systemd[1]: Reached target timers.target - Timer Units. Mar 25 02:31:05.510021 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 02:31:05.510027 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 02:31:05.510033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 25 02:31:05.510039 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 25 02:31:05.510045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 02:31:05.510051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 02:31:05.510057 kernel: tsc: Refined TSC clocksource calibration: 3408.002 MHz Mar 25 02:31:05.510064 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd628962, max_idle_ns: 440795345332 ns Mar 25 02:31:05.510070 kernel: clocksource: Switched to clocksource tsc Mar 25 02:31:05.510076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 02:31:05.510082 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 02:31:05.510088 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 25 02:31:05.510094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 02:31:05.510100 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 25 02:31:05.510106 systemd[1]: Starting systemd-fsck-usr.service... Mar 25 02:31:05.510112 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 02:31:05.510130 systemd-journald[268]: Collecting audit messages is disabled. Mar 25 02:31:05.510146 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 02:31:05.510153 systemd-journald[268]: Journal started Mar 25 02:31:05.510168 systemd-journald[268]: Runtime Journal (/run/log/journal/aa4b3e85408e464e99541923a42e364f) is 8M, max 636.5M, 628.5M free. Mar 25 02:31:05.511551 systemd-modules-load[271]: Inserted module 'overlay' Mar 25 02:31:05.528664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 02:31:05.550612 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 25 02:31:05.556615 kernel: Bridge firewalling registered Mar 25 02:31:05.556648 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 02:31:05.563470 systemd-modules-load[271]: Inserted module 'br_netfilter' Mar 25 02:31:05.575085 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 25 02:31:05.584101 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 02:31:05.584387 systemd[1]: Finished systemd-fsck-usr.service. Mar 25 02:31:05.584476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 02:31:05.585521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 02:31:05.585956 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 25 02:31:05.586298 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 02:31:05.619888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 25 02:31:05.713406 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 02:31:05.732929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 02:31:05.753211 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 02:31:05.782052 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 02:31:05.800449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 02:31:05.818154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 02:31:05.843812 systemd-resolved[296]: Positive Trust Anchors: Mar 25 02:31:05.843819 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 02:31:05.843860 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 02:31:05.846594 systemd-resolved[296]: Defaulting to hostname 'linux'. Mar 25 02:31:05.847434 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 02:31:05.851955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 02:31:05.856279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 02:31:05.867937 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 02:31:05.879483 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 25 02:31:05.945818 dracut-cmdline[311]: dracut-dracut-053 Mar 25 02:31:05.952806 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 02:31:06.134594 kernel: SCSI subsystem initialized Mar 25 02:31:06.145611 kernel: Loading iSCSI transport class v2.0-870. Mar 25 02:31:06.157634 kernel: iscsi: registered transport (tcp) Mar 25 02:31:06.178593 kernel: iscsi: registered transport (qla4xxx) Mar 25 02:31:06.178610 kernel: QLogic iSCSI HBA Driver Mar 25 02:31:06.201558 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 25 02:31:06.202463 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 25 02:31:06.280817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 25 02:31:06.280845 kernel: device-mapper: uevent: version 1.0.3 Mar 25 02:31:06.289629 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 25 02:31:06.325616 kernel: raid6: avx2x4 gen() 46886 MB/s Mar 25 02:31:06.346600 kernel: raid6: avx2x2 gen() 53806 MB/s Mar 25 02:31:06.372692 kernel: raid6: avx2x1 gen() 45229 MB/s Mar 25 02:31:06.372708 kernel: raid6: using algorithm avx2x2 gen() 53806 MB/s Mar 25 02:31:06.399793 kernel: raid6: .... xor() 32426 MB/s, rmw enabled Mar 25 02:31:06.399811 kernel: raid6: using avx2x2 recovery algorithm Mar 25 02:31:06.420594 kernel: xor: automatically using best checksumming function avx Mar 25 02:31:06.518601 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 25 02:31:06.524328 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 25 02:31:06.535762 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 02:31:06.578329 systemd-udevd[498]: Using default interface naming scheme 'v255'. Mar 25 02:31:06.581167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 02:31:06.600267 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 25 02:31:06.645750 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Mar 25 02:31:06.663197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 02:31:06.675827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 02:31:06.803340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 02:31:06.830044 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 25 02:31:06.830081 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 25 02:31:06.836564 kernel: cryptd: max_cpu_qlen set to 1000 Mar 25 02:31:06.848599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 25 02:31:07.087617 kernel: ACPI: bus type USB registered Mar 25 02:31:07.087634 kernel: usbcore: registered new interface driver usbfs Mar 25 02:31:07.087642 kernel: usbcore: registered new interface driver hub Mar 25 02:31:07.087649 kernel: usbcore: registered new device driver usb Mar 25 02:31:07.087657 kernel: PTP clock support registered Mar 25 02:31:07.087667 kernel: libata version 3.00 loaded. Mar 25 02:31:07.087675 kernel: ahci 0000:00:17.0: version 3.0 Mar 25 02:31:07.087764 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 25 02:31:07.087773 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Mar 25 02:31:07.087838 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Mar 25 02:31:07.087966 kernel: AES CTR mode by8 optimization enabled Mar 25 02:31:07.087976 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Mar 25 02:31:07.088093 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Mar 25 02:31:07.088157 kernel: scsi host0: ahci Mar 25 02:31:07.088219 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Mar 25 02:31:07.088280 kernel: scsi host1: ahci Mar 25 02:31:07.088340 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Mar 25 02:31:07.088400 kernel: scsi host2: ahci Mar 25 02:31:07.088458 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Mar 25 02:31:07.088519 kernel: scsi host3: ahci Mar 25 02:31:07.088608 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Mar 25 02:31:07.088684 kernel: scsi host4: ahci Mar 25 02:31:07.088745 kernel: hub 1-0:1.0: USB hub found Mar 25 02:31:07.088815 kernel: scsi host5: ahci Mar 25 02:31:07.088874 kernel: hub 1-0:1.0: 16 ports detected Mar 25 02:31:07.088938 kernel: scsi host6: ahci Mar 25 02:31:07.088998 kernel: hub 2-0:1.0: USB hub found Mar 25 02:31:07.089066 kernel: scsi host7: ahci Mar 25 02:31:07.089127 kernel: hub 2-0:1.0: 10 ports detected Mar 25 02:31:07.089190 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Mar 25 02:31:07.089198 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Mar 25 02:31:07.089206 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Mar 25 02:31:07.089213 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Mar 25 02:31:07.089221 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Mar 25 02:31:07.089229 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Mar 25 02:31:07.089236 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Mar 25 02:31:07.089243 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Mar 25 02:31:07.089250 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Mar 25 02:31:07.089256 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Mar 25 02:31:07.089263 kernel: igb 0000:04:00.0: added PHC on eth0 Mar 25 02:31:07.100863 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Mar 25 02:31:07.100930 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:4c Mar 25 02:31:07.100996 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Mar 25 02:31:07.101060 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Mar 25 02:31:06.873971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 02:31:06.874079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 25 02:31:07.185644 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Mar 25 02:31:07.664006 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Mar 25 02:31:07.664089 kernel: igb 0000:05:00.0: added PHC on eth1 Mar 25 02:31:07.664159 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Mar 25 02:31:07.664226 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:4d Mar 25 02:31:07.664290 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Mar 25 02:31:07.664355 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Mar 25 02:31:07.664423 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Mar 25 02:31:07.797991 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Mar 25 02:31:07.798003 kernel: hub 1-14:1.0: USB hub found Mar 25 02:31:07.798092 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 25 02:31:07.798101 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Mar 25 02:31:07.798176 kernel: hub 1-14:1.0: 4 ports detected Mar 25 02:31:07.798247 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 25 02:31:07.798259 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 25 02:31:07.798267 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Mar 25 02:31:07.798274 kernel: ata7: SATA link down (SStatus 0 SControl 300) Mar 25 02:31:07.798281 kernel: ata8: SATA link down (SStatus 0 SControl 300) Mar 25 02:31:07.798288 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 25 02:31:07.798295 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Mar 25 02:31:07.798303 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Mar 25 02:31:07.798310 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Mar 25 02:31:07.798317 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Mar 25 02:31:07.798326 kernel: ata2.00: Features: NCQ-prio Mar 25 02:31:07.798333 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Mar 25 02:31:07.798401 kernel: ata1.00: Features: NCQ-prio Mar 25 02:31:07.798409 kernel: ata2.00: configured for UDMA/133 Mar 25 02:31:07.798417 kernel: ata1.00: configured for UDMA/133 Mar 25 02:31:07.798424 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Mar 25 02:31:07.798572 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Mar 25 02:31:07.798638 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Mar 25 02:31:07.798709 kernel: ata1.00: Enabling discard_zeroes_data Mar 25 02:31:07.798717 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Mar 25 02:31:07.798788 kernel: ata2.00: Enabling discard_zeroes_data Mar 25 02:31:07.798796 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Mar 25 02:31:07.798860 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Mar 25 02:31:07.798921 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Mar 25 02:31:07.798981 kernel: sd 1:0:0:0: [sdb] Write Protect is off Mar 25 02:31:07.799040 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Mar 25 02:31:07.799101 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 25 02:31:07.799162 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Mar 25 02:31:07.799221 kernel: ata2.00: Enabling discard_zeroes_data Mar 25 02:31:07.799229 kernel: GPT:Primary header 
thinks Alt. header is not at the end of the disk. Mar 25 02:31:07.799237 kernel: GPT:9289727 != 937703087 Mar 25 02:31:07.799244 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 25 02:31:07.799251 kernel: GPT:9289727 != 937703087 Mar 25 02:31:07.799258 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 25 02:31:07.799266 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Mar 25 02:31:07.799274 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Mar 25 02:31:07.799333 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 25 02:31:07.799392 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 25 02:31:07.799451 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Mar 25 02:31:07.799524 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 25 02:31:07.799585 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Mar 25 02:31:07.799643 kernel: ata1.00: Enabling discard_zeroes_data Mar 25 02:31:07.799652 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 25 02:31:07.799716 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 25 02:31:07.799775 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Mar 25 02:31:08.206522 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Mar 25 02:31:08.207006 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Mar 25 02:31:08.207632 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by (udev-worker) (544) Mar 25 02:31:08.207680 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/sdb3 scanned by (udev-worker) (701) Mar 25 02:31:08.207735 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 25 02:31:08.207774 kernel: usbcore: registered new interface driver usbhid Mar 25 02:31:08.207810 kernel: usbhid: USB HID core driver Mar 25 02:31:08.207846 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Mar 25 02:31:08.207884 kernel: ata2.00: Enabling discard_zeroes_data Mar 25 02:31:08.207920 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Mar 25 02:31:08.207956 kernel: ata2.00: Enabling discard_zeroes_data Mar 25 02:31:08.207992 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Mar 25 02:31:08.208027 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Mar 25 02:31:08.208455 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Mar 25 02:31:08.208500 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Mar 25 02:31:08.208880 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Mar 25 02:31:08.209250 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Mar 25 02:31:08.209615 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 25 02:31:07.136707 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 02:31:08.228881 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Mar 25 02:31:07.174642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 02:31:07.174746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 25 02:31:08.253675 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Mar 25 02:31:07.185856 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 02:31:07.207147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 02:31:07.216869 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 02:31:07.233803 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 25 02:31:07.244105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 02:31:08.307773 disk-uuid[721]: Primary Header is updated. Mar 25 02:31:08.307773 disk-uuid[721]: Secondary Entries is updated. Mar 25 02:31:08.307773 disk-uuid[721]: Secondary Header is updated. Mar 25 02:31:07.244184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 02:31:07.244290 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 02:31:07.244811 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 25 02:31:07.285753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 02:31:07.296785 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 25 02:31:07.307147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 02:31:07.339404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 02:31:07.721148 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Mar 25 02:31:07.739343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Mar 25 02:31:07.757504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Mar 25 02:31:07.774310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Mar 25 02:31:07.785644 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Mar 25 02:31:07.797117 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 25 02:31:08.868040 kernel: ata2.00: Enabling discard_zeroes_data Mar 25 02:31:08.876551 disk-uuid[722]: The operation has completed successfully. Mar 25 02:31:08.884647 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Mar 25 02:31:08.911669 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 25 02:31:08.911719 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 25 02:31:08.963017 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 25 02:31:08.996732 sh[752]: Success Mar 25 02:31:09.011610 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 25 02:31:09.058812 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 25 02:31:09.070135 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 25 02:31:09.102957 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 25 02:31:09.163651 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2 Mar 25 02:31:09.163667 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:31:09.163675 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 25 02:31:09.163682 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 25 02:31:09.163689 kernel: BTRFS info (device dm-0): using free space tree Mar 25 02:31:09.171603 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 25 02:31:09.173686 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 25 02:31:09.182009 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 25 02:31:09.182442 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 25 02:31:09.215443 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 25 02:31:09.272872 kernel: BTRFS info (device sdb6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:31:09.272907 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:31:09.278787 kernel: BTRFS info (device sdb6): using free space tree Mar 25 02:31:09.292804 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 02:31:09.327702 kernel: BTRFS info (device sdb6): enabling ssd optimizations Mar 25 02:31:09.327715 kernel: BTRFS info (device sdb6): auto enabling async discard Mar 25 02:31:09.327722 kernel: BTRFS info (device sdb6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:31:09.314947 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 25 02:31:09.338690 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 25 02:31:09.347589 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 02:31:09.394899 systemd-networkd[932]: lo: Link UP Mar 25 02:31:09.394902 systemd-networkd[932]: lo: Gained carrier Mar 25 02:31:09.397517 systemd-networkd[932]: Enumeration completed Mar 25 02:31:09.397615 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 02:31:09.398285 systemd-networkd[932]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 02:31:09.401757 systemd[1]: Reached target network.target - Network. Mar 25 02:31:09.444399 ignition[931]: Ignition 2.20.0 Mar 25 02:31:09.427116 systemd-networkd[932]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 02:31:09.444403 ignition[931]: Stage: fetch-offline Mar 25 02:31:09.446771 unknown[931]: fetched base config from "system" Mar 25 02:31:09.444425 ignition[931]: no configs at "/usr/lib/ignition/base.d" Mar 25 02:31:09.446776 unknown[931]: fetched user config from "system" Mar 25 02:31:09.444430 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Mar 25 02:31:09.447700 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 02:31:09.444485 ignition[931]: parsed url from cmdline: "" Mar 25 02:31:09.452157 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Mar 25 02:31:09.444487 ignition[931]: no config URL provided Mar 25 02:31:09.452737 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 25 02:31:09.444490 ignition[931]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 02:31:09.454502 systemd-networkd[932]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 02:31:09.444512 ignition[931]: parsing config with SHA512: 52b5a462b2f8a3eccecd7ecb3e3c5dd6f49c773d771ae7780f12ea15153b85bb5754aebcb90ca3383ef7f84cc83d7d6fb4bcd01363fe48582d3c2b90f2893dd1 Mar 25 02:31:09.446980 ignition[931]: fetch-offline: fetch-offline passed Mar 25 02:31:09.446983 ignition[931]: POST message to Packet Timeline Mar 25 02:31:09.446985 ignition[931]: POST Status error: resource requires networking Mar 25 02:31:09.634766 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Mar 25 02:31:09.626109 systemd-networkd[932]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 02:31:09.447025 ignition[931]: Ignition finished successfully Mar 25 02:31:09.493376 ignition[945]: Ignition 2.20.0 Mar 25 02:31:09.493380 ignition[945]: Stage: kargs Mar 25 02:31:09.493473 ignition[945]: no configs at "/usr/lib/ignition/base.d" Mar 25 02:31:09.493479 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Mar 25 02:31:09.494036 ignition[945]: kargs: kargs passed Mar 25 02:31:09.494039 ignition[945]: POST message to Packet Timeline Mar 25 02:31:09.494050 ignition[945]: GET https://metadata.packet.net/metadata: attempt #1 Mar 25 02:31:09.494425 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45431->[::1]:53: read: connection refused Mar 25 02:31:09.695075 ignition[945]: GET https://metadata.packet.net/metadata: attempt #2 Mar 25 02:31:09.696400 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53334->[::1]:53: read: connection refused Mar 25 02:31:09.845637 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Mar 25 02:31:09.848014 systemd-networkd[932]: eno1: Link UP Mar 25 02:31:09.848410 systemd-networkd[932]: eno2: Link UP Mar 25 02:31:09.848787 systemd-networkd[932]: enp2s0f0np0: Link UP Mar 25 02:31:09.849242 systemd-networkd[932]: enp2s0f0np0: Gained carrier Mar 25 02:31:09.861085 systemd-networkd[932]: enp2s0f1np1: Link UP Mar 25 02:31:09.896754 systemd-networkd[932]: enp2s0f0np0: DHCPv4 address 86.109.11.215/31, gateway 86.109.11.214 acquired from 145.40.83.140 Mar 25 02:31:10.096656 ignition[945]: GET https://metadata.packet.net/metadata: attempt #3 Mar 25 02:31:10.097835 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34914->[::1]:53: read: connection refused Mar 25 02:31:10.631234 systemd-networkd[932]: enp2s0f1np1: Gained carrier Mar 25 02:31:10.898260 ignition[945]: GET https://metadata.packet.net/metadata: attempt #4 Mar 25 02:31:10.899658 ignition[945]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34828->[::1]:53: read: connection refused Mar 25 02:31:11.207061 systemd-networkd[932]: enp2s0f0np0: Gained IPv6LL Mar 25 02:31:12.423053 systemd-networkd[932]: enp2s0f1np1: Gained IPv6LL Mar 25 02:31:12.500677 ignition[945]: GET https://metadata.packet.net/metadata: attempt #5 Mar 25 02:31:12.502195 ignition[945]: GET error: Get 
"https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42510->[::1]:53: read: connection refused Mar 25 02:31:15.705280 ignition[945]: GET https://metadata.packet.net/metadata: attempt #6 Mar 25 02:31:16.453495 ignition[945]: GET result: OK Mar 25 02:31:16.807756 ignition[945]: Ignition finished successfully Mar 25 02:31:16.813038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 25 02:31:16.828959 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 25 02:31:16.869939 ignition[965]: Ignition 2.20.0 Mar 25 02:31:16.869946 ignition[965]: Stage: disks Mar 25 02:31:16.870089 ignition[965]: no configs at "/usr/lib/ignition/base.d" Mar 25 02:31:16.870099 ignition[965]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Mar 25 02:31:16.870885 ignition[965]: disks: disks passed Mar 25 02:31:16.870889 ignition[965]: POST message to Packet Timeline Mar 25 02:31:16.870904 ignition[965]: GET https://metadata.packet.net/metadata: attempt #1 Mar 25 02:31:17.692733 ignition[965]: GET result: OK Mar 25 02:31:18.113090 ignition[965]: Ignition finished successfully Mar 25 02:31:18.116516 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 25 02:31:18.131830 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 25 02:31:18.149855 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 25 02:31:18.170861 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 02:31:18.192857 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 02:31:18.213014 systemd[1]: Reached target basic.target - Basic System. Mar 25 02:31:18.234438 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 25 02:31:18.281067 systemd-fsck[982]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 25 02:31:18.293038 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 25 02:31:18.305485 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 25 02:31:18.410576 kernel: EXT4-fs (sdb9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none. Mar 25 02:31:18.411098 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 25 02:31:18.419985 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 25 02:31:18.427741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 02:31:18.446752 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 25 02:31:18.475129 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 25 02:31:18.539825 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by mount (991) Mar 25 02:31:18.539845 kernel: BTRFS info (device sdb6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:31:18.539854 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:31:18.539861 kernel: BTRFS info (device sdb6): using free space tree Mar 25 02:31:18.539868 kernel: BTRFS info (device sdb6): enabling ssd optimizations Mar 25 02:31:18.539876 kernel: BTRFS info (device sdb6): auto enabling async discard Mar 25 02:31:18.509199 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... 
Mar 25 02:31:18.539874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 25 02:31:18.602676 coreos-metadata[993]: Mar 25 02:31:18.600 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Mar 25 02:31:18.539902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 02:31:18.643719 coreos-metadata[994]: Mar 25 02:31:18.600 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Mar 25 02:31:18.564760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 02:31:18.584641 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 25 02:31:18.611610 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 25 02:31:18.684674 initrd-setup-root[1023]: cut: /sysroot/etc/passwd: No such file or directory Mar 25 02:31:18.694714 initrd-setup-root[1030]: cut: /sysroot/etc/group: No such file or directory Mar 25 02:31:18.705665 initrd-setup-root[1037]: cut: /sysroot/etc/shadow: No such file or directory Mar 25 02:31:18.715678 initrd-setup-root[1044]: cut: /sysroot/etc/gshadow: No such file or directory Mar 25 02:31:18.727389 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 25 02:31:18.728161 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 25 02:31:18.754506 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 25 02:31:18.784770 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 25 02:31:18.801693 kernel: BTRFS info (device sdb6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:31:18.818277 ignition[1111]: INFO : Ignition 2.20.0 Mar 25 02:31:18.818277 ignition[1111]: INFO : Stage: mount Mar 25 02:31:18.842779 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 02:31:18.842779 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Mar 25 02:31:18.842779 ignition[1111]: INFO : mount: mount passed Mar 25 02:31:18.842779 ignition[1111]: INFO : POST message to Packet Timeline Mar 25 02:31:18.842779 ignition[1111]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Mar 25 02:31:18.819988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 25 02:31:19.355222 coreos-metadata[994]: Mar 25 02:31:19.355 INFO Fetch successful Mar 25 02:31:19.431998 systemd[1]: flatcar-static-network.service: Deactivated successfully. Mar 25 02:31:19.432079 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Mar 25 02:31:19.522964 coreos-metadata[993]: Mar 25 02:31:19.522 INFO Fetch successful Mar 25 02:31:19.579877 coreos-metadata[993]: Mar 25 02:31:19.579 INFO wrote hostname ci-4284.0.0-a-336c6dbb20 to /sysroot/etc/hostname Mar 25 02:31:19.581483 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 25 02:31:19.719661 ignition[1111]: INFO : GET result: OK Mar 25 02:31:20.087848 ignition[1111]: INFO : Ignition finished successfully Mar 25 02:31:20.091434 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 25 02:31:20.110381 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 25 02:31:20.149667 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 25 02:31:20.191613 kernel: BTRFS: device label OEM devid 1 transid 20 /dev/sdb6 scanned by mount (1133) Mar 25 02:31:20.209127 kernel: BTRFS info (device sdb6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:31:20.209143 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:31:20.215016 kernel: BTRFS info (device sdb6): using free space tree Mar 25 02:31:20.229944 kernel: BTRFS info (device sdb6): enabling ssd optimizations Mar 25 02:31:20.229961 kernel: BTRFS info (device sdb6): auto enabling async discard Mar 25 02:31:20.231809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 02:31:20.264745 ignition[1150]: INFO : Ignition 2.20.0 Mar 25 02:31:20.264745 ignition[1150]: INFO : Stage: files Mar 25 02:31:20.278790 ignition[1150]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 02:31:20.278790 ignition[1150]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Mar 25 02:31:20.278790 ignition[1150]: DEBUG : files: compiled without relabeling support, skipping Mar 25 02:31:20.278790 ignition[1150]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 25 02:31:20.278790 ignition[1150]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 25 02:31:20.278790 ignition[1150]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 25 02:31:20.278790 ignition[1150]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 25 02:31:20.278790 ignition[1150]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 25 02:31:20.278790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 02:31:20.278790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 25 02:31:20.268530 unknown[1150]: wrote ssh authorized keys file for user: core Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:31:20.409790 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:31:20.659889 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 25 02:31:21.056411 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 25 02:31:21.899387 ignition[1150]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:31:21.899387 ignition[1150]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 25 02:31:21.928798 ignition[1150]: INFO : files: files passed Mar 25 02:31:21.928798 ignition[1150]: INFO : POST message to Packet Timeline Mar 25 02:31:21.928798 ignition[1150]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Mar 25 02:31:22.894850 ignition[1150]: INFO : GET result: OK Mar 25 02:31:23.273234 ignition[1150]: INFO : Ignition finished successfully Mar 25 02:31:23.276004 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 25 02:31:23.296965 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 25 02:31:23.312172 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 25 02:31:23.336156 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 25 02:31:23.336231 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 25 02:31:23.362102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 02:31:23.383904 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 25 02:31:23.406073 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 25 02:31:23.424847 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 02:31:23.424847 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 25 02:31:23.450789 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 02:31:23.511204 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 25 02:31:23.511279 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 25 02:31:23.530122 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 25 02:31:23.550885 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 25 02:31:23.571022 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 25 02:31:23.573515 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 25 02:31:23.657606 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 02:31:23.661713 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 25 02:31:23.724146 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 25 02:31:23.736187 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 02:31:23.757283 systemd[1]: Stopped target timers.target - Timer Units. Mar 25 02:31:23.776370 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 25 02:31:23.776837 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 02:31:23.805437 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 25 02:31:23.827188 systemd[1]: Stopped target basic.target - Basic System. Mar 25 02:31:23.845320 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 25 02:31:23.865161 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 02:31:23.886195 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 25 02:31:23.907195 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 25 02:31:23.927186 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 02:31:23.948222 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 25 02:31:23.970212 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 25 02:31:23.990186 systemd[1]: Stopped target swap.target - Swaps. Mar 25 02:31:24.009210 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 25 02:31:24.009662 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 25 02:31:24.036284 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 25 02:31:24.056202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 02:31:24.077186 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Mar 25 02:31:24.077522 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 02:31:24.100076 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 25 02:31:24.100489 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 25 02:31:24.132180 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 25 02:31:24.132697 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 02:31:24.152387 systemd[1]: Stopped target paths.target - Path Units. Mar 25 02:31:24.172040 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 25 02:31:24.177689 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 02:31:24.195174 systemd[1]: Stopped target slices.target - Slice Units. Mar 25 02:31:24.213173 systemd[1]: Stopped target sockets.target - Socket Units. Mar 25 02:31:24.231168 systemd[1]: iscsid.socket: Deactivated successfully. Mar 25 02:31:24.231469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 02:31:24.251215 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 25 02:31:24.251499 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 02:31:24.274438 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 25 02:31:24.274905 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 02:31:24.419836 ignition[1212]: INFO : Ignition 2.20.0 Mar 25 02:31:24.419836 ignition[1212]: INFO : Stage: umount Mar 25 02:31:24.419836 ignition[1212]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 02:31:24.419836 ignition[1212]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Mar 25 02:31:24.419836 ignition[1212]: INFO : umount: umount passed Mar 25 02:31:24.419836 ignition[1212]: INFO : POST message to Packet Timeline Mar 25 02:31:24.419836 ignition[1212]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Mar 25 02:31:24.294286 systemd[1]: ignition-files.service: Deactivated successfully. Mar 25 02:31:24.294715 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 25 02:31:24.313267 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 25 02:31:24.313704 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 25 02:31:24.335671 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 25 02:31:24.341727 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 25 02:31:24.341822 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 02:31:24.371475 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 25 02:31:24.380858 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 25 02:31:24.381005 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 02:31:24.408961 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 25 02:31:24.409170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 02:31:24.453142 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 25 02:31:24.455822 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 25 02:31:24.456073 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 25 02:31:24.484190 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 25 02:31:24.484435 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 25 02:31:25.796713 ignition[1212]: INFO : GET result: OK Mar 25 02:31:26.158418 ignition[1212]: INFO : Ignition finished successfully Mar 25 02:31:26.162165 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 25 02:31:26.162443 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 25 02:31:26.178402 systemd[1]: Stopped target network.target - Network. Mar 25 02:31:26.193811 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 25 02:31:26.194063 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 25 02:31:26.211977 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 25 02:31:26.212123 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 25 02:31:26.230069 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 25 02:31:26.230238 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 25 02:31:26.238229 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 25 02:31:26.238393 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 25 02:31:26.265049 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 25 02:31:26.265227 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 25 02:31:26.273480 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 25 02:31:26.301214 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 25 02:31:26.321617 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 25 02:31:26.321882 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 25 02:31:26.343258 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 25 02:31:26.343374 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 25 02:31:26.343428 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 25 02:31:26.359734 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 25 02:31:26.360395 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 25 02:31:26.360423 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 25 02:31:26.381519 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 25 02:31:26.400695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 25 02:31:26.400765 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 02:31:26.421084 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 02:31:26.421257 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 02:31:26.441237 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 25 02:31:26.441403 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 25 02:31:26.461033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 25 02:31:26.461204 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 02:31:26.481296 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 02:31:26.505514 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 02:31:26.505739 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Mar 25 02:31:26.506954 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 25 02:31:26.507308 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 02:31:26.527600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 25 02:31:26.527747 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 25 02:31:26.542806 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 25 02:31:26.542827 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 02:31:26.552991 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 25 02:31:26.553043 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 25 02:31:26.592088 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 25 02:31:26.592224 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 25 02:31:26.622123 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 02:31:26.622382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 02:31:26.656381 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 25 02:31:26.672766 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 25 02:31:26.672988 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 02:31:26.701111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 02:31:26.988797 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Mar 25 02:31:26.701259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 02:31:26.724205 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 25 02:31:26.724378 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 02:31:26.725462 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 25 02:31:26.725721 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 25 02:31:26.832885 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 25 02:31:26.833171 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 25 02:31:26.853053 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 25 02:31:26.874014 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 25 02:31:26.928943 systemd[1]: Switching root. Mar 25 02:31:27.086603 systemd-journald[268]: Journal stopped Mar 25 02:31:28.793520 kernel: SELinux: policy capability network_peer_controls=1 Mar 25 02:31:28.793535 kernel: SELinux: policy capability open_perms=1 Mar 25 02:31:28.793543 kernel: SELinux: policy capability extended_socket_class=1 Mar 25 02:31:28.793550 kernel: SELinux: policy capability always_check_network=0 Mar 25 02:31:28.793555 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 25 02:31:28.793564 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 25 02:31:28.793570 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 25 02:31:28.793576 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 25 02:31:28.793582 kernel: audit: type=1403 audit(1742869887.198:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 25 02:31:28.793589 systemd[1]: Successfully loaded SELinux policy in 72.714ms. Mar 25 02:31:28.793598 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.308ms. 
Mar 25 02:31:28.793605 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 02:31:28.793612 systemd[1]: Detected architecture x86-64. Mar 25 02:31:28.793618 systemd[1]: Detected first boot. Mar 25 02:31:28.793625 systemd[1]: Hostname set to . Mar 25 02:31:28.793633 systemd[1]: Initializing machine ID from random generator. Mar 25 02:31:28.793640 zram_generator::config[1268]: No configuration found. Mar 25 02:31:28.793647 systemd[1]: Populated /etc with preset unit settings. Mar 25 02:31:28.793654 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 25 02:31:28.793661 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 25 02:31:28.793667 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 25 02:31:28.793674 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 25 02:31:28.793682 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 25 02:31:28.793688 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 25 02:31:28.793695 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 25 02:31:28.793702 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 25 02:31:28.793709 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 25 02:31:28.793718 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 25 02:31:28.793725 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 25 02:31:28.793733 systemd[1]: Created slice user.slice - User and Session Slice. Mar 25 02:31:28.793739 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 02:31:28.793746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 02:31:28.793753 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 25 02:31:28.793760 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 25 02:31:28.793767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 25 02:31:28.793773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 02:31:28.793780 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Mar 25 02:31:28.793788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 02:31:28.793794 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 25 02:31:28.793801 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 25 02:31:28.793810 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 25 02:31:28.793817 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 25 02:31:28.793824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 02:31:28.793831 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Mar 25 02:31:28.793839 systemd[1]: Reached target slices.target - Slice Units. Mar 25 02:31:28.793846 systemd[1]: Reached target swap.target - Swaps. Mar 25 02:31:28.793853 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 25 02:31:28.793860 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 25 02:31:28.793866 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 25 02:31:28.793873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 02:31:28.793882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 02:31:28.793889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 02:31:28.793896 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 25 02:31:28.793903 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 25 02:31:28.793910 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 25 02:31:28.793917 systemd[1]: Mounting media.mount - External Media Directory... Mar 25 02:31:28.793924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:31:28.793932 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 25 02:31:28.793939 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 25 02:31:28.793946 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 25 02:31:28.793953 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 25 02:31:28.793961 systemd[1]: Reached target machines.target - Containers. Mar 25 02:31:28.793968 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 25 02:31:28.793975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 02:31:28.793982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 02:31:28.793989 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 25 02:31:28.793997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 02:31:28.794004 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 02:31:28.794011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 02:31:28.794019 kernel: ACPI: bus type drm_connector registered Mar 25 02:31:28.794026 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 25 02:31:28.794033 kernel: fuse: init (API version 7.39) Mar 25 02:31:28.794039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 02:31:28.794046 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 25 02:31:28.794055 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 25 02:31:28.794062 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 25 02:31:28.794069 kernel: loop: module loaded Mar 25 02:31:28.794075 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 25 02:31:28.794082 systemd[1]: Stopped systemd-fsck-usr.service. 
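The "Load Kernel Module configfs/dm_mod/drm/efi_pstore/fuse/loop" jobs being started here are instances of systemd's modprobe@.service template. A hedged sketch of that template (the modprobe path and exact options vary by distribution):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/usr/sbin/modprobe -abq %i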
Mar 25 02:31:28.794090 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 02:31:28.794097 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 02:31:28.794104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 02:31:28.794120 systemd-journald[1372]: Collecting audit messages is disabled. Mar 25 02:31:28.794136 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 25 02:31:28.794144 systemd-journald[1372]: Journal started Mar 25 02:31:28.794160 systemd-journald[1372]: Runtime Journal (/run/log/journal/84cc07d687e74c758f04d02160508073) is 8M, max 636.5M, 628.5M free. Mar 25 02:31:27.645061 systemd[1]: Queued start job for default target multi-user.target. Mar 25 02:31:27.655503 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Mar 25 02:31:27.655735 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 25 02:31:28.825661 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 25 02:31:28.847610 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 25 02:31:28.858629 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 02:31:28.889836 systemd[1]: verity-setup.service: Deactivated successfully. Mar 25 02:31:28.889888 systemd[1]: Stopped verity-setup.service. Mar 25 02:31:28.915612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:31:28.924650 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 02:31:28.934032 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 25 02:31:28.943740 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 25 02:31:28.953712 systemd[1]: Mounted media.mount - External Media Directory. Mar 25 02:31:28.964843 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 25 02:31:28.975841 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 25 02:31:28.985838 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 25 02:31:28.995932 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 25 02:31:29.007915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 02:31:29.019913 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 25 02:31:29.020003 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 25 02:31:29.031909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 02:31:29.031995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 02:31:29.043902 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 02:31:29.043991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 02:31:29.053900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 02:31:29.053985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 02:31:29.065898 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Mar 25 02:31:29.065981 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 25 02:31:29.075897 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 02:31:29.075980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 02:31:29.085909 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 02:31:29.095905 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 25 02:31:29.106932 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 25 02:31:29.117993 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 25 02:31:29.129222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 02:31:29.149756 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 25 02:31:29.161536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 02:31:29.185116 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 25 02:31:29.195749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 25 02:31:29.195786 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 02:31:29.208497 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 25 02:31:29.222492 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 25 02:31:29.243069 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 25 02:31:29.252811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 02:31:29.253947 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 25 02:31:29.264161 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 25 02:31:29.274675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 02:31:29.275344 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 25 02:31:29.284123 systemd-journald[1372]: Time spent on flushing to /var/log/journal/84cc07d687e74c758f04d02160508073 is 12.898ms for 1401 entries. Mar 25 02:31:29.284123 systemd-journald[1372]: System Journal (/var/log/journal/84cc07d687e74c758f04d02160508073) is 8M, max 195.6M, 187.6M free. Mar 25 02:31:29.308217 systemd-journald[1372]: Received client request to flush runtime journal. Mar 25 02:31:29.292069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 02:31:29.292778 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 02:31:29.304327 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 25 02:31:29.316459 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 25 02:31:29.328616 kernel: loop0: detected capacity change from 0 to 151640 Mar 25 02:31:29.333291 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 25 02:31:29.346052 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
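The "Received client request to flush runtime journal" entry above is systemd-journal-flush.service asking journald to move the volatile journal under /run into the persistent location under /var. A hedged manual equivalent, using the journal directories named in this log:

    journalctl --flush   # moves /run/log/journal/84cc07d687e74c758f04d02160508073 into /var/log/journal/84cc07d687e74c758f04d02160508073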
Mar 25 02:31:29.356565 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 02:31:29.362798 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 25 02:31:29.373808 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 25 02:31:29.384803 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 25 02:31:29.394566 kernel: loop1: detected capacity change from 0 to 205544 Mar 25 02:31:29.401791 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 25 02:31:29.412789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 02:31:29.422930 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 25 02:31:29.435773 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 25 02:31:29.450566 kernel: loop2: detected capacity change from 0 to 8 Mar 25 02:31:29.451046 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 25 02:31:29.470990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 02:31:29.481983 udevadm[1413]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 25 02:31:29.483601 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 25 02:31:29.487790 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 25 02:31:29.498567 kernel: loop3: detected capacity change from 0 to 109808 Mar 25 02:31:29.501040 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Mar 25 02:31:29.501050 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Mar 25 02:31:29.505362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 02:31:29.566568 kernel: loop4: detected capacity change from 0 to 151640 Mar 25 02:31:29.589565 kernel: loop5: detected capacity change from 0 to 205544 Mar 25 02:31:29.612567 kernel: loop6: detected capacity change from 0 to 8 Mar 25 02:31:29.619582 kernel: loop7: detected capacity change from 0 to 109808 Mar 25 02:31:29.630073 (sd-merge)[1432]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Mar 25 02:31:29.630319 (sd-merge)[1432]: Merged extensions into '/usr'. Mar 25 02:31:29.631726 ldconfig[1403]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 25 02:31:29.632813 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 25 02:31:29.643045 systemd[1]: Reload requested from client PID 1408 ('systemd-sysext') (unit systemd-sysext.service)... Mar 25 02:31:29.643054 systemd[1]: Reloading... Mar 25 02:31:29.675689 zram_generator::config[1460]: No configuration found. Mar 25 02:31:29.749503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 02:31:29.801884 systemd[1]: Reloading finished in 158 ms. Mar 25 02:31:29.818377 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 02:31:29.829946 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 25 02:31:29.852382 systemd[1]: Starting ensure-sysext.service... 
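The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-packet' extension images onto /usr. As a hedged sketch of the mechanism (the search path and release-file fields below are assumptions, not read from this host), each extension is a raw image or directory carrying an extension-release file, for example:

    /var/lib/extensions/kubernetes/usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar            (or ID=_any)
        SYSEXT_LEVEL=1.0

and the resulting merge can be listed after boot with:

    systemd-sysext status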
Mar 25 02:31:29.860410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 02:31:29.872452 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 02:31:29.879935 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 25 02:31:29.880088 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 25 02:31:29.880550 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 25 02:31:29.880755 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Mar 25 02:31:29.880791 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Mar 25 02:31:29.882850 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 02:31:29.882854 systemd-tmpfiles[1517]: Skipping /boot Mar 25 02:31:29.887606 systemd[1]: Reload requested from client PID 1516 ('systemctl') (unit ensure-sysext.service)... Mar 25 02:31:29.887613 systemd[1]: Reloading... Mar 25 02:31:29.888155 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 02:31:29.888160 systemd-tmpfiles[1517]: Skipping /boot Mar 25 02:31:29.899411 systemd-udevd[1518]: Using default interface naming scheme 'v255'. Mar 25 02:31:29.919570 zram_generator::config[1547]: No configuration found. Mar 25 02:31:29.954572 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Mar 25 02:31:29.954634 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (1600) Mar 25 02:31:29.954647 kernel: ACPI: button: Sleep Button [SLPB] Mar 25 02:31:29.977582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 25 02:31:29.984586 kernel: IPMI message handler: version 39.2 Mar 25 02:31:29.984649 kernel: mousedev: PS/2 mouse device common for all mice Mar 25 02:31:29.990567 kernel: ACPI: button: Power Button [PWRF] Mar 25 02:31:30.015420 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Mar 25 02:31:30.021013 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Mar 25 02:31:30.021158 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Mar 25 02:31:30.021285 kernel: ipmi device interface Mar 25 02:31:30.038571 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Mar 25 02:31:30.039078 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Mar 25 02:31:30.041803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
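The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings mean two tmpfiles.d fragments declare the same path, and the later-parsed line is dropped. An illustrative pair with hypothetical fragment names and modes (only the mechanism is taken from the log):

    # /usr/lib/tmpfiles.d/00-first.conf
    d /var/log/journal 2755 root systemd-journal - -
    # /usr/lib/tmpfiles.d/10-second.conf   <- this line would be reported as a duplicate and ignored
    d /var/log/journal 0755 root root - -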
Mar 25 02:31:30.087642 kernel: iTCO_vendor_support: vendor-support=0 Mar 25 02:31:30.087705 kernel: ACPI: video: Video Device [GFX0] (multi-head: yes rom: no post: no) Mar 25 02:31:30.104657 kernel: input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input4 Mar 25 02:31:30.104725 kernel: ipmi_si: IPMI System Interface driver Mar 25 02:31:30.115403 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Mar 25 02:31:30.131280 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Mar 25 02:31:30.131293 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Mar 25 02:31:30.131303 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Mar 25 02:31:30.162814 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Mar 25 02:31:30.162910 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Mar 25 02:31:30.162983 kernel: ipmi_si: Adding ACPI-specified kcs state machine Mar 25 02:31:30.162993 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Mar 25 02:31:30.130414 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Mar 25 02:31:30.130604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Mar 25 02:31:30.187766 systemd[1]: Reloading finished in 299 ms. Mar 25 02:31:30.188566 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Mar 25 02:31:30.208715 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 02:31:30.210569 kernel: intel_rapl_common: Found RAPL domain package Mar 25 02:31:30.210604 kernel: intel_rapl_common: Found RAPL domain core Mar 25 02:31:30.210617 kernel: intel_rapl_common: Found RAPL domain uncore Mar 25 02:31:30.210628 kernel: intel_rapl_common: Found RAPL domain dram Mar 25 02:31:30.270731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 02:31:30.292566 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Mar 25 02:31:30.303057 systemd[1]: Finished ensure-sysext.service. Mar 25 02:31:30.329801 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Mar 25 02:31:30.333565 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Mar 25 02:31:30.343620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:31:30.344299 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 02:31:30.406599 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Mar 25 02:31:30.529566 kernel: ipmi_ssif: IPMI SSIF Interface driver Mar 25 02:31:30.529597 kernel: i915 0000:00:02.0: can't derive routing for PCI INT A Mar 25 02:31:31.453308 kernel: i915 0000:00:02.0: PCI INT A: not connected Mar 25 02:31:31.453416 kernel: i915 0000:00:02.0: [drm] VT-d active for gfx access Mar 25 02:31:31.453504 kernel: i915 0000:00:02.0: [drm] Using Transparent Hugepages Mar 25 02:31:31.453589 kernel: i915 0000:00:02.0: BAR 6: can't assign [??? 
0x00000000 flags 0x20000000] (bogus alignment) Mar 25 02:31:31.453672 kernel: i915 0000:00:02.0: [drm] Failed to find VBIOS tables (VBT) Mar 25 02:31:31.453753 kernel: i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4) Mar 25 02:31:31.453829 kernel: i915 0000:00:02.0: [drm] [ENCODER:94:DDI A/PHY A] failed to retrieve link info, disabling eDP Mar 25 02:31:31.453907 kernel: ------------[ cut here ]------------ Mar 25 02:31:31.453918 kernel: i915 0000:00:02.0: Platform does not support port F Mar 25 02:31:31.453927 kernel: WARNING: CPU: 9 PID: 1675 at drivers/gpu/drm/i915/display/intel_display.c:7444 assert_port_valid+0x58/0x70 [i915] Mar 25 02:31:31.453938 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Mar 25 02:31:31.454023 kernel: Modules linked in: bonding ipmi_ssif i915(+) intel_rapl_msr coretemp intel_rapl_common x86_pkg_temp_thermal drm_buddy mlx5_ib ttm intel_gtt iTCO_wdt watchdog acpi_ipmi ib_uverbs kvm_intel drm_display_helper ipmi_si iTCO_vendor_support wmi_bmof video kvm mei_me ipmi_devintf ib_core irqbypass i2c_i801 drm_kms_helper i2c_smbus mei evdev mousedev fan ipmi_msghandler wmi button squashfs sch_fq_codel loop fuse drm backlight configfs nfnetlink dmi_sysfs nls_ascii nls_cp437 vfat fat ext4 crc16 mbcache jbd2 dm_verity dm_bufio hid_generic usbhid hid sd_mod t10_pi crc64_rocksoft_generic crc64_rocksoft crc_t10dif crct10dif_generic crc64 crct10dif_common sha256_ssse3 mlx5_core sha1_ssse3 igb xhci_pci xhci_hcd ahci aesni_intel i2c_algo_bit libahci hwmon libaes crypto_simd ptp usbcore libata cryptd i2c_core pps_core pci_hyperv_intf usb_common btrfs blake2b_generic xor lzo_compress zstd_compress raid6_pq libcrc32c crc32c_generic crc32c_intel dm_mirror dm_region_hash dm_log dm_mod qla4xxx iscsi_boot_sysfs iscsi_tcp Mar 25 02:31:31.454065 kernel: libiscsi_tcp Mar 25 02:31:31.454077 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Mar 25 02:31:31.454086 kernel: libiscsi scsi_transport_iscsi scsi_mod scsi_common br_netfilter bridge stp llc overlay Mar 25 02:31:31.454095 kernel: CPU: 9 PID: 1675 Comm: (udev-worker) Not tainted 6.6.83-flatcar #1 Mar 25 02:31:31.454103 kernel: Hardware name: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Mar 25 02:31:31.454111 kernel: RIP: 0010:assert_port_valid+0x58/0x70 [i915] Mar 25 02:31:31.454122 kernel: Code: 7f 08 8d 6e 41 4c 8b 67 50 4d 85 e4 75 03 4c 8b 27 e8 dc 1e 12 e0 89 e9 4c 89 e2 48 c7 c7 70 a7 80 c1 48 89 c6 e8 b8 f0 a8 df <0f> 0b 89 d8 5b 5d 83 e0 01 41 5c c3 cc cc cc cc 0f 1f 84 00 00 00 Mar 25 02:31:31.454134 kernel: RSP: 0018:ffffa85fc1813ac0 EFLAGS: 00010282 Mar 25 02:31:31.454142 kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 Mar 25 02:31:31.454152 kernel: RDX: ffff8fc8a3669b00 RSI: ffff8fc8a365d6c0 RDI: ffff8fc8a365d6c0 Mar 25 02:31:31.454160 kernel: RBP: 0000000000000046 R08: 0000000000000000 R09: ffffa85fc1813948 Mar 25 02:31:31.454170 kernel: R10: ffffffffa3918ba8 R11: 0000000000000003 R12: ffff8fc14143f4f0 Mar 25 02:31:31.454179 kernel: R13: ffff8fc15a421aa8 R14: ffff8fc1413dc0c0 R15: ffff8fc1413dc000 Mar 25 02:31:31.454188 kernel: FS: 00007fd30a0ce480(0000) GS:ffff8fc8a3640000(0000) knlGS:0000000000000000 Mar 25 02:31:31.454198 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Mar 25 02:31:31.454206 kernel: CR2: 00007fd30aa431c0 CR3: 000000010d7a4005 CR4: 00000000003706e0 Mar 25 02:31:31.454215 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 25 02:31:31.454223 kernel: DR3: 
0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Mar 25 02:31:31.454231 kernel: Call Trace: Mar 25 02:31:31.454239 kernel: Mar 25 02:31:31.454247 kernel: ? assert_port_valid+0x58/0x70 [i915] Mar 25 02:31:31.454256 kernel: ? __warn+0x81/0x140 Mar 25 02:31:31.454267 kernel: ? assert_port_valid+0x58/0x70 [i915] Mar 25 02:31:31.454278 kernel: ? report_bug+0x16f/0x1a0 Mar 25 02:31:31.454287 kernel: ? console_unlock+0x76/0x120 Mar 25 02:31:31.454296 kernel: ? handle_bug+0x58/0x90 Mar 25 02:31:31.454304 kernel: ? exc_invalid_op+0x17/0x70 Mar 25 02:31:31.454313 kernel: ? asm_exc_invalid_op+0x1a/0x20 Mar 25 02:31:31.454321 kernel: ? assert_port_valid+0x58/0x70 [i915] Mar 25 02:31:31.454329 kernel: intel_ddi_init+0x65/0xfe0 [i915] Mar 25 02:31:31.454337 kernel: ? __pfx_intel_ddi_init+0x10/0x10 [i915] Mar 25 02:31:31.454347 kernel: intel_bios_for_each_encoder+0x35/0x60 [i915] Mar 25 02:31:31.454355 kernel: intel_setup_outputs+0x36b/0x8b0 [i915] Mar 25 02:31:31.454363 kernel: intel_display_driver_probe_nogem+0x14c/0x210 [i915] Mar 25 02:31:31.454372 kernel: i915_driver_probe+0x6a2/0xb50 [i915] Mar 25 02:31:31.454380 kernel: local_pci_probe+0x42/0xa0 Mar 25 02:31:31.454388 kernel: pci_device_probe+0xbd/0x270 Mar 25 02:31:31.454396 kernel: ? sysfs_do_create_link_sd+0x6e/0xe0 Mar 25 02:31:31.454407 kernel: really_probe+0x19b/0x3e0 Mar 25 02:31:31.454415 kernel: ? __pfx___driver_attach+0x10/0x10 Mar 25 02:31:31.454426 kernel: __driver_probe_device+0x78/0x160 Mar 25 02:31:31.454436 kernel: driver_probe_device+0x1f/0xa0 Mar 25 02:31:31.454445 kernel: __driver_attach+0xba/0x1c0 Mar 25 02:31:31.454453 kernel: bus_for_each_dev+0x7b/0xd0 Mar 25 02:31:31.454462 kernel: bus_add_driver+0x112/0x240 Mar 25 02:31:31.454471 kernel: driver_register+0x5c/0x100 Mar 25 02:31:31.454480 kernel: i915_init+0x22/0xc0 [i915] Mar 25 02:31:31.454488 kernel: ? __pfx_i915_init+0x10/0x10 [i915] Mar 25 02:31:31.454497 kernel: do_one_initcall+0x45/0x310 Mar 25 02:31:31.454508 kernel: ? kmalloc_trace+0x2a/0xa0 Mar 25 02:31:31.454516 kernel: do_init_module+0x60/0x240 Mar 25 02:31:31.454524 kernel: __do_sys_init_module+0x17a/0x1b0 Mar 25 02:31:31.454535 kernel: do_syscall_64+0x39/0x90 Mar 25 02:31:31.454543 kernel: entry_SYSCALL_64_after_hwframe+0x78/0xe2 Mar 25 02:31:31.454551 kernel: RIP: 0033:0x7fd30aa9ac9e Mar 25 02:31:31.454567 kernel: Code: 48 8b 0d 75 61 0d 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 42 61 0d 00 f7 d8 64 89 01 48 Mar 25 02:31:31.454577 kernel: RSP: 002b:00007ffd05f9b148 EFLAGS: 00000246 ORIG_RAX: 00000000000000af Mar 25 02:31:31.454586 kernel: RAX: ffffffffffffffda RBX: 00005585126c87d0 RCX: 00007fd30aa9ac9e Mar 25 02:31:31.454597 kernel: RDX: 00007fd309dee31d RSI: 000000000082ddb2 RDI: 00007fd308025010 Mar 25 02:31:31.454607 kernel: RBP: 00007fd308025010 R08: 0000000000000007 R09: 0000000000000006 Mar 25 02:31:31.454615 kernel: R10: 0000000000000071 R11: 0000000000000246 R12: 00007fd309dee31d Mar 25 02:31:31.454623 kernel: R13: 0000000000020000 R14: 0000558512700ce0 R15: 0000000000000000 Mar 25 02:31:31.454632 kernel: Mar 25 02:31:31.454640 kernel: ---[ end trace 0000000000000000 ]--- Mar 25 02:31:31.454648 kernel: [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.0 on minor 0 Mar 25 02:31:30.541069 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Mar 25 02:31:30.587856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 02:31:30.596698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 02:31:31.454909 augenrules[1727]: No rules Mar 25 02:31:30.613832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 02:31:30.624145 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 02:31:30.635122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 02:31:30.644680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 02:31:30.645189 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 02:31:30.655594 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 02:31:30.656181 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 02:31:30.667540 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 02:31:30.668643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 02:31:30.678626 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 25 02:31:30.697167 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 25 02:31:30.715808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 02:31:30.725598 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:31:30.726105 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 02:31:30.726221 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 02:31:30.736867 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 25 02:31:30.737110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 02:31:30.737206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 02:31:30.737363 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 02:31:30.737456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 02:31:30.737616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 02:31:30.737707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 02:31:30.737865 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 02:31:30.737956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 02:31:30.738121 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 02:31:30.738359 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 25 02:31:30.740329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 02:31:30.740395 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
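The augenrules "No rules" message comes from audit-rules.service: augenrules concatenates /etc/audit/rules.d/*.rules into the kernel audit ruleset, and on this image that directory contributes nothing. A hypothetical fragment, purely to illustrate the format:

    # /etc/audit/rules.d/10-example.rules   (hypothetical file, not present on this host)
    -D                                             # flush existing rules
    -b 8192                                        # raise the kernel audit backlog
    -w /etc/ssh/sshd_config -p wa -k sshd_config   # log writes/attribute changes to sshd_config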
Mar 25 02:31:30.741130 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 25 02:31:30.742062 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 25 02:31:30.742084 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 25 02:31:30.742288 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 25 02:31:30.759305 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 25 02:31:30.778081 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 25 02:31:30.817099 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 25 02:31:30.820646 systemd-resolved[1743]: Positive Trust Anchors: Mar 25 02:31:30.820656 systemd-resolved[1743]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 02:31:30.820688 systemd-resolved[1743]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 02:31:30.823513 systemd-resolved[1743]: Using system hostname 'ci-4284.0.0-a-336c6dbb20'. Mar 25 02:31:30.827094 systemd-networkd[1742]: lo: Link UP Mar 25 02:31:30.827096 systemd-networkd[1742]: lo: Gained carrier Mar 25 02:31:30.830174 systemd-networkd[1742]: bond0: netdev ready Mar 25 02:31:30.831312 systemd-networkd[1742]: Enumeration completed Mar 25 02:31:30.832278 systemd-networkd[1742]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:99:04.network. Mar 25 02:31:30.853042 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 02:31:30.862629 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 02:31:30.872789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 02:31:30.885342 systemd[1]: Reached target network.target - Network. Mar 25 02:31:30.894596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 02:31:30.906597 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 02:31:30.917330 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 25 02:31:30.930183 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 02:31:31.066295 systemd-networkd[1742]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:99:05.network. Mar 25 02:31:31.486607 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Mar 25 02:31:31.498861 systemd-networkd[1742]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Mar 25 02:31:31.499603 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Mar 25 02:31:31.500897 systemd-networkd[1742]: enp2s0f0np0: Link UP Mar 25 02:31:31.501065 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Mar 25 02:31:31.501067 systemd-networkd[1742]: enp2s0f0np0: Gained carrier Mar 25 02:31:31.510614 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Mar 25 02:31:31.525744 systemd-networkd[1742]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:99:04.network. Mar 25 02:31:31.525870 systemd-networkd[1742]: enp2s0f1np1: Link UP Mar 25 02:31:31.525986 systemd-networkd[1742]: enp2s0f1np1: Gained carrier Mar 25 02:31:31.536674 systemd-networkd[1742]: bond0: Link UP Mar 25 02:31:31.536822 systemd-networkd[1742]: bond0: Gained carrier Mar 25 02:31:31.536954 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:31.537244 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:31.537396 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:31.537448 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:31.619901 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Mar 25 02:31:31.619925 kernel: bond0: active interface up! Mar 25 02:31:31.642612 kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes Mar 25 02:31:31.649749 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 02:31:31.661570 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 02:31:31.688726 lvm[1783]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 02:31:31.723038 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 02:31:31.739566 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Mar 25 02:31:31.743964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 02:31:31.753707 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 02:31:31.763704 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 02:31:31.774658 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 02:31:31.785728 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 02:31:31.795706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 25 02:31:31.807654 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 02:31:31.818629 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 02:31:31.818649 systemd[1]: Reached target paths.target - Path Units. Mar 25 02:31:31.831563 kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes Mar 25 02:31:31.833637 systemd[1]: Reached target timers.target - Timer Units. Mar 25 02:31:31.843115 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 02:31:31.854319 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 02:31:31.864638 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 02:31:31.885897 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 25 02:31:31.896010 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
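The bond0/enp2s0f0np0/enp2s0f1np1 sequence above is systemd-networkd assembling an LACP bond from the two Mellanox ports (the kernel's "No 802.3ad response from the link partner" warning implies Mode=802.3ad). A hedged sketch of what the referenced units plausibly contain; on Packet/Equinix Metal the real files are written from instance metadata, so the contents and the assumed 05-bond0.netdev name may differ:

    # /etc/systemd/network/05-bond0.netdev   (assumed counterpart to 05-bond0.network)
    [NetDev]
    Name=bond0
    Kind=bond
    [Bond]
    Mode=802.3ad

    # /etc/systemd/network/10-04:3f:72:d9:99:04.network
    [Match]
    MACAddress=04:3f:72:d9:99:04
    [Network]
    Bond=bond0

    # /etc/systemd/network/05-bond0.network
    [Match]
    Name=bond0
    # [Network] addressing omitted here; it comes from the metadata service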
Mar 25 02:31:31.907279 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 02:31:31.918944 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 02:31:31.928757 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 02:31:31.929287 lvm[1787]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 02:31:31.938661 systemd[1]: Reached target basic.target - Basic System. Mar 25 02:31:31.946687 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 02:31:31.946704 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 02:31:31.947271 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 02:31:31.969109 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 25 02:31:31.987953 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 25 02:31:32.000800 coreos-metadata[1789]: Mar 25 02:31:32.000 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Mar 25 02:31:32.003939 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 02:31:32.014353 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 02:31:32.016023 jq[1794]: false Mar 25 02:31:32.016083 dbus-daemon[1790]: [system] SELinux support is enabled Mar 25 02:31:32.020629 kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes Mar 25 02:31:32.029683 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 02:31:32.030299 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 02:31:32.037821 extend-filesystems[1795]: Found loop4 Mar 25 02:31:32.037821 extend-filesystems[1795]: Found loop5 Mar 25 02:31:32.072785 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Mar 25 02:31:32.072802 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (1595) Mar 25 02:31:32.072812 extend-filesystems[1795]: Found loop6 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found loop7 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sda Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb1 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb2 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb3 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found usr Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb4 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb6 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb7 Mar 25 02:31:32.072812 extend-filesystems[1795]: Found sdb9 Mar 25 02:31:32.072812 extend-filesystems[1795]: Checking size of /dev/sdb9 Mar 25 02:31:32.072812 extend-filesystems[1795]: Resized partition /dev/sdb9 Mar 25 02:31:32.040249 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 02:31:32.222729 extend-filesystems[1805]: resize2fs 1.47.2 (1-Jan-2025) Mar 25 02:31:32.063451 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 02:31:32.080068 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 25 02:31:32.090235 systemd[1]: Starting systemd-logind.service - User Login Management... 
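The extend-filesystems entries and the kernel's "EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks" line show the root filesystem being grown online to fill its partition on first boot. A hedged manual equivalent, using the device name from the log:

    resize2fs /dev/sdb9   # online-grow the mounted ext4 filesystem to the full partition size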
Mar 25 02:31:32.110745 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Mar 25 02:31:32.139734 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 25 02:31:32.246988 sshd_keygen[1819]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 02:31:32.140083 systemd[1]: Starting update-engine.service - Update Engine... Mar 25 02:31:32.247108 update_engine[1820]: I20250325 02:31:32.188683 1820 main.cc:92] Flatcar Update Engine starting Mar 25 02:31:32.247108 update_engine[1820]: I20250325 02:31:32.189455 1820 update_check_scheduler.cc:74] Next update check in 4m57s Mar 25 02:31:32.165302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 02:31:32.247273 jq[1821]: true Mar 25 02:31:32.166314 systemd-logind[1815]: New seat seat0. Mar 25 02:31:32.167446 systemd-logind[1815]: Watching system buttons on /dev/input/event3 (Power Button) Mar 25 02:31:32.167987 systemd-logind[1815]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 25 02:31:32.168005 systemd-logind[1815]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Mar 25 02:31:32.187030 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 02:31:32.216120 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 02:31:32.238812 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 02:31:32.258256 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 02:31:32.258369 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 02:31:32.258518 systemd[1]: motdgen.service: Deactivated successfully. Mar 25 02:31:32.258626 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 25 02:31:32.276698 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 25 02:31:32.276803 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 02:31:32.287884 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 02:31:32.313876 (ntainerd)[1835]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 02:31:32.315554 jq[1834]: true Mar 25 02:31:32.318732 dbus-daemon[1790]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 25 02:31:32.321294 tar[1832]: linux-amd64/helm Mar 25 02:31:32.323546 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Mar 25 02:31:32.323663 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Mar 25 02:31:32.330745 systemd[1]: Started update-engine.service - Update Engine. Mar 25 02:31:32.346130 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 02:31:32.363074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 25 02:31:32.363193 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 02:31:32.374676 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
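update_engine schedules its first check here ("Next update check in 4m57s") and locksmithd, started just below, applies the configured reboot strategy. Hedged commands for inspecting both; the client tool names and flags are assumed to be the stock Flatcar ones and should be verified on the host:

    update_engine_client -status
    locksmithctl status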
Mar 25 02:31:32.374756 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 25 02:31:32.383482 bash[1864]: Updated "/home/core/.ssh/authorized_keys" Mar 25 02:31:32.386375 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 25 02:31:32.413907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 02:31:32.425000 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 02:31:32.425144 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 02:31:32.436379 locksmithd[1871]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 02:31:32.436638 systemd[1]: Starting sshkeys.service... Mar 25 02:31:32.451925 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 02:31:32.465032 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 25 02:31:32.476439 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 25 02:31:32.488010 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 02:31:32.494862 containerd[1835]: time="2025-03-25T02:31:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 02:31:32.497670 containerd[1835]: time="2025-03-25T02:31:32.495487471Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 02:31:32.500649 containerd[1835]: time="2025-03-25T02:31:32.500605141Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="4.054µs" Mar 25 02:31:32.500649 containerd[1835]: time="2025-03-25T02:31:32.500621076Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 02:31:32.500649 containerd[1835]: time="2025-03-25T02:31:32.500634328Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 02:31:32.500721 containerd[1835]: time="2025-03-25T02:31:32.500712451Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 02:31:32.500747 containerd[1835]: time="2025-03-25T02:31:32.500723328Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 02:31:32.500747 containerd[1835]: time="2025-03-25T02:31:32.500740750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 02:31:32.500783 containerd[1835]: time="2025-03-25T02:31:32.500774706Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 02:31:32.500804 containerd[1835]: time="2025-03-25T02:31:32.500782789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 02:31:32.500914 containerd[1835]: time="2025-03-25T02:31:32.500902354Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 02:31:32.500936 containerd[1835]: 
time="2025-03-25T02:31:32.500913894Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 02:31:32.500936 containerd[1835]: time="2025-03-25T02:31:32.500920459Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 02:31:32.500936 containerd[1835]: time="2025-03-25T02:31:32.500925352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 02:31:32.500990 containerd[1835]: time="2025-03-25T02:31:32.500967058Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 02:31:32.501304 containerd[1835]: time="2025-03-25T02:31:32.501286900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 02:31:32.501344 containerd[1835]: time="2025-03-25T02:31:32.501314548Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 02:31:32.501344 containerd[1835]: time="2025-03-25T02:31:32.501326687Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 02:31:32.501380 containerd[1835]: time="2025-03-25T02:31:32.501347915Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 02:31:32.501591 containerd[1835]: time="2025-03-25T02:31:32.501581440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 02:31:32.501666 containerd[1835]: time="2025-03-25T02:31:32.501618159Z" level=info msg="metadata content store policy set" policy=shared Mar 25 02:31:32.503447 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 25 02:31:32.508659 coreos-metadata[1885]: Mar 25 02:31:32.508 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Mar 25 02:31:32.511728 containerd[1835]: time="2025-03-25T02:31:32.511710072Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 02:31:32.511776 containerd[1835]: time="2025-03-25T02:31:32.511739608Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 02:31:32.511776 containerd[1835]: time="2025-03-25T02:31:32.511749656Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 02:31:32.511776 containerd[1835]: time="2025-03-25T02:31:32.511761783Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 02:31:32.511776 containerd[1835]: time="2025-03-25T02:31:32.511770003Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 02:31:32.511871 containerd[1835]: time="2025-03-25T02:31:32.511777350Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 02:31:32.511871 containerd[1835]: time="2025-03-25T02:31:32.511788156Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 02:31:32.511871 containerd[1835]: time="2025-03-25T02:31:32.511805174Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 02:31:32.511871 containerd[1835]: time="2025-03-25T02:31:32.511816236Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 02:31:32.511871 containerd[1835]: time="2025-03-25T02:31:32.511823001Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 02:31:32.511871 containerd[1835]: time="2025-03-25T02:31:32.511828953Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 02:31:32.511871 containerd[1835]: time="2025-03-25T02:31:32.511841236Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511910419Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511924091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511931829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511939149Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511945182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511951488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511958118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: 
time="2025-03-25T02:31:32.511964298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511973045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511979972Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.511986350Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.512022940Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 02:31:32.512034 containerd[1835]: time="2025-03-25T02:31:32.512031053Z" level=info msg="Start snapshots syncer" Mar 25 02:31:32.512221 containerd[1835]: time="2025-03-25T02:31:32.512042054Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 02:31:32.512221 containerd[1835]: time="2025-03-25T02:31:32.512189245Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 02:31:32.512296 containerd[1835]: time="2025-03-25T02:31:32.512217823Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 02:31:32.512296 containerd[1835]: time="2025-03-25T02:31:32.512256969Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 02:31:32.512325 containerd[1835]: time="2025-03-25T02:31:32.512309459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers 
type=io.containerd.grpc.v1 Mar 25 02:31:32.512362 containerd[1835]: time="2025-03-25T02:31:32.512323073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 02:31:32.512362 containerd[1835]: time="2025-03-25T02:31:32.512330344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 02:31:32.512362 containerd[1835]: time="2025-03-25T02:31:32.512343180Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 02:31:32.512362 containerd[1835]: time="2025-03-25T02:31:32.512351619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 02:31:32.512362 containerd[1835]: time="2025-03-25T02:31:32.512357685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 02:31:32.512427 containerd[1835]: time="2025-03-25T02:31:32.512368821Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 02:31:32.512427 containerd[1835]: time="2025-03-25T02:31:32.512383578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 02:31:32.512427 containerd[1835]: time="2025-03-25T02:31:32.512391534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 02:31:32.512427 containerd[1835]: time="2025-03-25T02:31:32.512397029Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 02:31:32.512427 containerd[1835]: time="2025-03-25T02:31:32.512413129Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 02:31:32.512427 containerd[1835]: time="2025-03-25T02:31:32.512422195Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512427795Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512433246Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512437981Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512446589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512452775Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512462022Z" level=info msg="runtime interface created" Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512465298Z" level=info msg="created NRI interface" Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512473064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 02:31:32.512507 containerd[1835]: time="2025-03-25T02:31:32.512488398Z" level=info msg="Connect containerd service" Mar 25 02:31:32.512507 
containerd[1835]: time="2025-03-25T02:31:32.512505655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 02:31:32.512549 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Mar 25 02:31:32.513245 containerd[1835]: time="2025-03-25T02:31:32.513206335Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 02:31:32.521816 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 02:31:32.590370 tar[1832]: linux-amd64/LICENSE Mar 25 02:31:32.590370 tar[1832]: linux-amd64/README.md Mar 25 02:31:32.599863 containerd[1835]: time="2025-03-25T02:31:32.599840925Z" level=info msg="Start subscribing containerd event" Mar 25 02:31:32.599926 containerd[1835]: time="2025-03-25T02:31:32.599872494Z" level=info msg="Start recovering state" Mar 25 02:31:32.599926 containerd[1835]: time="2025-03-25T02:31:32.599901643Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 02:31:32.599994 containerd[1835]: time="2025-03-25T02:31:32.599932512Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 02:31:32.599994 containerd[1835]: time="2025-03-25T02:31:32.599951389Z" level=info msg="Start event monitor" Mar 25 02:31:32.599994 containerd[1835]: time="2025-03-25T02:31:32.599964089Z" level=info msg="Start cni network conf syncer for default" Mar 25 02:31:32.599994 containerd[1835]: time="2025-03-25T02:31:32.599971865Z" level=info msg="Start streaming server" Mar 25 02:31:32.599994 containerd[1835]: time="2025-03-25T02:31:32.599980678Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 02:31:32.599994 containerd[1835]: time="2025-03-25T02:31:32.599987425Z" level=info msg="runtime interface starting up..." Mar 25 02:31:32.599994 containerd[1835]: time="2025-03-25T02:31:32.599992475Z" level=info msg="starting plugins..." Mar 25 02:31:32.600137 containerd[1835]: time="2025-03-25T02:31:32.600004435Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 02:31:32.600137 containerd[1835]: time="2025-03-25T02:31:32.600088671Z" level=info msg="containerd successfully booted in 0.105443s" Mar 25 02:31:32.606716 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 02:31:32.618180 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 02:31:32.625563 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Mar 25 02:31:32.646702 systemd-networkd[1742]: bond0: Gained IPv6LL Mar 25 02:31:32.646950 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:32.649875 extend-filesystems[1805]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Mar 25 02:31:32.649875 extend-filesystems[1805]: old_desc_blocks = 1, new_desc_blocks = 56 Mar 25 02:31:32.649875 extend-filesystems[1805]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Mar 25 02:31:32.690656 extend-filesystems[1795]: Resized filesystem in /dev/sdb9 Mar 25 02:31:32.650500 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 02:31:32.650611 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 25 02:31:33.542953 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. 
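The "failed to load cni during init" error above is expected at this stage: nothing has populated /etc/cni/net.d yet, and containerd's CRI plugin simply retries once a network config appears (that is what the "cni network conf syncer" it starts further down is for). Purely as an illustrative sketch, not something taken from this host (the file name, bridge name, and subnet below are assumptions), a minimal bridge network that would satisfy that check could look like:

    # Hypothetical example only: drop a minimal CNI conflist for containerd to pick up.
    cat <<'EOF' > /etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF

On a kubeadm-style node like this one, a pod network add-on would normally install its own config here rather than a hand-written file.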
Mar 25 02:31:33.543033 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:33.543998 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 02:31:33.555335 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 02:31:33.565808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:31:33.585034 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 02:31:33.612689 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 02:31:34.232371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:31:34.244027 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 02:31:34.315825 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Mar 25 02:31:34.315939 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Mar 25 02:31:34.671589 kubelet[1938]: E0325 02:31:34.671536 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 02:31:34.672987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 02:31:34.673073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 02:31:34.673246 systemd[1]: kubelet.service: Consumed 557ms CPU time, 244.6M memory peak. Mar 25 02:31:35.754771 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 02:31:35.769496 systemd[1]: Started sshd@0-86.109.11.215:22-139.178.68.195:33500.service - OpenSSH per-connection server daemon (139.178.68.195:33500). Mar 25 02:31:35.836882 sshd[1960]: Accepted publickey for core from 139.178.68.195 port 33500 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:35.838039 sshd-session[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:35.844717 coreos-metadata[1789]: Mar 25 02:31:35.844 INFO Fetch successful Mar 25 02:31:35.845118 systemd-logind[1815]: New session 1 of user core. Mar 25 02:31:35.845958 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 02:31:35.856395 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 02:31:35.887133 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 02:31:35.899385 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 25 02:31:35.911284 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Mar 25 02:31:35.932445 coreos-metadata[1885]: Mar 25 02:31:35.932 INFO Fetch successful Mar 25 02:31:35.938088 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 02:31:35.959926 (systemd)[1970]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 02:31:35.961395 systemd-logind[1815]: New session c1 of user core. 
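The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: that file is only written when the node is actually joined to a cluster (for example by kubeadm), and until then systemd keeps retrying the unit, which is the "Scheduled restart job" pattern repeated later in this log. A rough sketch of such a file, assuming kubeadm-style defaults rather than anything read from this machine:

    # Illustrative KubeletConfiguration sketch; values are assumptions, not from this host.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches the cgroupDriver=systemd the CRI runtime reports later
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests   # the "Adding static pod path" seen once the kubelet starts
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                             # illustrative in-cluster DNS service address
    EOF

The successful kubelet start at 02:31:58 further down implies the file existed by then, presumably created by the /home/core/install.sh invocation logged at 02:31:47.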
Mar 25 02:31:35.963384 unknown[1885]: wrote ssh authorized keys file for user: core Mar 25 02:31:35.979146 update-ssh-keys[1972]: Updated "/home/core/.ssh/authorized_keys" Mar 25 02:31:35.979439 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 25 02:31:35.991392 systemd[1]: Finished sshkeys.service. Mar 25 02:31:36.065220 systemd[1970]: Queued start job for default target default.target. Mar 25 02:31:36.080233 systemd[1970]: Created slice app.slice - User Application Slice. Mar 25 02:31:36.080247 systemd[1970]: Reached target paths.target - Paths. Mar 25 02:31:36.080268 systemd[1970]: Reached target timers.target - Timers. Mar 25 02:31:36.080931 systemd[1970]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 02:31:36.086624 systemd[1970]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 02:31:36.086655 systemd[1970]: Reached target sockets.target - Sockets. Mar 25 02:31:36.086679 systemd[1970]: Reached target basic.target - Basic System. Mar 25 02:31:36.086721 systemd[1970]: Reached target default.target - Main User Target. Mar 25 02:31:36.086737 systemd[1970]: Startup finished in 121ms. Mar 25 02:31:36.086766 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 02:31:36.097547 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 02:31:36.162980 systemd[1]: Started sshd@1-86.109.11.215:22-139.178.68.195:33502.service - OpenSSH per-connection server daemon (139.178.68.195:33502). Mar 25 02:31:36.218649 sshd[1985]: Accepted publickey for core from 139.178.68.195 port 33502 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:36.219322 sshd-session[1985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:36.222440 systemd-logind[1815]: New session 2 of user core. Mar 25 02:31:36.242837 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 02:31:36.307218 sshd[1987]: Connection closed by 139.178.68.195 port 33502 Mar 25 02:31:36.307377 sshd-session[1985]: pam_unix(sshd:session): session closed for user core Mar 25 02:31:36.326201 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Mar 25 02:31:36.339195 systemd[1]: sshd@1-86.109.11.215:22-139.178.68.195:33502.service: Deactivated successfully. Mar 25 02:31:36.343251 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 02:31:36.345394 systemd-logind[1815]: Session 2 logged out. Waiting for processes to exit. Mar 25 02:31:36.349502 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 02:31:36.363174 systemd[1]: Started sshd@2-86.109.11.215:22-139.178.68.195:33512.service - OpenSSH per-connection server daemon (139.178.68.195:33512). Mar 25 02:31:36.376478 systemd[1]: Startup finished in 2.946s (kernel) + 22.330s (initrd) + 9.249s (userspace) = 34.526s. Mar 25 02:31:36.377725 systemd-logind[1815]: Removed session 2. Mar 25 02:31:36.414492 login[1896]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 02:31:36.418817 systemd-logind[1815]: New session 3 of user core. Mar 25 02:31:36.420370 login[1894]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 02:31:36.437874 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 02:31:36.440309 systemd-logind[1815]: New session 4 of user core. Mar 25 02:31:36.440958 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 25 02:31:36.461447 sshd[1993]: Accepted publickey for core from 139.178.68.195 port 33512 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:36.462209 sshd-session[1993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:36.464877 systemd-logind[1815]: New session 5 of user core. Mar 25 02:31:36.475845 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 02:31:36.533031 sshd[2023]: Connection closed by 139.178.68.195 port 33512 Mar 25 02:31:36.533754 sshd-session[1993]: pam_unix(sshd:session): session closed for user core Mar 25 02:31:36.540365 systemd[1]: sshd@2-86.109.11.215:22-139.178.68.195:33512.service: Deactivated successfully. Mar 25 02:31:36.544538 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 02:31:36.547824 systemd-logind[1815]: Session 5 logged out. Waiting for processes to exit. Mar 25 02:31:36.550668 systemd-logind[1815]: Removed session 5. Mar 25 02:31:37.457441 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:38.240079 systemd[1]: Started sshd@3-86.109.11.215:22-74.82.195.39:39722.service - OpenSSH per-connection server daemon (74.82.195.39:39722). Mar 25 02:31:39.022203 sshd[2029]: Invalid user tere from 74.82.195.39 port 39722 Mar 25 02:31:39.202474 sshd[2029]: Received disconnect from 74.82.195.39 port 39722:11: Bye Bye [preauth] Mar 25 02:31:39.202474 sshd[2029]: Disconnected from invalid user tere 74.82.195.39 port 39722 [preauth] Mar 25 02:31:39.205743 systemd[1]: sshd@3-86.109.11.215:22-74.82.195.39:39722.service: Deactivated successfully. Mar 25 02:31:44.790628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 02:31:44.793578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:31:45.041111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:31:45.043149 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 02:31:45.067520 kubelet[2041]: E0325 02:31:45.067498 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 02:31:45.069484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 02:31:45.069566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 02:31:45.069772 systemd[1]: kubelet.service: Consumed 157ms CPU time, 104.6M memory peak. Mar 25 02:31:46.555213 systemd[1]: Started sshd@4-86.109.11.215:22-139.178.68.195:58548.service - OpenSSH per-connection server daemon (139.178.68.195:58548). Mar 25 02:31:46.599931 sshd[2061]: Accepted publickey for core from 139.178.68.195 port 58548 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:46.600723 sshd-session[2061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:46.604085 systemd-logind[1815]: New session 6 of user core. Mar 25 02:31:46.614820 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 25 02:31:46.667341 sshd[2063]: Connection closed by 139.178.68.195 port 58548 Mar 25 02:31:46.667483 sshd-session[2061]: pam_unix(sshd:session): session closed for user core Mar 25 02:31:46.682883 systemd[1]: sshd@4-86.109.11.215:22-139.178.68.195:58548.service: Deactivated successfully. Mar 25 02:31:46.683752 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 02:31:46.684282 systemd-logind[1815]: Session 6 logged out. Waiting for processes to exit. Mar 25 02:31:46.685277 systemd[1]: Started sshd@5-86.109.11.215:22-139.178.68.195:58554.service - OpenSSH per-connection server daemon (139.178.68.195:58554). Mar 25 02:31:46.685872 systemd-logind[1815]: Removed session 6. Mar 25 02:31:46.739087 sshd[2068]: Accepted publickey for core from 139.178.68.195 port 58554 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:46.740141 sshd-session[2068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:46.744876 systemd-logind[1815]: New session 7 of user core. Mar 25 02:31:46.755868 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 02:31:46.811686 sshd[2071]: Connection closed by 139.178.68.195 port 58554 Mar 25 02:31:46.812261 sshd-session[2068]: pam_unix(sshd:session): session closed for user core Mar 25 02:31:46.834253 systemd[1]: sshd@5-86.109.11.215:22-139.178.68.195:58554.service: Deactivated successfully. Mar 25 02:31:46.838120 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 02:31:46.840331 systemd-logind[1815]: Session 7 logged out. Waiting for processes to exit. Mar 25 02:31:46.844447 systemd[1]: Started sshd@6-86.109.11.215:22-139.178.68.195:58560.service - OpenSSH per-connection server daemon (139.178.68.195:58560). Mar 25 02:31:46.847159 systemd-logind[1815]: Removed session 7. Mar 25 02:31:46.930829 sshd[2076]: Accepted publickey for core from 139.178.68.195 port 58560 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:46.933980 sshd-session[2076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:46.946474 systemd-logind[1815]: New session 8 of user core. Mar 25 02:31:46.966125 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 02:31:47.034913 sshd[2079]: Connection closed by 139.178.68.195 port 58560 Mar 25 02:31:47.035593 sshd-session[2076]: pam_unix(sshd:session): session closed for user core Mar 25 02:31:47.053314 systemd[1]: sshd@6-86.109.11.215:22-139.178.68.195:58560.service: Deactivated successfully. Mar 25 02:31:47.057165 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 02:31:47.059303 systemd-logind[1815]: Session 8 logged out. Waiting for processes to exit. Mar 25 02:31:47.063548 systemd[1]: Started sshd@7-86.109.11.215:22-139.178.68.195:58574.service - OpenSSH per-connection server daemon (139.178.68.195:58574). Mar 25 02:31:47.066574 systemd-logind[1815]: Removed session 8. Mar 25 02:31:47.108924 sshd[2084]: Accepted publickey for core from 139.178.68.195 port 58574 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:47.109597 sshd-session[2084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:47.112547 systemd-logind[1815]: New session 9 of user core. Mar 25 02:31:47.124825 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 25 02:31:47.216827 sudo[2088]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 02:31:47.217145 sudo[2088]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 02:31:47.247921 sudo[2088]: pam_unix(sudo:session): session closed for user root Mar 25 02:31:47.248687 sshd[2087]: Connection closed by 139.178.68.195 port 58574 Mar 25 02:31:47.248898 sshd-session[2084]: pam_unix(sshd:session): session closed for user core Mar 25 02:31:47.269045 systemd[1]: sshd@7-86.109.11.215:22-139.178.68.195:58574.service: Deactivated successfully. Mar 25 02:31:47.270007 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 02:31:47.270542 systemd-logind[1815]: Session 9 logged out. Waiting for processes to exit. Mar 25 02:31:47.271697 systemd[1]: Started sshd@8-86.109.11.215:22-139.178.68.195:58590.service - OpenSSH per-connection server daemon (139.178.68.195:58590). Mar 25 02:31:47.272396 systemd-logind[1815]: Removed session 9. Mar 25 02:31:47.313523 sshd[2093]: Accepted publickey for core from 139.178.68.195 port 58590 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:47.314670 sshd-session[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:47.319353 systemd-logind[1815]: New session 10 of user core. Mar 25 02:31:47.339110 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 02:31:47.398496 sudo[2098]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 02:31:47.398647 sudo[2098]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 02:31:47.400589 sudo[2098]: pam_unix(sudo:session): session closed for user root Mar 25 02:31:47.403242 sudo[2097]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 02:31:47.403395 sudo[2097]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 02:31:47.409168 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 02:31:47.441518 augenrules[2120]: No rules Mar 25 02:31:47.441903 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 02:31:47.442014 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 02:31:47.442631 sudo[2097]: pam_unix(sudo:session): session closed for user root Mar 25 02:31:47.443338 sshd[2096]: Connection closed by 139.178.68.195 port 58590 Mar 25 02:31:47.443491 sshd-session[2093]: pam_unix(sshd:session): session closed for user core Mar 25 02:31:47.457806 systemd[1]: sshd@8-86.109.11.215:22-139.178.68.195:58590.service: Deactivated successfully. Mar 25 02:31:47.458691 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 02:31:47.459233 systemd-logind[1815]: Session 10 logged out. Waiting for processes to exit. Mar 25 02:31:47.460197 systemd[1]: Started sshd@9-86.109.11.215:22-139.178.68.195:58602.service - OpenSSH per-connection server daemon (139.178.68.195:58602). Mar 25 02:31:47.460782 systemd-logind[1815]: Removed session 10. Mar 25 02:31:47.498057 sshd[2128]: Accepted publickey for core from 139.178.68.195 port 58602 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:31:47.498952 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:31:47.502827 systemd-logind[1815]: New session 11 of user core. Mar 25 02:31:47.513751 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 25 02:31:47.577123 sudo[2132]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 02:31:47.578047 sudo[2132]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 02:31:47.912800 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 02:31:47.935965 (dockerd)[2157]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 02:31:48.252402 dockerd[2157]: time="2025-03-25T02:31:48.252336250Z" level=info msg="Starting up" Mar 25 02:31:48.253256 dockerd[2157]: time="2025-03-25T02:31:48.253211993Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 02:31:48.280808 dockerd[2157]: time="2025-03-25T02:31:48.280743795Z" level=info msg="Loading containers: start." Mar 25 02:31:48.409570 kernel: Initializing XFRM netlink socket Mar 25 02:31:48.409932 systemd-timesyncd[1744]: Network configuration changed, trying to establish connection. Mar 25 02:31:48.484603 systemd-networkd[1742]: docker0: Link UP Mar 25 02:31:48.577127 dockerd[2157]: time="2025-03-25T02:31:48.577003778Z" level=info msg="Loading containers: done." Mar 25 02:31:48.134895 systemd-resolved[1743]: Clock change detected. Flushing caches. Mar 25 02:31:48.145357 systemd-journald[1372]: Time jumped backwards, rotating. Mar 25 02:31:48.145406 dockerd[2157]: time="2025-03-25T02:31:48.144779729Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 02:31:48.145406 dockerd[2157]: time="2025-03-25T02:31:48.144934969Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 02:31:48.145406 dockerd[2157]: time="2025-03-25T02:31:48.144996929Z" level=info msg="Daemon has completed initialization" Mar 25 02:31:48.135230 systemd-timesyncd[1744]: Contacted time server [2604:2dc0:202:300::13ac]:123 (2.flatcar.pool.ntp.org). Mar 25 02:31:48.135349 systemd-timesyncd[1744]: Initial clock synchronization to Tue 2025-03-25 02:31:48.134765 UTC. Mar 25 02:31:48.177891 dockerd[2157]: time="2025-03-25T02:31:48.177829437Z" level=info msg="API listen on /run/docker.sock" Mar 25 02:31:48.177941 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 02:31:48.964285 containerd[1835]: time="2025-03-25T02:31:48.964143174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 25 02:31:49.669875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367727969.mount: Deactivated successfully. 
Mar 25 02:31:50.414341 containerd[1835]: time="2025-03-25T02:31:50.414315046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:50.414567 containerd[1835]: time="2025-03-25T02:31:50.414411103Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959268" Mar 25 02:31:50.414878 containerd[1835]: time="2025-03-25T02:31:50.414834533Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:50.416431 containerd[1835]: time="2025-03-25T02:31:50.416417019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:50.416892 containerd[1835]: time="2025-03-25T02:31:50.416877222Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 1.451794701s" Mar 25 02:31:50.416923 containerd[1835]: time="2025-03-25T02:31:50.416898297Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 25 02:31:50.418047 containerd[1835]: time="2025-03-25T02:31:50.417996293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 25 02:31:51.448042 containerd[1835]: time="2025-03-25T02:31:51.447994022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:51.448240 containerd[1835]: time="2025-03-25T02:31:51.448112741Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713776" Mar 25 02:31:51.448567 containerd[1835]: time="2025-03-25T02:31:51.448530672Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:51.449826 containerd[1835]: time="2025-03-25T02:31:51.449789102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:51.450378 containerd[1835]: time="2025-03-25T02:31:51.450340546Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 1.03232678s" Mar 25 02:31:51.450378 containerd[1835]: time="2025-03-25T02:31:51.450358562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 25 02:31:51.450676 
containerd[1835]: time="2025-03-25T02:31:51.450642111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 25 02:31:52.285369 containerd[1835]: time="2025-03-25T02:31:52.285315480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:52.285608 containerd[1835]: time="2025-03-25T02:31:52.285553906Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780368" Mar 25 02:31:52.285895 containerd[1835]: time="2025-03-25T02:31:52.285854847Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:52.287217 containerd[1835]: time="2025-03-25T02:31:52.287177920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:52.287798 containerd[1835]: time="2025-03-25T02:31:52.287754211Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 837.096946ms" Mar 25 02:31:52.287798 containerd[1835]: time="2025-03-25T02:31:52.287771792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 25 02:31:52.288111 containerd[1835]: time="2025-03-25T02:31:52.288071015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 25 02:31:53.088749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490679407.mount: Deactivated successfully. 
Mar 25 02:31:53.275813 containerd[1835]: time="2025-03-25T02:31:53.275788578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:53.276049 containerd[1835]: time="2025-03-25T02:31:53.275988726Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630" Mar 25 02:31:53.276371 containerd[1835]: time="2025-03-25T02:31:53.276335524Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:53.277128 containerd[1835]: time="2025-03-25T02:31:53.277086372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:53.277508 containerd[1835]: time="2025-03-25T02:31:53.277459865Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 989.37086ms" Mar 25 02:31:53.277508 containerd[1835]: time="2025-03-25T02:31:53.277491571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 25 02:31:53.277845 containerd[1835]: time="2025-03-25T02:31:53.277805012Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 25 02:31:53.748156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646772815.mount: Deactivated successfully. 
Mar 25 02:31:54.722755 containerd[1835]: time="2025-03-25T02:31:54.722696389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:54.722954 containerd[1835]: time="2025-03-25T02:31:54.722896946Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 25 02:31:54.723291 containerd[1835]: time="2025-03-25T02:31:54.723252142Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:54.724823 containerd[1835]: time="2025-03-25T02:31:54.724810845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:54.725231 containerd[1835]: time="2025-03-25T02:31:54.725216356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.447394693s" Mar 25 02:31:54.725266 containerd[1835]: time="2025-03-25T02:31:54.725233179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 25 02:31:54.726238 containerd[1835]: time="2025-03-25T02:31:54.726194619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 25 02:31:54.827339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 02:31:54.829924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:31:55.068080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:31:55.070204 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 02:31:55.092994 kubelet[2509]: E0325 02:31:55.092939 2509 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 02:31:55.094116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 02:31:55.094205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 02:31:55.094391 systemd[1]: kubelet.service: Consumed 114ms CPU time, 104M memory peak. Mar 25 02:31:55.242646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846361049.mount: Deactivated successfully. 
Mar 25 02:31:55.243949 containerd[1835]: time="2025-03-25T02:31:55.243931567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 02:31:55.244153 containerd[1835]: time="2025-03-25T02:31:55.244128687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 25 02:31:55.244636 containerd[1835]: time="2025-03-25T02:31:55.244611298Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 02:31:55.245706 containerd[1835]: time="2025-03-25T02:31:55.245693421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 02:31:55.246335 containerd[1835]: time="2025-03-25T02:31:55.246322730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 520.111582ms" Mar 25 02:31:55.246368 containerd[1835]: time="2025-03-25T02:31:55.246338655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 25 02:31:55.246823 containerd[1835]: time="2025-03-25T02:31:55.246800456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 25 02:31:55.738896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069234525.mount: Deactivated successfully. 
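The "PullImage ... returns image reference" entries above and below show containerd's CRI plugin pre-pulling the Kubernetes control-plane images; CRI-pulled images land in containerd's k8s.io namespace rather than in Docker. As a quick way to verify the same thing from a shell, assuming the stock containerd and crictl clients are available on the node (not shown in this log):

    # List the images the CRI plugin has pulled (containerd client):
    ctr --namespace k8s.io images ls | grep registry.k8s.io
    # Or via the CRI-level client:
    crictl images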
Mar 25 02:31:56.800146 containerd[1835]: time="2025-03-25T02:31:56.800119534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:56.800353 containerd[1835]: time="2025-03-25T02:31:56.800291164Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Mar 25 02:31:56.800678 containerd[1835]: time="2025-03-25T02:31:56.800666835Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:56.802180 containerd[1835]: time="2025-03-25T02:31:56.802168725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:31:56.802830 containerd[1835]: time="2025-03-25T02:31:56.802789226Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.555960067s" Mar 25 02:31:56.802830 containerd[1835]: time="2025-03-25T02:31:56.802805932Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 25 02:31:58.499194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:31:58.499305 systemd[1]: kubelet.service: Consumed 114ms CPU time, 104M memory peak. Mar 25 02:31:58.500805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:31:58.520871 systemd[1]: Reload requested from client PID 2634 ('systemctl') (unit session-11.scope)... Mar 25 02:31:58.520878 systemd[1]: Reloading... Mar 25 02:31:58.560411 zram_generator::config[2680]: No configuration found. Mar 25 02:31:58.629932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 02:31:58.711958 systemd[1]: Reloading finished in 190 ms. Mar 25 02:31:58.746084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:31:58.748219 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:31:58.748465 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 02:31:58.748581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:31:58.748623 systemd[1]: kubelet.service: Consumed 52ms CPU time, 83.5M memory peak. Mar 25 02:31:58.749386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:31:58.963329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:31:58.966996 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 02:31:58.987096 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 25 02:31:58.987096 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 02:31:58.987096 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 02:31:58.988013 kubelet[2749]: I0325 02:31:58.987963 2749 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 02:31:59.185397 kubelet[2749]: I0325 02:31:59.185354 2749 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 25 02:31:59.185397 kubelet[2749]: I0325 02:31:59.185368 2749 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 02:31:59.185584 kubelet[2749]: I0325 02:31:59.185548 2749 server.go:929] "Client rotation is on, will bootstrap in background" Mar 25 02:31:59.201137 kubelet[2749]: I0325 02:31:59.201082 2749 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 02:31:59.201984 kubelet[2749]: E0325 02:31:59.201904 2749 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://86.109.11.215:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 86.109.11.215:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:31:59.208521 kubelet[2749]: I0325 02:31:59.208489 2749 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 02:31:59.219780 kubelet[2749]: I0325 02:31:59.219770 2749 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 02:31:59.235604 kubelet[2749]: I0325 02:31:59.235530 2749 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 25 02:31:59.235739 kubelet[2749]: I0325 02:31:59.235672 2749 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 02:31:59.235866 kubelet[2749]: I0325 02:31:59.235694 2749 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-336c6dbb20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 02:31:59.235866 kubelet[2749]: I0325 02:31:59.235860 2749 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 02:31:59.235866 kubelet[2749]: I0325 02:31:59.235867 2749 container_manager_linux.go:300] "Creating device plugin manager" Mar 25 02:31:59.235963 kubelet[2749]: I0325 02:31:59.235942 2749 state_mem.go:36] "Initialized new in-memory state store" Mar 25 02:31:59.239205 kubelet[2749]: I0325 02:31:59.239169 2749 kubelet.go:408] "Attempting to sync node with API server" Mar 25 02:31:59.239205 kubelet[2749]: I0325 02:31:59.239182 2749 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 02:31:59.239205 kubelet[2749]: I0325 02:31:59.239199 2749 kubelet.go:314] "Adding apiserver pod source" Mar 25 02:31:59.239261 kubelet[2749]: I0325 02:31:59.239211 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 02:31:59.242572 kubelet[2749]: W0325 02:31:59.242519 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://86.109.11.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-336c6dbb20&limit=500&resourceVersion=0": dial tcp 86.109.11.215:6443: connect: connection refused Mar 25 02:31:59.242572 kubelet[2749]: E0325 02:31:59.242551 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://86.109.11.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-a-336c6dbb20&limit=500&resourceVersion=0\": dial tcp 86.109.11.215:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:31:59.243277 kubelet[2749]: I0325 02:31:59.243228 2749 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 02:31:59.243776 kubelet[2749]: W0325 02:31:59.243727 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://86.109.11.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 86.109.11.215:6443: connect: connection refused Mar 25 02:31:59.243776 kubelet[2749]: E0325 02:31:59.243759 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://86.109.11.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 86.109.11.215:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:31:59.244540 kubelet[2749]: I0325 02:31:59.244505 2749 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 02:31:59.245138 kubelet[2749]: W0325 02:31:59.245102 2749 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 25 02:31:59.245460 kubelet[2749]: I0325 02:31:59.245414 2749 server.go:1269] "Started kubelet" Mar 25 02:31:59.245605 kubelet[2749]: I0325 02:31:59.245513 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 02:31:59.245696 kubelet[2749]: I0325 02:31:59.245619 2749 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 02:31:59.245809 kubelet[2749]: I0325 02:31:59.245791 2749 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 02:31:59.246437 kubelet[2749]: I0325 02:31:59.246429 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 02:31:59.246507 kubelet[2749]: I0325 02:31:59.246492 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 02:31:59.246551 kubelet[2749]: I0325 02:31:59.246509 2749 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 25 02:31:59.246551 kubelet[2749]: I0325 02:31:59.246497 2749 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 25 02:31:59.246551 kubelet[2749]: E0325 02:31:59.246532 2749 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-336c6dbb20\" not found" Mar 25 02:31:59.246708 kubelet[2749]: I0325 02:31:59.246700 2749 reconciler.go:26] "Reconciler: start to sync state" Mar 25 02:31:59.246751 kubelet[2749]: I0325 02:31:59.246744 2749 server.go:460] "Adding debug handlers to kubelet server" Mar 25 02:31:59.247750 kubelet[2749]: E0325 02:31:59.247736 2749 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 02:31:59.247803 kubelet[2749]: E0325 02:31:59.247761 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-336c6dbb20?timeout=10s\": dial tcp 86.109.11.215:6443: connect: connection refused" interval="200ms" Mar 25 02:31:59.247997 kubelet[2749]: W0325 02:31:59.247923 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://86.109.11.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.215:6443: connect: connection refused Mar 25 02:31:59.247997 kubelet[2749]: I0325 02:31:59.247908 2749 factory.go:221] Registration of the systemd container factory successfully Mar 25 02:31:59.248074 kubelet[2749]: E0325 02:31:59.248058 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://86.109.11.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 86.109.11.215:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:31:59.248148 kubelet[2749]: I0325 02:31:59.248137 2749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 02:31:59.248868 kubelet[2749]: I0325 02:31:59.248860 2749 factory.go:221] Registration of the containerd container factory successfully Mar 25 02:31:59.250576 kubelet[2749]: E0325 02:31:59.248657 2749 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://86.109.11.215:6443/api/v1/namespaces/default/events\": dial tcp 86.109.11.215:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-a-336c6dbb20.182feaf51a55f100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-a-336c6dbb20,UID:ci-4284.0.0-a-336c6dbb20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-a-336c6dbb20,},FirstTimestamp:2025-03-25 02:31:59.245398272 +0000 UTC m=+0.276403267,LastTimestamp:2025-03-25 02:31:59.245398272 +0000 UTC m=+0.276403267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-a-336c6dbb20,}" Mar 25 02:31:59.256485 kubelet[2749]: I0325 02:31:59.256474 2749 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 02:31:59.256485 kubelet[2749]: I0325 02:31:59.256482 2749 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 02:31:59.256551 kubelet[2749]: I0325 02:31:59.256492 2749 state_mem.go:36] "Initialized new in-memory state store" Mar 25 02:31:59.256551 kubelet[2749]: I0325 02:31:59.256511 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 02:31:59.257030 kubelet[2749]: I0325 02:31:59.257022 2749 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 02:31:59.257063 kubelet[2749]: I0325 02:31:59.257037 2749 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 02:31:59.257063 kubelet[2749]: I0325 02:31:59.257047 2749 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 02:31:59.257114 kubelet[2749]: E0325 02:31:59.257067 2749 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 02:31:59.257348 kubelet[2749]: W0325 02:31:59.257332 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://86.109.11.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.215:6443: connect: connection refused Mar 25 02:31:59.257375 kubelet[2749]: I0325 02:31:59.257352 2749 policy_none.go:49] "None policy: Start" Mar 25 02:31:59.257375 kubelet[2749]: E0325 02:31:59.257356 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://86.109.11.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 86.109.11.215:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:31:59.257581 kubelet[2749]: I0325 02:31:59.257574 2749 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 02:31:59.257603 kubelet[2749]: I0325 02:31:59.257584 2749 state_mem.go:35] "Initializing new in-memory state store" Mar 25 02:31:59.261045 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 02:31:59.282421 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 02:31:59.284573 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 25 02:31:59.304754 kubelet[2749]: I0325 02:31:59.304698 2749 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 02:31:59.304950 kubelet[2749]: I0325 02:31:59.304909 2749 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 02:31:59.304950 kubelet[2749]: I0325 02:31:59.304922 2749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 02:31:59.305098 kubelet[2749]: I0325 02:31:59.305078 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 02:31:59.306030 kubelet[2749]: E0325 02:31:59.306006 2749 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-a-336c6dbb20\" not found" Mar 25 02:31:59.379221 systemd[1]: Created slice kubepods-burstable-pod04f1efaf3c4d6479bc5da7cd7e49426a.slice - libcontainer container kubepods-burstable-pod04f1efaf3c4d6479bc5da7cd7e49426a.slice. 
Mar 25 02:31:59.412166 kubelet[2749]: I0325 02:31:59.412078 2749 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.412953 kubelet[2749]: E0325 02:31:59.412841 2749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://86.109.11.215:6443/api/v1/nodes\": dial tcp 86.109.11.215:6443: connect: connection refused" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.426332 systemd[1]: Created slice kubepods-burstable-pod6c6adc9e27c9a8b8a90076342a596f25.slice - libcontainer container kubepods-burstable-pod6c6adc9e27c9a8b8a90076342a596f25.slice. Mar 25 02:31:59.435379 systemd[1]: Created slice kubepods-burstable-pod9daf12821b17fdf8981193fc8d53a01d.slice - libcontainer container kubepods-burstable-pod9daf12821b17fdf8981193fc8d53a01d.slice. Mar 25 02:31:59.448671 kubelet[2749]: E0325 02:31:59.448552 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-336c6dbb20?timeout=10s\": dial tcp 86.109.11.215:6443: connect: connection refused" interval="400ms" Mar 25 02:31:59.548896 kubelet[2749]: I0325 02:31:59.548774 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04f1efaf3c4d6479bc5da7cd7e49426a-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" (UID: \"04f1efaf3c4d6479bc5da7cd7e49426a\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.548896 kubelet[2749]: I0325 02:31:59.548894 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04f1efaf3c4d6479bc5da7cd7e49426a-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" (UID: \"04f1efaf3c4d6479bc5da7cd7e49426a\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.549246 kubelet[2749]: I0325 02:31:59.548976 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04f1efaf3c4d6479bc5da7cd7e49426a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" (UID: \"04f1efaf3c4d6479bc5da7cd7e49426a\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.549246 kubelet[2749]: I0325 02:31:59.549052 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.549246 kubelet[2749]: I0325 02:31:59.549119 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9daf12821b17fdf8981193fc8d53a01d-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-336c6dbb20\" (UID: \"9daf12821b17fdf8981193fc8d53a01d\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.549246 kubelet[2749]: I0325 02:31:59.549178 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-ca-certs\") pod 
\"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.549246 kubelet[2749]: I0325 02:31:59.549240 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.549742 kubelet[2749]: I0325 02:31:59.549302 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.549742 kubelet[2749]: I0325 02:31:59.549361 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.617050 kubelet[2749]: I0325 02:31:59.616998 2749 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.617797 kubelet[2749]: E0325 02:31:59.617732 2749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://86.109.11.215:6443/api/v1/nodes\": dial tcp 86.109.11.215:6443: connect: connection refused" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:31:59.720223 containerd[1835]: time="2025-03-25T02:31:59.719978932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-336c6dbb20,Uid:04f1efaf3c4d6479bc5da7cd7e49426a,Namespace:kube-system,Attempt:0,}" Mar 25 02:31:59.729313 containerd[1835]: time="2025-03-25T02:31:59.729268341Z" level=info msg="connecting to shim 71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1" address="unix:///run/containerd/s/986c9217a9cf587a2fbd8c394825189682df49fa808bdb468256637d3b0b7555" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:31:59.731349 containerd[1835]: time="2025-03-25T02:31:59.731336407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-336c6dbb20,Uid:6c6adc9e27c9a8b8a90076342a596f25,Namespace:kube-system,Attempt:0,}" Mar 25 02:31:59.739256 containerd[1835]: time="2025-03-25T02:31:59.739231968Z" level=info msg="connecting to shim 791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3" address="unix:///run/containerd/s/a026e7f0a53650f8315881df0f837989260c522215ba4a10d7e1bf4e74783648" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:31:59.740230 containerd[1835]: time="2025-03-25T02:31:59.740211511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-336c6dbb20,Uid:9daf12821b17fdf8981193fc8d53a01d,Namespace:kube-system,Attempt:0,}" Mar 25 02:31:59.748509 containerd[1835]: time="2025-03-25T02:31:59.748417743Z" level=info msg="connecting to shim a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53" 
address="unix:///run/containerd/s/7b9c2de04cdcf4e2c177b9ed290661cf731a0ecc9b4d8f6bae22240af25f51cf" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:31:59.754577 systemd[1]: Started cri-containerd-71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1.scope - libcontainer container 71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1. Mar 25 02:31:59.760506 systemd[1]: Started cri-containerd-791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3.scope - libcontainer container 791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3. Mar 25 02:31:59.761330 systemd[1]: Started cri-containerd-a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53.scope - libcontainer container a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53. Mar 25 02:31:59.783247 containerd[1835]: time="2025-03-25T02:31:59.783218428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-a-336c6dbb20,Uid:04f1efaf3c4d6479bc5da7cd7e49426a,Namespace:kube-system,Attempt:0,} returns sandbox id \"71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1\"" Mar 25 02:31:59.784720 containerd[1835]: time="2025-03-25T02:31:59.784698759Z" level=info msg="CreateContainer within sandbox \"71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 02:31:59.787728 containerd[1835]: time="2025-03-25T02:31:59.787709626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-a-336c6dbb20,Uid:6c6adc9e27c9a8b8a90076342a596f25,Namespace:kube-system,Attempt:0,} returns sandbox id \"791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3\"" Mar 25 02:31:59.788016 containerd[1835]: time="2025-03-25T02:31:59.788003570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-a-336c6dbb20,Uid:9daf12821b17fdf8981193fc8d53a01d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53\"" Mar 25 02:31:59.788607 containerd[1835]: time="2025-03-25T02:31:59.788595977Z" level=info msg="CreateContainer within sandbox \"791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 02:31:59.788711 containerd[1835]: time="2025-03-25T02:31:59.788701714Z" level=info msg="Container 32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:31:59.788801 containerd[1835]: time="2025-03-25T02:31:59.788784964Z" level=info msg="CreateContainer within sandbox \"a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 02:31:59.793033 containerd[1835]: time="2025-03-25T02:31:59.792994350Z" level=info msg="CreateContainer within sandbox \"71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c\"" Mar 25 02:31:59.793109 containerd[1835]: time="2025-03-25T02:31:59.793097386Z" level=info msg="Container e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:31:59.793343 containerd[1835]: time="2025-03-25T02:31:59.793329091Z" level=info msg="StartContainer for 
\"32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c\"" Mar 25 02:31:59.793842 containerd[1835]: time="2025-03-25T02:31:59.793802297Z" level=info msg="Container 0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:31:59.794218 containerd[1835]: time="2025-03-25T02:31:59.794181101Z" level=info msg="connecting to shim 32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c" address="unix:///run/containerd/s/986c9217a9cf587a2fbd8c394825189682df49fa808bdb468256637d3b0b7555" protocol=ttrpc version=3 Mar 25 02:31:59.796077 containerd[1835]: time="2025-03-25T02:31:59.796036091Z" level=info msg="CreateContainer within sandbox \"791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0\"" Mar 25 02:31:59.796270 containerd[1835]: time="2025-03-25T02:31:59.796255686Z" level=info msg="StartContainer for \"e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0\"" Mar 25 02:31:59.796762 containerd[1835]: time="2025-03-25T02:31:59.796750131Z" level=info msg="CreateContainer within sandbox \"a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e\"" Mar 25 02:31:59.796808 containerd[1835]: time="2025-03-25T02:31:59.796792911Z" level=info msg="connecting to shim e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0" address="unix:///run/containerd/s/a026e7f0a53650f8315881df0f837989260c522215ba4a10d7e1bf4e74783648" protocol=ttrpc version=3 Mar 25 02:31:59.796927 containerd[1835]: time="2025-03-25T02:31:59.796892538Z" level=info msg="StartContainer for \"0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e\"" Mar 25 02:31:59.797457 containerd[1835]: time="2025-03-25T02:31:59.797398760Z" level=info msg="connecting to shim 0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e" address="unix:///run/containerd/s/7b9c2de04cdcf4e2c177b9ed290661cf731a0ecc9b4d8f6bae22240af25f51cf" protocol=ttrpc version=3 Mar 25 02:31:59.806585 systemd[1]: Started cri-containerd-32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c.scope - libcontainer container 32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c. Mar 25 02:31:59.809173 systemd[1]: Started cri-containerd-0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e.scope - libcontainer container 0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e. Mar 25 02:31:59.809858 systemd[1]: Started cri-containerd-e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0.scope - libcontainer container e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0. 
Mar 25 02:31:59.844032 containerd[1835]: time="2025-03-25T02:31:59.843996528Z" level=info msg="StartContainer for \"0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e\" returns successfully" Mar 25 02:31:59.844136 containerd[1835]: time="2025-03-25T02:31:59.844058503Z" level=info msg="StartContainer for \"e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0\" returns successfully" Mar 25 02:31:59.844168 containerd[1835]: time="2025-03-25T02:31:59.844151551Z" level=info msg="StartContainer for \"32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c\" returns successfully" Mar 25 02:31:59.849854 kubelet[2749]: E0325 02:31:59.849827 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-a-336c6dbb20?timeout=10s\": dial tcp 86.109.11.215:6443: connect: connection refused" interval="800ms" Mar 25 02:32:00.019979 kubelet[2749]: I0325 02:32:00.019901 2749 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:00.728667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806180589.mount: Deactivated successfully. Mar 25 02:32:00.879005 kubelet[2749]: E0325 02:32:00.878982 2749 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-a-336c6dbb20\" not found" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:00.975993 kubelet[2749]: I0325 02:32:00.975358 2749 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:00.975993 kubelet[2749]: E0325 02:32:00.975498 2749 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4284.0.0-a-336c6dbb20\": node \"ci-4284.0.0-a-336c6dbb20\" not found" Mar 25 02:32:01.241182 kubelet[2749]: I0325 02:32:01.241099 2749 apiserver.go:52] "Watching apiserver" Mar 25 02:32:01.247505 kubelet[2749]: I0325 02:32:01.247396 2749 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 25 02:32:01.272840 kubelet[2749]: E0325 02:32:01.272740 2749 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:03.410146 systemd[1]: Reload requested from client PID 3062 ('systemctl') (unit session-11.scope)... Mar 25 02:32:03.410154 systemd[1]: Reloading... Mar 25 02:32:03.459489 zram_generator::config[3108]: No configuration found. Mar 25 02:32:03.543429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 02:32:03.634731 systemd[1]: Reloading finished in 224 ms. Mar 25 02:32:03.666323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:32:03.677176 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 02:32:03.677319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:32:03.677348 systemd[1]: kubelet.service: Consumed 775ms CPU time, 131.2M memory peak. Mar 25 02:32:03.678946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:32:03.979945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 02:32:03.984017 (kubelet)[3173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 02:32:04.003884 kubelet[3173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 02:32:04.003884 kubelet[3173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 02:32:04.003884 kubelet[3173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 02:32:04.004108 kubelet[3173]: I0325 02:32:04.003916 3173 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 02:32:04.007957 kubelet[3173]: I0325 02:32:04.007940 3173 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 25 02:32:04.007957 kubelet[3173]: I0325 02:32:04.007953 3173 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 02:32:04.008171 kubelet[3173]: I0325 02:32:04.008163 3173 server.go:929] "Client rotation is on, will bootstrap in background" Mar 25 02:32:04.009373 kubelet[3173]: I0325 02:32:04.009336 3173 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 02:32:04.010508 kubelet[3173]: I0325 02:32:04.010455 3173 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 02:32:04.012300 kubelet[3173]: I0325 02:32:04.012285 3173 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 02:32:04.020092 kubelet[3173]: I0325 02:32:04.020080 3173 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 02:32:04.020155 kubelet[3173]: I0325 02:32:04.020143 3173 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 25 02:32:04.020223 kubelet[3173]: I0325 02:32:04.020207 3173 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 02:32:04.020324 kubelet[3173]: I0325 02:32:04.020224 3173 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-a-336c6dbb20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 02:32:04.020377 kubelet[3173]: I0325 02:32:04.020328 3173 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 02:32:04.020377 kubelet[3173]: I0325 02:32:04.020335 3173 container_manager_linux.go:300] "Creating device plugin manager" Mar 25 02:32:04.020377 kubelet[3173]: I0325 02:32:04.020352 3173 state_mem.go:36] "Initialized new in-memory state store" Mar 25 02:32:04.020432 kubelet[3173]: I0325 02:32:04.020414 3173 kubelet.go:408] "Attempting to sync node with API server" Mar 25 02:32:04.020432 kubelet[3173]: I0325 02:32:04.020421 3173 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 02:32:04.020466 kubelet[3173]: I0325 02:32:04.020437 3173 kubelet.go:314] "Adding apiserver pod source" Mar 25 02:32:04.020466 kubelet[3173]: I0325 02:32:04.020444 3173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 02:32:04.020761 kubelet[3173]: I0325 02:32:04.020747 3173 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 02:32:04.020994 kubelet[3173]: I0325 02:32:04.020988 3173 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 02:32:04.021209 kubelet[3173]: I0325 02:32:04.021203 3173 server.go:1269] "Started kubelet" Mar 25 02:32:04.021242 kubelet[3173]: I0325 02:32:04.021232 3173 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 02:32:04.021287 
kubelet[3173]: I0325 02:32:04.021256 3173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 02:32:04.021483 kubelet[3173]: I0325 02:32:04.021470 3173 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 02:32:04.022165 kubelet[3173]: I0325 02:32:04.022156 3173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 02:32:04.022210 kubelet[3173]: I0325 02:32:04.022175 3173 server.go:460] "Adding debug handlers to kubelet server" Mar 25 02:32:04.022210 kubelet[3173]: I0325 02:32:04.022181 3173 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 02:32:04.022310 kubelet[3173]: E0325 02:32:04.022280 3173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-a-336c6dbb20\" not found" Mar 25 02:32:04.022470 kubelet[3173]: I0325 02:32:04.022399 3173 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 25 02:32:04.022895 kubelet[3173]: I0325 02:32:04.022602 3173 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 25 02:32:04.022895 kubelet[3173]: I0325 02:32:04.022831 3173 reconciler.go:26] "Reconciler: start to sync state" Mar 25 02:32:04.023127 kubelet[3173]: E0325 02:32:04.023096 3173 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 02:32:04.023531 kubelet[3173]: I0325 02:32:04.023510 3173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 02:32:04.024615 kubelet[3173]: I0325 02:32:04.024605 3173 factory.go:221] Registration of the containerd container factory successfully Mar 25 02:32:04.024615 kubelet[3173]: I0325 02:32:04.024615 3173 factory.go:221] Registration of the systemd container factory successfully Mar 25 02:32:04.027546 kubelet[3173]: I0325 02:32:04.027526 3173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 02:32:04.028133 kubelet[3173]: I0325 02:32:04.028121 3173 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 02:32:04.028166 kubelet[3173]: I0325 02:32:04.028146 3173 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 02:32:04.028166 kubelet[3173]: I0325 02:32:04.028160 3173 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 02:32:04.028219 kubelet[3173]: E0325 02:32:04.028203 3173 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 02:32:04.039007 kubelet[3173]: I0325 02:32:04.038992 3173 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 02:32:04.039007 kubelet[3173]: I0325 02:32:04.039002 3173 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 02:32:04.039007 kubelet[3173]: I0325 02:32:04.039013 3173 state_mem.go:36] "Initialized new in-memory state store" Mar 25 02:32:04.039110 kubelet[3173]: I0325 02:32:04.039097 3173 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 02:32:04.039128 kubelet[3173]: I0325 02:32:04.039103 3173 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 02:32:04.039128 kubelet[3173]: I0325 02:32:04.039116 3173 policy_none.go:49] "None policy: Start" Mar 25 02:32:04.039373 kubelet[3173]: I0325 02:32:04.039363 3173 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 02:32:04.039404 kubelet[3173]: I0325 02:32:04.039376 3173 state_mem.go:35] "Initializing new in-memory state store" Mar 25 02:32:04.039492 kubelet[3173]: I0325 02:32:04.039457 3173 state_mem.go:75] "Updated machine memory state" Mar 25 02:32:04.041506 kubelet[3173]: I0325 02:32:04.041495 3173 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 02:32:04.041603 kubelet[3173]: I0325 02:32:04.041597 3173 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 02:32:04.041625 kubelet[3173]: I0325 02:32:04.041606 3173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 02:32:04.041712 kubelet[3173]: I0325 02:32:04.041705 3173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 02:32:04.136761 kubelet[3173]: W0325 02:32:04.136691 3173 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 02:32:04.137121 kubelet[3173]: W0325 02:32:04.137059 3173 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 02:32:04.137372 kubelet[3173]: W0325 02:32:04.137306 3173 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 02:32:04.148978 kubelet[3173]: I0325 02:32:04.148915 3173 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.159744 kubelet[3173]: I0325 02:32:04.159670 3173 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.160021 kubelet[3173]: I0325 02:32:04.159867 3173 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.324990 kubelet[3173]: I0325 02:32:04.324741 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/04f1efaf3c4d6479bc5da7cd7e49426a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" (UID: \"04f1efaf3c4d6479bc5da7cd7e49426a\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.324990 kubelet[3173]: I0325 02:32:04.324847 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.324990 kubelet[3173]: I0325 02:32:04.324907 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.324990 kubelet[3173]: I0325 02:32:04.324960 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.325593 kubelet[3173]: I0325 02:32:04.325007 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04f1efaf3c4d6479bc5da7cd7e49426a-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" (UID: \"04f1efaf3c4d6479bc5da7cd7e49426a\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.325593 kubelet[3173]: I0325 02:32:04.325057 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.325593 kubelet[3173]: I0325 02:32:04.325108 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c6adc9e27c9a8b8a90076342a596f25-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" (UID: \"6c6adc9e27c9a8b8a90076342a596f25\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.325593 kubelet[3173]: I0325 02:32:04.325159 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9daf12821b17fdf8981193fc8d53a01d-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-a-336c6dbb20\" (UID: \"9daf12821b17fdf8981193fc8d53a01d\") " pod="kube-system/kube-scheduler-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:04.325593 kubelet[3173]: I0325 02:32:04.325205 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04f1efaf3c4d6479bc5da7cd7e49426a-ca-certs\") pod 
\"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" (UID: \"04f1efaf3c4d6479bc5da7cd7e49426a\") " pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:05.021174 kubelet[3173]: I0325 02:32:05.021070 3173 apiserver.go:52] "Watching apiserver" Mar 25 02:32:05.042157 kubelet[3173]: W0325 02:32:05.042085 3173 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 02:32:05.042388 kubelet[3173]: E0325 02:32:05.042221 3173 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4284.0.0-a-336c6dbb20\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:05.042598 kubelet[3173]: W0325 02:32:05.042552 3173 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 02:32:05.042764 kubelet[3173]: W0325 02:32:05.042559 3173 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 02:32:05.042764 kubelet[3173]: E0325 02:32:05.042688 3173 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284.0.0-a-336c6dbb20\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:05.043071 kubelet[3173]: E0325 02:32:05.042820 3173 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-a-336c6dbb20\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:05.060644 kubelet[3173]: I0325 02:32:05.060560 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-a-336c6dbb20" podStartSLOduration=1.06054842 podStartE2EDuration="1.06054842s" podCreationTimestamp="2025-03-25 02:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:32:05.060534107 +0000 UTC m=+1.074039147" watchObservedRunningTime="2025-03-25 02:32:05.06054842 +0000 UTC m=+1.074053457" Mar 25 02:32:05.068016 kubelet[3173]: I0325 02:32:05.067978 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-a-336c6dbb20" podStartSLOduration=1.067968671 podStartE2EDuration="1.067968671s" podCreationTimestamp="2025-03-25 02:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:32:05.064075694 +0000 UTC m=+1.077580733" watchObservedRunningTime="2025-03-25 02:32:05.067968671 +0000 UTC m=+1.081473708" Mar 25 02:32:05.072290 kubelet[3173]: I0325 02:32:05.072271 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-a-336c6dbb20" podStartSLOduration=1.072237923 podStartE2EDuration="1.072237923s" podCreationTimestamp="2025-03-25 02:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:32:05.067960741 +0000 UTC m=+1.081465781" watchObservedRunningTime="2025-03-25 02:32:05.072237923 +0000 UTC m=+1.085742960" Mar 25 02:32:05.123174 kubelet[3173]: I0325 02:32:05.123121 3173 desired_state_of_world_populator.go:154] "Finished populating initial desired 
state of world" Mar 25 02:32:08.041053 kubelet[3173]: I0325 02:32:08.041001 3173 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 02:32:08.041317 containerd[1835]: time="2025-03-25T02:32:08.041154345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 25 02:32:08.041498 kubelet[3173]: I0325 02:32:08.041419 3173 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 02:32:08.399022 sudo[2132]: pam_unix(sudo:session): session closed for user root Mar 25 02:32:08.399703 sshd[2131]: Connection closed by 139.178.68.195 port 58602 Mar 25 02:32:08.399846 sshd-session[2128]: pam_unix(sshd:session): session closed for user core Mar 25 02:32:08.401339 systemd[1]: sshd@9-86.109.11.215:22-139.178.68.195:58602.service: Deactivated successfully. Mar 25 02:32:08.402365 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 02:32:08.402482 systemd[1]: session-11.scope: Consumed 3.021s CPU time, 225.6M memory peak. Mar 25 02:32:08.403376 systemd-logind[1815]: Session 11 logged out. Waiting for processes to exit. Mar 25 02:32:08.404105 systemd-logind[1815]: Removed session 11. Mar 25 02:32:09.061502 systemd[1]: Created slice kubepods-besteffort-podcd03d0c1_d113_4c9f_8b86_34c114abebe2.slice - libcontainer container kubepods-besteffort-podcd03d0c1_d113_4c9f_8b86_34c114abebe2.slice. Mar 25 02:32:09.159947 kubelet[3173]: I0325 02:32:09.159857 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd03d0c1-d113-4c9f-8b86-34c114abebe2-kube-proxy\") pod \"kube-proxy-x4rwc\" (UID: \"cd03d0c1-d113-4c9f-8b86-34c114abebe2\") " pod="kube-system/kube-proxy-x4rwc" Mar 25 02:32:09.160892 kubelet[3173]: I0325 02:32:09.159965 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4sqf\" (UniqueName: \"kubernetes.io/projected/cd03d0c1-d113-4c9f-8b86-34c114abebe2-kube-api-access-r4sqf\") pod \"kube-proxy-x4rwc\" (UID: \"cd03d0c1-d113-4c9f-8b86-34c114abebe2\") " pod="kube-system/kube-proxy-x4rwc" Mar 25 02:32:09.160892 kubelet[3173]: I0325 02:32:09.160032 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd03d0c1-d113-4c9f-8b86-34c114abebe2-xtables-lock\") pod \"kube-proxy-x4rwc\" (UID: \"cd03d0c1-d113-4c9f-8b86-34c114abebe2\") " pod="kube-system/kube-proxy-x4rwc" Mar 25 02:32:09.160892 kubelet[3173]: I0325 02:32:09.160085 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd03d0c1-d113-4c9f-8b86-34c114abebe2-lib-modules\") pod \"kube-proxy-x4rwc\" (UID: \"cd03d0c1-d113-4c9f-8b86-34c114abebe2\") " pod="kube-system/kube-proxy-x4rwc" Mar 25 02:32:09.214386 systemd[1]: Created slice kubepods-besteffort-podbf12e59b_40b0_4c1d_8244_9b5a042317ff.slice - libcontainer container kubepods-besteffort-podbf12e59b_40b0_4c1d_8244_9b5a042317ff.slice. 
Mar 25 02:32:09.260756 kubelet[3173]: I0325 02:32:09.260671 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q46rp\" (UniqueName: \"kubernetes.io/projected/bf12e59b-40b0-4c1d-8244-9b5a042317ff-kube-api-access-q46rp\") pod \"tigera-operator-64ff5465b7-8zh7b\" (UID: \"bf12e59b-40b0-4c1d-8244-9b5a042317ff\") " pod="tigera-operator/tigera-operator-64ff5465b7-8zh7b" Mar 25 02:32:09.261070 kubelet[3173]: I0325 02:32:09.260828 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf12e59b-40b0-4c1d-8244-9b5a042317ff-var-lib-calico\") pod \"tigera-operator-64ff5465b7-8zh7b\" (UID: \"bf12e59b-40b0-4c1d-8244-9b5a042317ff\") " pod="tigera-operator/tigera-operator-64ff5465b7-8zh7b" Mar 25 02:32:09.376577 containerd[1835]: time="2025-03-25T02:32:09.376487780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4rwc,Uid:cd03d0c1-d113-4c9f-8b86-34c114abebe2,Namespace:kube-system,Attempt:0,}" Mar 25 02:32:09.385618 containerd[1835]: time="2025-03-25T02:32:09.385557268Z" level=info msg="connecting to shim aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb" address="unix:///run/containerd/s/251eb1d4cbb2e17273f120e8a7c38d4b610d04bdf82e2d5b6c094593cfeb5440" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:09.409818 systemd[1]: Started cri-containerd-aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb.scope - libcontainer container aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb. Mar 25 02:32:09.461212 containerd[1835]: time="2025-03-25T02:32:09.461191707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4rwc,Uid:cd03d0c1-d113-4c9f-8b86-34c114abebe2,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb\"" Mar 25 02:32:09.462433 containerd[1835]: time="2025-03-25T02:32:09.462421867Z" level=info msg="CreateContainer within sandbox \"aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 02:32:09.466200 containerd[1835]: time="2025-03-25T02:32:09.466160751Z" level=info msg="Container dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:09.469745 containerd[1835]: time="2025-03-25T02:32:09.469704258Z" level=info msg="CreateContainer within sandbox \"aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5\"" Mar 25 02:32:09.470037 containerd[1835]: time="2025-03-25T02:32:09.469973819Z" level=info msg="StartContainer for \"dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5\"" Mar 25 02:32:09.470700 containerd[1835]: time="2025-03-25T02:32:09.470687297Z" level=info msg="connecting to shim dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5" address="unix:///run/containerd/s/251eb1d4cbb2e17273f120e8a7c38d4b610d04bdf82e2d5b6c094593cfeb5440" protocol=ttrpc version=3 Mar 25 02:32:09.485690 systemd[1]: Started cri-containerd-dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5.scope - libcontainer container dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5. 
Mar 25 02:32:09.509453 containerd[1835]: time="2025-03-25T02:32:09.509427631Z" level=info msg="StartContainer for \"dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5\" returns successfully" Mar 25 02:32:09.516294 containerd[1835]: time="2025-03-25T02:32:09.516265559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-8zh7b,Uid:bf12e59b-40b0-4c1d-8244-9b5a042317ff,Namespace:tigera-operator,Attempt:0,}" Mar 25 02:32:09.523622 containerd[1835]: time="2025-03-25T02:32:09.523562095Z" level=info msg="connecting to shim 153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999" address="unix:///run/containerd/s/c6532a05ff70f1c8cc57a1bd5aac12433b3037d12d03e6634b3f1851c0251918" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:09.540580 systemd[1]: Started cri-containerd-153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999.scope - libcontainer container 153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999. Mar 25 02:32:09.579357 containerd[1835]: time="2025-03-25T02:32:09.579332410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-8zh7b,Uid:bf12e59b-40b0-4c1d-8244-9b5a042317ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999\"" Mar 25 02:32:09.580097 containerd[1835]: time="2025-03-25T02:32:09.580082877Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 25 02:32:10.084739 kubelet[3173]: I0325 02:32:10.084588 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x4rwc" podStartSLOduration=1.084539265 podStartE2EDuration="1.084539265s" podCreationTimestamp="2025-03-25 02:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:32:10.067926638 +0000 UTC m=+6.081431798" watchObservedRunningTime="2025-03-25 02:32:10.084539265 +0000 UTC m=+6.098044355" Mar 25 02:32:13.824954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109323702.mount: Deactivated successfully. 
Mar 25 02:32:14.460285 containerd[1835]: time="2025-03-25T02:32:14.460260101Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:14.460520 containerd[1835]: time="2025-03-25T02:32:14.460407181Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008" Mar 25 02:32:14.460820 containerd[1835]: time="2025-03-25T02:32:14.460806588Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:14.461717 containerd[1835]: time="2025-03-25T02:32:14.461704024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:14.462135 containerd[1835]: time="2025-03-25T02:32:14.462120735Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 4.882020718s" Mar 25 02:32:14.462161 containerd[1835]: time="2025-03-25T02:32:14.462138532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\"" Mar 25 02:32:14.463080 containerd[1835]: time="2025-03-25T02:32:14.463036394Z" level=info msg="CreateContainer within sandbox \"153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 25 02:32:14.465689 containerd[1835]: time="2025-03-25T02:32:14.465675646Z" level=info msg="Container 4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:14.468388 containerd[1835]: time="2025-03-25T02:32:14.468375660Z" level=info msg="CreateContainer within sandbox \"153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449\"" Mar 25 02:32:14.468656 containerd[1835]: time="2025-03-25T02:32:14.468605319Z" level=info msg="StartContainer for \"4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449\"" Mar 25 02:32:14.469000 containerd[1835]: time="2025-03-25T02:32:14.468986013Z" level=info msg="connecting to shim 4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449" address="unix:///run/containerd/s/c6532a05ff70f1c8cc57a1bd5aac12433b3037d12d03e6634b3f1851c0251918" protocol=ttrpc version=3 Mar 25 02:32:14.487594 systemd[1]: Started cri-containerd-4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449.scope - libcontainer container 4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449. 
Mar 25 02:32:14.500456 containerd[1835]: time="2025-03-25T02:32:14.500423058Z" level=info msg="StartContainer for \"4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449\" returns successfully" Mar 25 02:32:15.077046 kubelet[3173]: I0325 02:32:15.076965 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-8zh7b" podStartSLOduration=1.194357311 podStartE2EDuration="6.076956054s" podCreationTimestamp="2025-03-25 02:32:09 +0000 UTC" firstStartedPulling="2025-03-25 02:32:09.579879279 +0000 UTC m=+5.593384320" lastFinishedPulling="2025-03-25 02:32:14.462478023 +0000 UTC m=+10.475983063" observedRunningTime="2025-03-25 02:32:15.076923012 +0000 UTC m=+11.090428056" watchObservedRunningTime="2025-03-25 02:32:15.076956054 +0000 UTC m=+11.090461093" Mar 25 02:32:17.327006 systemd[1]: Created slice kubepods-besteffort-podaf05401b_bc4d_42d0_9332_75315ce6cb90.slice - libcontainer container kubepods-besteffort-podaf05401b_bc4d_42d0_9332_75315ce6cb90.slice. Mar 25 02:32:17.340952 systemd[1]: Created slice kubepods-besteffort-podddecaff0_a2f5_4e09_abe0_0dfecf4ec1ea.slice - libcontainer container kubepods-besteffort-podddecaff0_a2f5_4e09_abe0_0dfecf4ec1ea.slice. Mar 25 02:32:17.390714 update_engine[1820]: I20250325 02:32:17.390544 1820 update_attempter.cc:509] Updating boot flags... Mar 25 02:32:17.414260 kubelet[3173]: I0325 02:32:17.414237 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-tigera-ca-bundle\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414501 kubelet[3173]: I0325 02:32:17.414265 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-node-certs\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414501 kubelet[3173]: I0325 02:32:17.414278 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-var-run-calico\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414501 kubelet[3173]: I0325 02:32:17.414294 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-cni-log-dir\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414501 kubelet[3173]: I0325 02:32:17.414326 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bh95\" (UniqueName: \"kubernetes.io/projected/af05401b-bc4d-42d0-9332-75315ce6cb90-kube-api-access-5bh95\") pod \"calico-typha-7946456866-fhdpr\" (UID: \"af05401b-bc4d-42d0-9332-75315ce6cb90\") " pod="calico-system/calico-typha-7946456866-fhdpr" Mar 25 02:32:17.414501 kubelet[3173]: I0325 02:32:17.414344 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-cni-bin-dir\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414648 kubelet[3173]: I0325 02:32:17.414356 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5jd\" (UniqueName: \"kubernetes.io/projected/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-kube-api-access-kw5jd\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414648 kubelet[3173]: I0325 02:32:17.414369 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-xtables-lock\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414648 kubelet[3173]: I0325 02:32:17.414381 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-var-lib-calico\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414648 kubelet[3173]: I0325 02:32:17.414392 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-lib-modules\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414648 kubelet[3173]: I0325 02:32:17.414411 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-cni-net-dir\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414755 kubelet[3173]: I0325 02:32:17.414440 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af05401b-bc4d-42d0-9332-75315ce6cb90-tigera-ca-bundle\") pod \"calico-typha-7946456866-fhdpr\" (UID: \"af05401b-bc4d-42d0-9332-75315ce6cb90\") " pod="calico-system/calico-typha-7946456866-fhdpr" Mar 25 02:32:17.414755 kubelet[3173]: I0325 02:32:17.414469 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/af05401b-bc4d-42d0-9332-75315ce6cb90-typha-certs\") pod \"calico-typha-7946456866-fhdpr\" (UID: \"af05401b-bc4d-42d0-9332-75315ce6cb90\") " pod="calico-system/calico-typha-7946456866-fhdpr" Mar 25 02:32:17.414755 kubelet[3173]: I0325 02:32:17.414499 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-policysync\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.414755 kubelet[3173]: I0325 02:32:17.414516 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea-flexvol-driver-host\") pod \"calico-node-vbxw5\" (UID: \"ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea\") " pod="calico-system/calico-node-vbxw5" Mar 25 02:32:17.434420 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (3669) Mar 25 02:32:17.442662 kubelet[3173]: E0325 02:32:17.442556 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ck5hf" podUID="6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4" Mar 25 02:32:17.464413 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 39 scanned by (udev-worker) (3668) Mar 25 02:32:17.515671 kubelet[3173]: I0325 02:32:17.515622 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4-socket-dir\") pod \"csi-node-driver-ck5hf\" (UID: \"6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4\") " pod="calico-system/csi-node-driver-ck5hf" Mar 25 02:32:17.515671 kubelet[3173]: I0325 02:32:17.515643 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsgr\" (UniqueName: \"kubernetes.io/projected/6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4-kube-api-access-cxsgr\") pod \"csi-node-driver-ck5hf\" (UID: \"6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4\") " pod="calico-system/csi-node-driver-ck5hf" Mar 25 02:32:17.515671 kubelet[3173]: I0325 02:32:17.515668 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4-registration-dir\") pod \"csi-node-driver-ck5hf\" (UID: \"6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4\") " pod="calico-system/csi-node-driver-ck5hf" Mar 25 02:32:17.515836 kubelet[3173]: I0325 02:32:17.515723 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4-varrun\") pod \"csi-node-driver-ck5hf\" (UID: \"6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4\") " pod="calico-system/csi-node-driver-ck5hf" Mar 25 02:32:17.515836 kubelet[3173]: I0325 02:32:17.515768 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4-kubelet-dir\") pod \"csi-node-driver-ck5hf\" (UID: \"6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4\") " pod="calico-system/csi-node-driver-ck5hf" Mar 25 02:32:17.516372 kubelet[3173]: E0325 02:32:17.516361 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.516372 kubelet[3173]: W0325 02:32:17.516369 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.516453 kubelet[3173]: E0325 02:32:17.516379 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:32:17.516522 kubelet[3173]: E0325 02:32:17.516513 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.516522 kubelet[3173]: W0325 02:32:17.516520 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.516588 kubelet[3173]: E0325 02:32:17.516527 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.517442 kubelet[3173]: E0325 02:32:17.517409 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.517442 kubelet[3173]: W0325 02:32:17.517417 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.517442 kubelet[3173]: E0325 02:32:17.517426 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.517621 kubelet[3173]: E0325 02:32:17.517586 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.517621 kubelet[3173]: W0325 02:32:17.517592 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.517621 kubelet[3173]: E0325 02:32:17.517598 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.520033 kubelet[3173]: E0325 02:32:17.519999 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.520033 kubelet[3173]: W0325 02:32:17.520008 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.520033 kubelet[3173]: E0325 02:32:17.520016 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.520202 kubelet[3173]: E0325 02:32:17.520171 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.520202 kubelet[3173]: W0325 02:32:17.520176 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.520202 kubelet[3173]: E0325 02:32:17.520182 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:32:17.617044 kubelet[3173]: E0325 02:32:17.616859 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.617044 kubelet[3173]: W0325 02:32:17.616906 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.617044 kubelet[3173]: E0325 02:32:17.616959 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.617847 kubelet[3173]: E0325 02:32:17.617796 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.617847 kubelet[3173]: W0325 02:32:17.617834 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.618208 kubelet[3173]: E0325 02:32:17.617888 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.618626 kubelet[3173]: E0325 02:32:17.618531 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.618626 kubelet[3173]: W0325 02:32:17.618570 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.618626 kubelet[3173]: E0325 02:32:17.618616 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.619523 kubelet[3173]: E0325 02:32:17.619466 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.619523 kubelet[3173]: W0325 02:32:17.619519 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.619884 kubelet[3173]: E0325 02:32:17.619580 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.620213 kubelet[3173]: E0325 02:32:17.620157 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.620213 kubelet[3173]: W0325 02:32:17.620198 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.620614 kubelet[3173]: E0325 02:32:17.620245 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:32:17.620878 kubelet[3173]: E0325 02:32:17.620825 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.620878 kubelet[3173]: W0325 02:32:17.620864 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.621114 kubelet[3173]: E0325 02:32:17.620987 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.621507 kubelet[3173]: E0325 02:32:17.621467 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.621507 kubelet[3173]: W0325 02:32:17.621499 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.621727 kubelet[3173]: E0325 02:32:17.621616 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.622090 kubelet[3173]: E0325 02:32:17.622048 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.622090 kubelet[3173]: W0325 02:32:17.622086 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.622304 kubelet[3173]: E0325 02:32:17.622203 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.622760 kubelet[3173]: E0325 02:32:17.622674 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.622760 kubelet[3173]: W0325 02:32:17.622710 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.623052 kubelet[3173]: E0325 02:32:17.622827 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.623266 kubelet[3173]: E0325 02:32:17.623195 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.623266 kubelet[3173]: W0325 02:32:17.623222 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.623590 kubelet[3173]: E0325 02:32:17.623297 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:32:17.623734 kubelet[3173]: E0325 02:32:17.623687 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.623734 kubelet[3173]: W0325 02:32:17.623712 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.623922 kubelet[3173]: E0325 02:32:17.623820 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.624194 kubelet[3173]: E0325 02:32:17.624122 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.624194 kubelet[3173]: W0325 02:32:17.624149 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.624484 kubelet[3173]: E0325 02:32:17.624223 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.624786 kubelet[3173]: E0325 02:32:17.624711 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.624786 kubelet[3173]: W0325 02:32:17.624747 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.625079 kubelet[3173]: E0325 02:32:17.624819 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.625286 kubelet[3173]: E0325 02:32:17.625229 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.625286 kubelet[3173]: W0325 02:32:17.625253 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.625544 kubelet[3173]: E0325 02:32:17.625367 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.625898 kubelet[3173]: E0325 02:32:17.625824 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.625898 kubelet[3173]: W0325 02:32:17.625851 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.626178 kubelet[3173]: E0325 02:32:17.625966 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:32:17.626461 kubelet[3173]: E0325 02:32:17.626393 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.626461 kubelet[3173]: W0325 02:32:17.626456 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.626679 kubelet[3173]: E0325 02:32:17.626536 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.627083 kubelet[3173]: E0325 02:32:17.626997 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.627083 kubelet[3173]: W0325 02:32:17.627035 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.627353 kubelet[3173]: E0325 02:32:17.627160 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.627747 kubelet[3173]: E0325 02:32:17.627659 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.627747 kubelet[3173]: W0325 02:32:17.627695 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.627997 kubelet[3173]: E0325 02:32:17.627819 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.628324 kubelet[3173]: E0325 02:32:17.628246 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.628324 kubelet[3173]: W0325 02:32:17.628274 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.628630 kubelet[3173]: E0325 02:32:17.628384 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.628905 kubelet[3173]: E0325 02:32:17.628823 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.628905 kubelet[3173]: W0325 02:32:17.628860 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.629176 kubelet[3173]: E0325 02:32:17.628977 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:32:17.629424 kubelet[3173]: E0325 02:32:17.629376 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.629424 kubelet[3173]: W0325 02:32:17.629418 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.629627 kubelet[3173]: E0325 02:32:17.629545 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.630110 kubelet[3173]: E0325 02:32:17.630028 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.630110 kubelet[3173]: W0325 02:32:17.630065 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.630110 kubelet[3173]: E0325 02:32:17.630107 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.630804 kubelet[3173]: E0325 02:32:17.630731 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.630804 kubelet[3173]: W0325 02:32:17.630762 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.630804 kubelet[3173]: E0325 02:32:17.630801 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.631508 containerd[1835]: time="2025-03-25T02:32:17.631381301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7946456866-fhdpr,Uid:af05401b-bc4d-42d0-9332-75315ce6cb90,Namespace:calico-system,Attempt:0,}" Mar 25 02:32:17.632226 kubelet[3173]: E0325 02:32:17.631512 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.632226 kubelet[3173]: W0325 02:32:17.631556 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.632226 kubelet[3173]: E0325 02:32:17.631613 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:32:17.632496 kubelet[3173]: E0325 02:32:17.632451 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.632496 kubelet[3173]: W0325 02:32:17.632465 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.632496 kubelet[3173]: E0325 02:32:17.632482 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.636667 kubelet[3173]: E0325 02:32:17.636655 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:32:17.636667 kubelet[3173]: W0325 02:32:17.636664 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:32:17.636730 kubelet[3173]: E0325 02:32:17.636674 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:32:17.639885 containerd[1835]: time="2025-03-25T02:32:17.639835107Z" level=info msg="connecting to shim 0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38" address="unix:///run/containerd/s/a14ab76be22c329c93e1a6ea19fa26792940b0e735801830559395bfe5ff1717" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:17.643047 containerd[1835]: time="2025-03-25T02:32:17.643027589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vbxw5,Uid:ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea,Namespace:calico-system,Attempt:0,}" Mar 25 02:32:17.649227 containerd[1835]: time="2025-03-25T02:32:17.649203507Z" level=info msg="connecting to shim 4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f" address="unix:///run/containerd/s/e9969381c9b15da4506350ddd3f1cc8c19b674723fb9fe8b697456d46e617ea4" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:17.670752 systemd[1]: Started cri-containerd-0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38.scope - libcontainer container 0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38. Mar 25 02:32:17.672663 systemd[1]: Started cri-containerd-4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f.scope - libcontainer container 4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f. 
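The burst of kubelet errors above is the FlexVolume plugin prober at work: on each probe it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and expects a JSON status document on stdout. The uds binary is not on the host yet (installing it is the job of the flexvol-driver init container of calico-node-vbxw5, which runs a couple of seconds later in this log), so the exec fails with "executable file not found in $PATH", stdout stays empty, and the unmarshal reports "unexpected end of JSON input". The messages are noisy but harmless and stop once the driver is in place. As a rough, illustrative sketch only (not the Calico driver itself), a FlexVolume driver that would satisfy this init probe looks roughly like:

#!/usr/bin/env python3
# Minimal illustrative FlexVolume driver: answer the kubelet's "init" call with
# the JSON status document the plugin prober expects on stdout.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # "attach": False tells the kubelet this driver needs no attach/detach phase.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Unimplemented calls should still emit JSON; empty output is exactly what
    # produces the "unexpected end of JSON input" errors seen in the log.
    print(json.dumps({"status": "Not supported", "message": "call not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())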
Mar 25 02:32:17.684925 containerd[1835]: time="2025-03-25T02:32:17.684870757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vbxw5,Uid:ddecaff0-a2f5-4e09-abe0-0dfecf4ec1ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f\"" Mar 25 02:32:17.685581 containerd[1835]: time="2025-03-25T02:32:17.685567325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 25 02:32:17.699111 containerd[1835]: time="2025-03-25T02:32:17.699090182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7946456866-fhdpr,Uid:af05401b-bc4d-42d0-9332-75315ce6cb90,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38\"" Mar 25 02:32:19.029205 kubelet[3173]: E0325 02:32:19.029060 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ck5hf" podUID="6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4" Mar 25 02:32:19.335518 containerd[1835]: time="2025-03-25T02:32:19.335451963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:19.335701 containerd[1835]: time="2025-03-25T02:32:19.335634006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 25 02:32:19.335934 containerd[1835]: time="2025-03-25T02:32:19.335893670Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:19.336881 containerd[1835]: time="2025-03-25T02:32:19.336834688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:19.337252 containerd[1835]: time="2025-03-25T02:32:19.337212269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.651626782s" Mar 25 02:32:19.337252 containerd[1835]: time="2025-03-25T02:32:19.337225560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 25 02:32:19.337732 containerd[1835]: time="2025-03-25T02:32:19.337692668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 25 02:32:19.338186 containerd[1835]: time="2025-03-25T02:32:19.338144714Z" level=info msg="CreateContainer within sandbox \"4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 25 02:32:19.341500 containerd[1835]: time="2025-03-25T02:32:19.341484795Z" level=info msg="Container 427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b: CDI devices from CRI Config.CDIDevices: []" Mar 
25 02:32:19.345716 containerd[1835]: time="2025-03-25T02:32:19.345672723Z" level=info msg="CreateContainer within sandbox \"4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b\"" Mar 25 02:32:19.345982 containerd[1835]: time="2025-03-25T02:32:19.345937335Z" level=info msg="StartContainer for \"427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b\"" Mar 25 02:32:19.346718 containerd[1835]: time="2025-03-25T02:32:19.346676317Z" level=info msg="connecting to shim 427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b" address="unix:///run/containerd/s/e9969381c9b15da4506350ddd3f1cc8c19b674723fb9fe8b697456d46e617ea4" protocol=ttrpc version=3 Mar 25 02:32:19.372583 systemd[1]: Started cri-containerd-427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b.scope - libcontainer container 427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b. Mar 25 02:32:19.394713 containerd[1835]: time="2025-03-25T02:32:19.394687955Z" level=info msg="StartContainer for \"427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b\" returns successfully" Mar 25 02:32:19.399902 systemd[1]: cri-containerd-427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b.scope: Deactivated successfully. Mar 25 02:32:19.401199 containerd[1835]: time="2025-03-25T02:32:19.401178336Z" level=info msg="received exit event container_id:\"427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b\" id:\"427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b\" pid:3822 exited_at:{seconds:1742869939 nanos:400901494}" Mar 25 02:32:19.401256 containerd[1835]: time="2025-03-25T02:32:19.401224380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b\" id:\"427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b\" pid:3822 exited_at:{seconds:1742869939 nanos:400901494}" Mar 25 02:32:19.413926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b-rootfs.mount: Deactivated successfully. 
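For context on the sequence just above: sandbox 4cd08ea8... belongs to calico-node-vbxw5, and flexvol-driver, the container started first in that sandbox, is the init container whose job is to install the uds FlexVolume driver binary onto the host via the flexvol-driver-host host-path volume listed earlier. It copies the file and exits, which is why "StartContainer ... returns successfully" is immediately followed by the scope being deactivated, the exit event, and the rootfs unmount. The reported pull duration also lines up with the surrounding timestamps: the PullImage request was logged at 02:32:17.685567 and the Pulled event at 02:32:19.337212, about 1.6516 s apart, against the reported 1.651626782s. A small cross-check of that arithmetic (timestamps copied from the log, truncated to microseconds since that is what datetime parses; the tiny remainder is scheduling skew between the log line and the internal measurement):

# Cross-check containerd's reported pull duration against the two log timestamps.
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
started = datetime.strptime("2025-03-25T02:32:17.685567+0000", fmt)   # PullImage request
finished = datetime.strptime("2025-03-25T02:32:19.337212+0000", fmt)  # Pulled event
print(finished - started)  # ~0:00:01.651645, vs. the reported 1.651626782s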
Mar 25 02:32:21.029084 kubelet[3173]: E0325 02:32:21.029022 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ck5hf" podUID="6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4" Mar 25 02:32:21.595582 containerd[1835]: time="2025-03-25T02:32:21.595529972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:21.595838 containerd[1835]: time="2025-03-25T02:32:21.595777178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 25 02:32:21.596156 containerd[1835]: time="2025-03-25T02:32:21.596112279Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:21.596890 containerd[1835]: time="2025-03-25T02:32:21.596851579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:21.597224 containerd[1835]: time="2025-03-25T02:32:21.597186045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 2.259479167s" Mar 25 02:32:21.597224 containerd[1835]: time="2025-03-25T02:32:21.597197780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 25 02:32:21.597747 containerd[1835]: time="2025-03-25T02:32:21.597707950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 25 02:32:21.600561 containerd[1835]: time="2025-03-25T02:32:21.600544700Z" level=info msg="CreateContainer within sandbox \"0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 25 02:32:21.620579 containerd[1835]: time="2025-03-25T02:32:21.620538754Z" level=info msg="Container 5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:21.623840 containerd[1835]: time="2025-03-25T02:32:21.623826709Z" level=info msg="CreateContainer within sandbox \"0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d\"" Mar 25 02:32:21.624061 containerd[1835]: time="2025-03-25T02:32:21.624046868Z" level=info msg="StartContainer for \"5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d\"" Mar 25 02:32:21.624600 containerd[1835]: time="2025-03-25T02:32:21.624559617Z" level=info msg="connecting to shim 5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d" address="unix:///run/containerd/s/a14ab76be22c329c93e1a6ea19fa26792940b0e735801830559395bfe5ff1717" protocol=ttrpc version=3 Mar 25 02:32:21.641700 systemd[1]: Started 
cri-containerd-5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d.scope - libcontainer container 5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d. Mar 25 02:32:21.675149 containerd[1835]: time="2025-03-25T02:32:21.675124234Z" level=info msg="StartContainer for \"5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d\" returns successfully" Mar 25 02:32:23.029117 kubelet[3173]: E0325 02:32:23.029070 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ck5hf" podUID="6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4" Mar 25 02:32:23.092056 kubelet[3173]: I0325 02:32:23.091960 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:32:25.029162 kubelet[3173]: E0325 02:32:25.029131 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ck5hf" podUID="6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4" Mar 25 02:32:25.374477 containerd[1835]: time="2025-03-25T02:32:25.374455163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:25.374733 containerd[1835]: time="2025-03-25T02:32:25.374710300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 25 02:32:25.375100 containerd[1835]: time="2025-03-25T02:32:25.375088850Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:25.375930 containerd[1835]: time="2025-03-25T02:32:25.375914785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:25.376301 containerd[1835]: time="2025-03-25T02:32:25.376286996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 3.778563811s" Mar 25 02:32:25.376332 containerd[1835]: time="2025-03-25T02:32:25.376302648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 25 02:32:25.377080 containerd[1835]: time="2025-03-25T02:32:25.377066605Z" level=info msg="CreateContainer within sandbox \"4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 25 02:32:25.380225 containerd[1835]: time="2025-03-25T02:32:25.380184769Z" level=info msg="Container 642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:25.384412 containerd[1835]: time="2025-03-25T02:32:25.384390183Z" level=info msg="CreateContainer within sandbox 
\"4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03\"" Mar 25 02:32:25.384621 containerd[1835]: time="2025-03-25T02:32:25.384609708Z" level=info msg="StartContainer for \"642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03\"" Mar 25 02:32:25.385368 containerd[1835]: time="2025-03-25T02:32:25.385355633Z" level=info msg="connecting to shim 642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03" address="unix:///run/containerd/s/e9969381c9b15da4506350ddd3f1cc8c19b674723fb9fe8b697456d46e617ea4" protocol=ttrpc version=3 Mar 25 02:32:25.400696 systemd[1]: Started cri-containerd-642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03.scope - libcontainer container 642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03. Mar 25 02:32:25.421204 containerd[1835]: time="2025-03-25T02:32:25.421181785Z" level=info msg="StartContainer for \"642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03\" returns successfully" Mar 25 02:32:25.913784 systemd[1]: cri-containerd-642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03.scope: Deactivated successfully. Mar 25 02:32:25.913951 systemd[1]: cri-containerd-642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03.scope: Consumed 308ms CPU time, 175.8M memory peak, 154M written to disk. Mar 25 02:32:25.914665 containerd[1835]: time="2025-03-25T02:32:25.914644080Z" level=info msg="received exit event container_id:\"642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03\" id:\"642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03\" pid:3934 exited_at:{seconds:1742869945 nanos:914521766}" Mar 25 02:32:25.914761 containerd[1835]: time="2025-03-25T02:32:25.914707695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03\" id:\"642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03\" pid:3934 exited_at:{seconds:1742869945 nanos:914521766}" Mar 25 02:32:25.926057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03-rootfs.mount: Deactivated successfully. Mar 25 02:32:25.989214 kubelet[3173]: I0325 02:32:25.989153 3173 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 25 02:32:26.021413 kubelet[3173]: I0325 02:32:26.021353 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7946456866-fhdpr" podStartSLOduration=5.123307801 podStartE2EDuration="9.021337064s" podCreationTimestamp="2025-03-25 02:32:17 +0000 UTC" firstStartedPulling="2025-03-25 02:32:17.699620318 +0000 UTC m=+13.713125362" lastFinishedPulling="2025-03-25 02:32:21.597649584 +0000 UTC m=+17.611154625" observedRunningTime="2025-03-25 02:32:22.106655989 +0000 UTC m=+18.120161083" watchObservedRunningTime="2025-03-25 02:32:26.021337064 +0000 UTC m=+22.034842101" Mar 25 02:32:26.025305 systemd[1]: Created slice kubepods-besteffort-pod569a735e_7aaa_4a7c_a997_92c66cb6394c.slice - libcontainer container kubepods-besteffort-pod569a735e_7aaa_4a7c_a997_92c66cb6394c.slice. Mar 25 02:32:26.028416 systemd[1]: Created slice kubepods-burstable-podcabb4ada_1163_4bc2_be75_75e2593d4d12.slice - libcontainer container kubepods-burstable-podcabb4ada_1163_4bc2_be75_75e2593d4d12.slice. 
Mar 25 02:32:26.031978 systemd[1]: Created slice kubepods-burstable-podf7efc5ee_b24b_4101_90f3_caf1b6873d50.slice - libcontainer container kubepods-burstable-podf7efc5ee_b24b_4101_90f3_caf1b6873d50.slice. Mar 25 02:32:26.034501 systemd[1]: Created slice kubepods-besteffort-podd5311d0f_0fd6_4d52_b978_ae321b94d4eb.slice - libcontainer container kubepods-besteffort-podd5311d0f_0fd6_4d52_b978_ae321b94d4eb.slice. Mar 25 02:32:26.036842 systemd[1]: Created slice kubepods-besteffort-pod99b09400_cf3a_44cf_b382_b27528bc0bb8.slice - libcontainer container kubepods-besteffort-pod99b09400_cf3a_44cf_b382_b27528bc0bb8.slice. Mar 25 02:32:26.078586 kubelet[3173]: I0325 02:32:26.078505 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2phgg\" (UniqueName: \"kubernetes.io/projected/569a735e-7aaa-4a7c-a997-92c66cb6394c-kube-api-access-2phgg\") pod \"calico-kube-controllers-699bd55f66-nb546\" (UID: \"569a735e-7aaa-4a7c-a997-92c66cb6394c\") " pod="calico-system/calico-kube-controllers-699bd55f66-nb546" Mar 25 02:32:26.078586 kubelet[3173]: I0325 02:32:26.078548 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twnfb\" (UniqueName: \"kubernetes.io/projected/cabb4ada-1163-4bc2-be75-75e2593d4d12-kube-api-access-twnfb\") pod \"coredns-6f6b679f8f-fvw5k\" (UID: \"cabb4ada-1163-4bc2-be75-75e2593d4d12\") " pod="kube-system/coredns-6f6b679f8f-fvw5k" Mar 25 02:32:26.078586 kubelet[3173]: I0325 02:32:26.078566 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/569a735e-7aaa-4a7c-a997-92c66cb6394c-tigera-ca-bundle\") pod \"calico-kube-controllers-699bd55f66-nb546\" (UID: \"569a735e-7aaa-4a7c-a997-92c66cb6394c\") " pod="calico-system/calico-kube-controllers-699bd55f66-nb546" Mar 25 02:32:26.078586 kubelet[3173]: I0325 02:32:26.078579 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9bq\" (UniqueName: \"kubernetes.io/projected/f7efc5ee-b24b-4101-90f3-caf1b6873d50-kube-api-access-9z9bq\") pod \"coredns-6f6b679f8f-9lr4z\" (UID: \"f7efc5ee-b24b-4101-90f3-caf1b6873d50\") " pod="kube-system/coredns-6f6b679f8f-9lr4z" Mar 25 02:32:26.078586 kubelet[3173]: I0325 02:32:26.078597 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45zt\" (UniqueName: \"kubernetes.io/projected/d5311d0f-0fd6-4d52-b978-ae321b94d4eb-kube-api-access-z45zt\") pod \"calico-apiserver-f4c6448c9-5k2s9\" (UID: \"d5311d0f-0fd6-4d52-b978-ae321b94d4eb\") " pod="calico-apiserver/calico-apiserver-f4c6448c9-5k2s9" Mar 25 02:32:26.087916 kubelet[3173]: I0325 02:32:26.078611 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99b09400-cf3a-44cf-b382-b27528bc0bb8-calico-apiserver-certs\") pod \"calico-apiserver-f4c6448c9-8cf5b\" (UID: \"99b09400-cf3a-44cf-b382-b27528bc0bb8\") " pod="calico-apiserver/calico-apiserver-f4c6448c9-8cf5b" Mar 25 02:32:26.087916 kubelet[3173]: I0325 02:32:26.078626 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cabb4ada-1163-4bc2-be75-75e2593d4d12-config-volume\") pod \"coredns-6f6b679f8f-fvw5k\" (UID: \"cabb4ada-1163-4bc2-be75-75e2593d4d12\") " 
pod="kube-system/coredns-6f6b679f8f-fvw5k" Mar 25 02:32:26.087916 kubelet[3173]: I0325 02:32:26.078641 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5311d0f-0fd6-4d52-b978-ae321b94d4eb-calico-apiserver-certs\") pod \"calico-apiserver-f4c6448c9-5k2s9\" (UID: \"d5311d0f-0fd6-4d52-b978-ae321b94d4eb\") " pod="calico-apiserver/calico-apiserver-f4c6448c9-5k2s9" Mar 25 02:32:26.087916 kubelet[3173]: I0325 02:32:26.078655 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7efc5ee-b24b-4101-90f3-caf1b6873d50-config-volume\") pod \"coredns-6f6b679f8f-9lr4z\" (UID: \"f7efc5ee-b24b-4101-90f3-caf1b6873d50\") " pod="kube-system/coredns-6f6b679f8f-9lr4z" Mar 25 02:32:26.087916 kubelet[3173]: I0325 02:32:26.078667 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7pmw\" (UniqueName: \"kubernetes.io/projected/99b09400-cf3a-44cf-b382-b27528bc0bb8-kube-api-access-t7pmw\") pod \"calico-apiserver-f4c6448c9-8cf5b\" (UID: \"99b09400-cf3a-44cf-b382-b27528bc0bb8\") " pod="calico-apiserver/calico-apiserver-f4c6448c9-8cf5b" Mar 25 02:32:26.328336 containerd[1835]: time="2025-03-25T02:32:26.328217880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699bd55f66-nb546,Uid:569a735e-7aaa-4a7c-a997-92c66cb6394c,Namespace:calico-system,Attempt:0,}" Mar 25 02:32:26.332633 containerd[1835]: time="2025-03-25T02:32:26.332523812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fvw5k,Uid:cabb4ada-1163-4bc2-be75-75e2593d4d12,Namespace:kube-system,Attempt:0,}" Mar 25 02:32:26.334758 containerd[1835]: time="2025-03-25T02:32:26.334674345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9lr4z,Uid:f7efc5ee-b24b-4101-90f3-caf1b6873d50,Namespace:kube-system,Attempt:0,}" Mar 25 02:32:26.337000 containerd[1835]: time="2025-03-25T02:32:26.336920119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-5k2s9,Uid:d5311d0f-0fd6-4d52-b978-ae321b94d4eb,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:32:26.339207 containerd[1835]: time="2025-03-25T02:32:26.339126460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-8cf5b,Uid:99b09400-cf3a-44cf-b382-b27528bc0bb8,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:32:26.611361 containerd[1835]: time="2025-03-25T02:32:26.611273839Z" level=error msg="Failed to destroy network for sandbox \"659c23a00580529018440633a25e56aa80d7a4d46224fb2367248968cc6f659d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.611765 containerd[1835]: time="2025-03-25T02:32:26.611744845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fvw5k,Uid:cabb4ada-1163-4bc2-be75-75e2593d4d12,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"659c23a00580529018440633a25e56aa80d7a4d46224fb2367248968cc6f659d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.611938 kubelet[3173]: 
E0325 02:32:26.611901 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659c23a00580529018440633a25e56aa80d7a4d46224fb2367248968cc6f659d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.611989 kubelet[3173]: E0325 02:32:26.611971 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659c23a00580529018440633a25e56aa80d7a4d46224fb2367248968cc6f659d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fvw5k" Mar 25 02:32:26.612103 kubelet[3173]: E0325 02:32:26.611991 3173 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659c23a00580529018440633a25e56aa80d7a4d46224fb2367248968cc6f659d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fvw5k" Mar 25 02:32:26.612103 kubelet[3173]: E0325 02:32:26.612034 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fvw5k_kube-system(cabb4ada-1163-4bc2-be75-75e2593d4d12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fvw5k_kube-system(cabb4ada-1163-4bc2-be75-75e2593d4d12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"659c23a00580529018440633a25e56aa80d7a4d46224fb2367248968cc6f659d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fvw5k" podUID="cabb4ada-1163-4bc2-be75-75e2593d4d12" Mar 25 02:32:26.613027 systemd[1]: run-netns-cni\x2dee556cb3\x2d375e\x2d24d5\x2d07b4\x2d9ed1354ebd47.mount: Deactivated successfully. 
Mar 25 02:32:26.613568 containerd[1835]: time="2025-03-25T02:32:26.613545770Z" level=error msg="Failed to destroy network for sandbox \"7118702293f18d5b930a4c72919d3c0c5667d61830676f454f42ce6ca49f0cbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.613605 containerd[1835]: time="2025-03-25T02:32:26.613547643Z" level=error msg="Failed to destroy network for sandbox \"506a7f4aaa3f10a0c6c10f7b54daee62fcd3c16eb417b319868e53c01a6231ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.613918 containerd[1835]: time="2025-03-25T02:32:26.613903927Z" level=error msg="Failed to destroy network for sandbox \"19939b5f02ff39b574039fa126f9d5fb1e2f00719e887ba10541d23c5dd51155\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.613994 containerd[1835]: time="2025-03-25T02:32:26.613975971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-5k2s9,Uid:d5311d0f-0fd6-4d52-b978-ae321b94d4eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7118702293f18d5b930a4c72919d3c0c5667d61830676f454f42ce6ca49f0cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.614100 kubelet[3173]: E0325 02:32:26.614082 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7118702293f18d5b930a4c72919d3c0c5667d61830676f454f42ce6ca49f0cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.614129 kubelet[3173]: E0325 02:32:26.614121 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7118702293f18d5b930a4c72919d3c0c5667d61830676f454f42ce6ca49f0cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4c6448c9-5k2s9" Mar 25 02:32:26.614148 kubelet[3173]: E0325 02:32:26.614134 3173 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7118702293f18d5b930a4c72919d3c0c5667d61830676f454f42ce6ca49f0cbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4c6448c9-5k2s9" Mar 25 02:32:26.614178 kubelet[3173]: E0325 02:32:26.614158 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4c6448c9-5k2s9_calico-apiserver(d5311d0f-0fd6-4d52-b978-ae321b94d4eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-f4c6448c9-5k2s9_calico-apiserver(d5311d0f-0fd6-4d52-b978-ae321b94d4eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7118702293f18d5b930a4c72919d3c0c5667d61830676f454f42ce6ca49f0cbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4c6448c9-5k2s9" podUID="d5311d0f-0fd6-4d52-b978-ae321b94d4eb" Mar 25 02:32:26.614269 containerd[1835]: time="2025-03-25T02:32:26.614248987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9lr4z,Uid:f7efc5ee-b24b-4101-90f3-caf1b6873d50,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"506a7f4aaa3f10a0c6c10f7b54daee62fcd3c16eb417b319868e53c01a6231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.614342 kubelet[3173]: E0325 02:32:26.614330 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"506a7f4aaa3f10a0c6c10f7b54daee62fcd3c16eb417b319868e53c01a6231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.614366 kubelet[3173]: E0325 02:32:26.614351 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"506a7f4aaa3f10a0c6c10f7b54daee62fcd3c16eb417b319868e53c01a6231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9lr4z" Mar 25 02:32:26.614366 kubelet[3173]: E0325 02:32:26.614361 3173 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"506a7f4aaa3f10a0c6c10f7b54daee62fcd3c16eb417b319868e53c01a6231ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9lr4z" Mar 25 02:32:26.614410 kubelet[3173]: E0325 02:32:26.614380 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9lr4z_kube-system(f7efc5ee-b24b-4101-90f3-caf1b6873d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9lr4z_kube-system(f7efc5ee-b24b-4101-90f3-caf1b6873d50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"506a7f4aaa3f10a0c6c10f7b54daee62fcd3c16eb417b319868e53c01a6231ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9lr4z" podUID="f7efc5ee-b24b-4101-90f3-caf1b6873d50" Mar 25 02:32:26.614517 containerd[1835]: time="2025-03-25T02:32:26.614502943Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-699bd55f66-nb546,Uid:569a735e-7aaa-4a7c-a997-92c66cb6394c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19939b5f02ff39b574039fa126f9d5fb1e2f00719e887ba10541d23c5dd51155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.614574 kubelet[3173]: E0325 02:32:26.614564 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19939b5f02ff39b574039fa126f9d5fb1e2f00719e887ba10541d23c5dd51155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.614596 kubelet[3173]: E0325 02:32:26.614581 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19939b5f02ff39b574039fa126f9d5fb1e2f00719e887ba10541d23c5dd51155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699bd55f66-nb546" Mar 25 02:32:26.614596 kubelet[3173]: E0325 02:32:26.614591 3173 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19939b5f02ff39b574039fa126f9d5fb1e2f00719e887ba10541d23c5dd51155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699bd55f66-nb546" Mar 25 02:32:26.614637 kubelet[3173]: E0325 02:32:26.614615 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-699bd55f66-nb546_calico-system(569a735e-7aaa-4a7c-a997-92c66cb6394c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-699bd55f66-nb546_calico-system(569a735e-7aaa-4a7c-a997-92c66cb6394c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19939b5f02ff39b574039fa126f9d5fb1e2f00719e887ba10541d23c5dd51155\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699bd55f66-nb546" podUID="569a735e-7aaa-4a7c-a997-92c66cb6394c" Mar 25 02:32:26.616137 containerd[1835]: time="2025-03-25T02:32:26.616121336Z" level=error msg="Failed to destroy network for sandbox \"32c12fe0a7432faec5e2c6d4d2e88ecc3a4c20b35230634c5b1ccb3bcbfc6390\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.616603 containerd[1835]: time="2025-03-25T02:32:26.616559238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-8cf5b,Uid:99b09400-cf3a-44cf-b382-b27528bc0bb8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"32c12fe0a7432faec5e2c6d4d2e88ecc3a4c20b35230634c5b1ccb3bcbfc6390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.616741 kubelet[3173]: E0325 02:32:26.616685 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32c12fe0a7432faec5e2c6d4d2e88ecc3a4c20b35230634c5b1ccb3bcbfc6390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:26.616741 kubelet[3173]: E0325 02:32:26.616719 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32c12fe0a7432faec5e2c6d4d2e88ecc3a4c20b35230634c5b1ccb3bcbfc6390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4c6448c9-8cf5b" Mar 25 02:32:26.616741 kubelet[3173]: E0325 02:32:26.616727 3173 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32c12fe0a7432faec5e2c6d4d2e88ecc3a4c20b35230634c5b1ccb3bcbfc6390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4c6448c9-8cf5b" Mar 25 02:32:26.616852 kubelet[3173]: E0325 02:32:26.616766 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4c6448c9-8cf5b_calico-apiserver(99b09400-cf3a-44cf-b382-b27528bc0bb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4c6448c9-8cf5b_calico-apiserver(99b09400-cf3a-44cf-b382-b27528bc0bb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32c12fe0a7432faec5e2c6d4d2e88ecc3a4c20b35230634c5b1ccb3bcbfc6390\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4c6448c9-8cf5b" podUID="99b09400-cf3a-44cf-b382-b27528bc0bb8" Mar 25 02:32:27.044729 systemd[1]: Created slice kubepods-besteffort-pod6bb6fa2a_3181_4132_9b3b_a6ba0557ddd4.slice - libcontainer container kubepods-besteffort-pod6bb6fa2a_3181_4132_9b3b_a6ba0557ddd4.slice. 
Mar 25 02:32:27.050158 containerd[1835]: time="2025-03-25T02:32:27.050042913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ck5hf,Uid:6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4,Namespace:calico-system,Attempt:0,}" Mar 25 02:32:27.077695 containerd[1835]: time="2025-03-25T02:32:27.077665322Z" level=error msg="Failed to destroy network for sandbox \"526c8f62ee566653fed6e52925ccfb494c394ce3280af298d949724c4ee74414\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:27.078248 containerd[1835]: time="2025-03-25T02:32:27.078164618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ck5hf,Uid:6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"526c8f62ee566653fed6e52925ccfb494c394ce3280af298d949724c4ee74414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:27.078363 kubelet[3173]: E0325 02:32:27.078344 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"526c8f62ee566653fed6e52925ccfb494c394ce3280af298d949724c4ee74414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:32:27.078392 kubelet[3173]: E0325 02:32:27.078381 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"526c8f62ee566653fed6e52925ccfb494c394ce3280af298d949724c4ee74414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ck5hf" Mar 25 02:32:27.078459 kubelet[3173]: E0325 02:32:27.078394 3173 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"526c8f62ee566653fed6e52925ccfb494c394ce3280af298d949724c4ee74414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ck5hf" Mar 25 02:32:27.078479 kubelet[3173]: E0325 02:32:27.078460 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ck5hf_calico-system(6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ck5hf_calico-system(6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"526c8f62ee566653fed6e52925ccfb494c394ce3280af298d949724c4ee74414\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ck5hf" podUID="6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4" Mar 25 02:32:27.111593 containerd[1835]: time="2025-03-25T02:32:27.111520018Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 25 02:32:27.388534 systemd[1]: run-netns-cni\x2dff12e21d\x2d13d4\x2d1fe4\x2d5ddd\x2dd4cc8a5b4fe3.mount: Deactivated successfully. Mar 25 02:32:27.388586 systemd[1]: run-netns-cni\x2da85a84e4\x2d392e\x2d5463\x2dfb2e\x2dc9a1843b0aeb.mount: Deactivated successfully. Mar 25 02:32:27.388627 systemd[1]: run-netns-cni\x2dee328121\x2d2d9a\x2d7b10\x2d4d67\x2d1be4114fbdeb.mount: Deactivated successfully. Mar 25 02:32:27.388664 systemd[1]: run-netns-cni\x2da2ced966\x2dd05b\x2dded5\x2d1eed\x2d8c917e5780aa.mount: Deactivated successfully. Mar 25 02:32:31.963887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333758867.mount: Deactivated successfully. Mar 25 02:32:31.980466 containerd[1835]: time="2025-03-25T02:32:31.980418873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:31.980687 containerd[1835]: time="2025-03-25T02:32:31.980628400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 25 02:32:31.980982 containerd[1835]: time="2025-03-25T02:32:31.980940812Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:31.981704 containerd[1835]: time="2025-03-25T02:32:31.981663783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:31.982255 containerd[1835]: time="2025-03-25T02:32:31.982216495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 4.870628535s" Mar 25 02:32:31.982255 containerd[1835]: time="2025-03-25T02:32:31.982230853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 25 02:32:31.985486 containerd[1835]: time="2025-03-25T02:32:31.985471298Z" level=info msg="CreateContainer within sandbox \"4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 02:32:31.989556 containerd[1835]: time="2025-03-25T02:32:31.989540125Z" level=info msg="Container 800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:31.994939 containerd[1835]: time="2025-03-25T02:32:31.994894226Z" level=info msg="CreateContainer within sandbox \"4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\"" Mar 25 02:32:31.995169 containerd[1835]: time="2025-03-25T02:32:31.995125162Z" level=info msg="StartContainer for \"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\"" Mar 25 02:32:31.995916 containerd[1835]: time="2025-03-25T02:32:31.995904258Z" level=info msg="connecting to shim 800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c" 
address="unix:///run/containerd/s/e9969381c9b15da4506350ddd3f1cc8c19b674723fb9fe8b697456d46e617ea4" protocol=ttrpc version=3 Mar 25 02:32:32.017551 systemd[1]: Started cri-containerd-800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c.scope - libcontainer container 800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c. Mar 25 02:32:32.041665 containerd[1835]: time="2025-03-25T02:32:32.041597587Z" level=info msg="StartContainer for \"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" returns successfully" Mar 25 02:32:32.101326 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 25 02:32:32.101373 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 25 02:32:32.170111 kubelet[3173]: I0325 02:32:32.170030 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vbxw5" podStartSLOduration=0.87289726 podStartE2EDuration="15.170001299s" podCreationTimestamp="2025-03-25 02:32:17 +0000 UTC" firstStartedPulling="2025-03-25 02:32:17.685419556 +0000 UTC m=+13.698924600" lastFinishedPulling="2025-03-25 02:32:31.982523594 +0000 UTC m=+27.996028639" observedRunningTime="2025-03-25 02:32:32.168962755 +0000 UTC m=+28.182467832" watchObservedRunningTime="2025-03-25 02:32:32.170001299 +0000 UTC m=+28.183506360" Mar 25 02:32:32.210415 containerd[1835]: time="2025-03-25T02:32:32.210385201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"57dc861c23b1582297ca8b7c0560b2fa35a84930f68e6e49cd8a9fc2f305757b\" pid:4351 exit_status:1 exited_at:{seconds:1742869952 nanos:210196731}" Mar 25 02:32:33.200694 containerd[1835]: time="2025-03-25T02:32:33.200663956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"25c88112582124afb5506e4bbcfd07ba3d272efb238aa64bd8f45cc246f8411e\" pid:4412 exit_status:1 exited_at:{seconds:1742869953 nanos:200429934}" Mar 25 02:32:33.940278 systemd[1]: Started sshd@10-86.109.11.215:22-186.118.142.216:60574.service - OpenSSH per-connection server daemon (186.118.142.216:60574). Mar 25 02:32:34.686528 sshd[4556]: Invalid user jamalhp from 186.118.142.216 port 60574 Mar 25 02:32:34.820920 sshd[4556]: Received disconnect from 186.118.142.216 port 60574:11: Bye Bye [preauth] Mar 25 02:32:34.820920 sshd[4556]: Disconnected from invalid user jamalhp 186.118.142.216 port 60574 [preauth] Mar 25 02:32:34.824205 systemd[1]: sshd@10-86.109.11.215:22-186.118.142.216:60574.service: Deactivated successfully. 
Mar 25 02:32:38.028776 containerd[1835]: time="2025-03-25T02:32:38.028747980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fvw5k,Uid:cabb4ada-1163-4bc2-be75-75e2593d4d12,Namespace:kube-system,Attempt:0,}" Mar 25 02:32:38.086764 systemd-networkd[1742]: cali6ea63c4fea2: Link UP Mar 25 02:32:38.086904 systemd-networkd[1742]: cali6ea63c4fea2: Gained carrier Mar 25 02:32:38.093271 containerd[1835]: 2025-03-25 02:32:38.041 [INFO][4712] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:32:38.093271 containerd[1835]: 2025-03-25 02:32:38.047 [INFO][4712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0 coredns-6f6b679f8f- kube-system cabb4ada-1163-4bc2-be75-75e2593d4d12 657 0 2025-03-25 02:32:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-a-336c6dbb20 coredns-6f6b679f8f-fvw5k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ea63c4fea2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-" Mar 25 02:32:38.093271 containerd[1835]: 2025-03-25 02:32:38.047 [INFO][4712] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" Mar 25 02:32:38.093271 containerd[1835]: 2025-03-25 02:32:38.062 [INFO][4737] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" HandleID="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Workload="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.067 [INFO][4737] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" HandleID="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Workload="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000449600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-a-336c6dbb20", "pod":"coredns-6f6b679f8f-fvw5k", "timestamp":"2025-03-25 02:32:38.062604827 +0000 UTC"}, Hostname:"ci-4284.0.0-a-336c6dbb20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.067 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.067 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.067 [INFO][4737] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-336c6dbb20' Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.068 [INFO][4737] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.070 [INFO][4737] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.072 [INFO][4737] ipam/ipam.go 489: Trying affinity for 192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.073 [INFO][4737] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093525 containerd[1835]: 2025-03-25 02:32:38.074 [INFO][4737] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093869 containerd[1835]: 2025-03-25 02:32:38.074 [INFO][4737] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093869 containerd[1835]: 2025-03-25 02:32:38.075 [INFO][4737] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685 Mar 25 02:32:38.093869 containerd[1835]: 2025-03-25 02:32:38.077 [INFO][4737] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093869 containerd[1835]: 2025-03-25 02:32:38.080 [INFO][4737] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.65/26] block=192.168.92.64/26 handle="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093869 containerd[1835]: 2025-03-25 02:32:38.080 [INFO][4737] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.65/26] handle="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:38.093869 containerd[1835]: 2025-03-25 02:32:38.080 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
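The ipam/ipam.go lines above show the allocation path for the first pod network on this node: acquire the host-wide IPAM lock, confirm the host's affinity for the block 192.168.92.64/26, claim one address from it (192.168.92.65, reported in the entry that follows), then release the lock. The later pods in this section receive .66, .67 and .68 from the same block. The sketch below is a heavily simplified illustration of "hand out the next free address from the host-affine /26"; it is not Calico's real allocator and ignores handles, reservations and multi-block hosts.

    // blockassign.go - toy version of the IPAM step logged above: the host holds an
    // affinity for 192.168.92.64/26 and each new pod gets the next unclaimed address.
    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks the block in order and returns the first address that is neither
    // the block's base address nor already claimed.
    func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if a == block.Addr() { // skip .64; the log never assigns the base address
                continue
            }
            if !claimed[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.92.64/26")
        claimed := map[netip.Addr]bool{}

        // Replaying this section: the four pods end up with .65, .66, .67 and .68 in turn.
        for _, pod := range []string{
            "coredns-6f6b679f8f-fvw5k", "coredns-6f6b679f8f-9lr4z",
            "csi-node-driver-ck5hf", "calico-apiserver-f4c6448c9-5k2s9",
        } {
            a, ok := nextFree(block, claimed)
            if !ok {
                fmt.Println("block exhausted")
                return
            }
            claimed[a] = true
            fmt.Printf("%-35s -> %s/26\n", pod, a)
        }
    }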
Mar 25 02:32:38.093869 containerd[1835]: 2025-03-25 02:32:38.080 [INFO][4737] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.65/26] IPv6=[] ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" HandleID="k8s-pod-network.e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Workload="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" Mar 25 02:32:38.094117 containerd[1835]: 2025-03-25 02:32:38.082 [INFO][4712] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cabb4ada-1163-4bc2-be75-75e2593d4d12", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"", Pod:"coredns-6f6b679f8f-fvw5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ea63c4fea2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:38.094117 containerd[1835]: 2025-03-25 02:32:38.082 [INFO][4712] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.65/32] ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" Mar 25 02:32:38.094117 containerd[1835]: 2025-03-25 02:32:38.082 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ea63c4fea2 ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" Mar 25 02:32:38.094117 containerd[1835]: 2025-03-25 02:32:38.086 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" 
WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" Mar 25 02:32:38.094117 containerd[1835]: 2025-03-25 02:32:38.087 [INFO][4712] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cabb4ada-1163-4bc2-be75-75e2593d4d12", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685", Pod:"coredns-6f6b679f8f-fvw5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ea63c4fea2", MAC:"0a:47:78:b5:ca:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:38.094117 containerd[1835]: 2025-03-25 02:32:38.092 [INFO][4712] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" Namespace="kube-system" Pod="coredns-6f6b679f8f-fvw5k" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--fvw5k-eth0" Mar 25 02:32:38.102750 containerd[1835]: time="2025-03-25T02:32:38.102700979Z" level=info msg="connecting to shim e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685" address="unix:///run/containerd/s/df3f7f0a0b1b5dee4d63b05bf24c71294a1eead91329d21c9839244ecae97e58" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:38.129695 systemd[1]: Started cri-containerd-e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685.scope - libcontainer container e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685. 
Mar 25 02:32:38.164712 containerd[1835]: time="2025-03-25T02:32:38.164690797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fvw5k,Uid:cabb4ada-1163-4bc2-be75-75e2593d4d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685\"" Mar 25 02:32:38.165718 containerd[1835]: time="2025-03-25T02:32:38.165706573Z" level=info msg="CreateContainer within sandbox \"e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 02:32:38.168850 containerd[1835]: time="2025-03-25T02:32:38.168836035Z" level=info msg="Container 73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:38.170963 containerd[1835]: time="2025-03-25T02:32:38.170922791Z" level=info msg="CreateContainer within sandbox \"e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7\"" Mar 25 02:32:38.171188 containerd[1835]: time="2025-03-25T02:32:38.171149473Z" level=info msg="StartContainer for \"73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7\"" Mar 25 02:32:38.171755 containerd[1835]: time="2025-03-25T02:32:38.171710921Z" level=info msg="connecting to shim 73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7" address="unix:///run/containerd/s/df3f7f0a0b1b5dee4d63b05bf24c71294a1eead91329d21c9839244ecae97e58" protocol=ttrpc version=3 Mar 25 02:32:38.191665 systemd[1]: Started cri-containerd-73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7.scope - libcontainer container 73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7. 
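The containerd entries above trace the CRI calls kubelet makes for the coredns container once its sandbox exists: CreateContainer within sandbox e963f29c... returns container id 73e1726f..., then StartContainer is called and returns successfully. Below is a compressed sketch of that same two-call sequence against the CRI runtime service using the k8s.io/cri-api client over containerd's gRPC socket. The socket path is the conventional default, "SANDBOX_ID" and "IMAGE_REF" are placeholders (the coredns image reference is not shown in this excerpt), and most of the container config is omitted, so treat it as an outline of the flow rather than finished tooling.

    // cristart.go - sketch of the CreateContainer/StartContainer sequence recorded above.
    // Assumes the image referenced by IMAGE_REF is already present on the node.
    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Step 1: create the container inside an existing pod sandbox, as kubelet does
        // after RunPodSandbox succeeds.
        created, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
            PodSandboxId: "SANDBOX_ID", // e.g. the e963f29c... sandbox id from the log
            Config: &runtime.ContainerConfig{
                Metadata: &runtime.ContainerMetadata{Name: "coredns", Attempt: 0},
                Image:    &runtime.ImageSpec{Image: "IMAGE_REF"},
            },
            SandboxConfig: &runtime.PodSandboxConfig{
                Metadata: &runtime.PodSandboxMetadata{
                    Name: "coredns-6f6b679f8f-fvw5k", Namespace: "kube-system",
                    Uid: "cabb4ada-1163-4bc2-be75-75e2593d4d12", Attempt: 0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // Step 2: start it; on success containerd emits the
        // "StartContainer ... returns successfully" line seen above.
        if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
        log.Println("started container", created.ContainerId)
    }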
Mar 25 02:32:38.206570 containerd[1835]: time="2025-03-25T02:32:38.206518771Z" level=info msg="StartContainer for \"73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7\" returns successfully" Mar 25 02:32:39.030659 containerd[1835]: time="2025-03-25T02:32:39.030531707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9lr4z,Uid:f7efc5ee-b24b-4101-90f3-caf1b6873d50,Namespace:kube-system,Attempt:0,}" Mar 25 02:32:39.030659 containerd[1835]: time="2025-03-25T02:32:39.030577853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ck5hf,Uid:6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4,Namespace:calico-system,Attempt:0,}" Mar 25 02:32:39.091571 systemd-networkd[1742]: caliaa46ebe07fc: Link UP Mar 25 02:32:39.091797 systemd-networkd[1742]: caliaa46ebe07fc: Gained carrier Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.044 [INFO][4892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.051 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0 coredns-6f6b679f8f- kube-system f7efc5ee-b24b-4101-90f3-caf1b6873d50 660 0 2025-03-25 02:32:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-a-336c6dbb20 coredns-6f6b679f8f-9lr4z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa46ebe07fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.051 [INFO][4892] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.066 [INFO][4941] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" HandleID="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Workload="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.072 [INFO][4941] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" HandleID="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Workload="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002298c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-a-336c6dbb20", "pod":"coredns-6f6b679f8f-9lr4z", "timestamp":"2025-03-25 02:32:39.066784448 +0000 UTC"}, Hostname:"ci-4284.0.0-a-336c6dbb20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:32:39.096767 
containerd[1835]: 2025-03-25 02:32:39.072 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.072 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.072 [INFO][4941] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-336c6dbb20' Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.073 [INFO][4941] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.075 [INFO][4941] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.079 [INFO][4941] ipam/ipam.go 489: Trying affinity for 192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.080 [INFO][4941] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.082 [INFO][4941] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.082 [INFO][4941] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.083 [INFO][4941] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318 Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.086 [INFO][4941] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.089 [INFO][4941] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.66/26] block=192.168.92.64/26 handle="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.089 [INFO][4941] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.66/26] handle="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.089 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:32:39.096767 containerd[1835]: 2025-03-25 02:32:39.089 [INFO][4941] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.66/26] IPv6=[] ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" HandleID="k8s-pod-network.76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Workload="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" Mar 25 02:32:39.097258 containerd[1835]: 2025-03-25 02:32:39.090 [INFO][4892] cni-plugin/k8s.go 386: Populated endpoint ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7efc5ee-b24b-4101-90f3-caf1b6873d50", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"", Pod:"coredns-6f6b679f8f-9lr4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa46ebe07fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:39.097258 containerd[1835]: 2025-03-25 02:32:39.090 [INFO][4892] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.66/32] ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" Mar 25 02:32:39.097258 containerd[1835]: 2025-03-25 02:32:39.090 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa46ebe07fc ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" Mar 25 02:32:39.097258 containerd[1835]: 2025-03-25 02:32:39.091 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" 
WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" Mar 25 02:32:39.097258 containerd[1835]: 2025-03-25 02:32:39.091 [INFO][4892] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7efc5ee-b24b-4101-90f3-caf1b6873d50", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318", Pod:"coredns-6f6b679f8f-9lr4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa46ebe07fc", MAC:"6e:4d:e6:65:6c:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:39.097258 containerd[1835]: 2025-03-25 02:32:39.095 [INFO][4892] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" Namespace="kube-system" Pod="coredns-6f6b679f8f-9lr4z" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-coredns--6f6b679f8f--9lr4z-eth0" Mar 25 02:32:39.106720 containerd[1835]: time="2025-03-25T02:32:39.106678306Z" level=info msg="connecting to shim 76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318" address="unix:///run/containerd/s/26cec48fcc40f1ec24bc39588bab111b416064255dba68a6021b13c486167204" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:39.128712 systemd[1]: Started cri-containerd-76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318.scope - libcontainer container 76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318. 
Mar 25 02:32:39.162073 containerd[1835]: time="2025-03-25T02:32:39.162050528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9lr4z,Uid:f7efc5ee-b24b-4101-90f3-caf1b6873d50,Namespace:kube-system,Attempt:0,} returns sandbox id \"76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318\"" Mar 25 02:32:39.163074 containerd[1835]: time="2025-03-25T02:32:39.163058503Z" level=info msg="CreateContainer within sandbox \"76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 02:32:39.166196 containerd[1835]: time="2025-03-25T02:32:39.166176952Z" level=info msg="Container 61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:39.166392 kubelet[3173]: I0325 02:32:39.166362 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fvw5k" podStartSLOduration=30.166344377 podStartE2EDuration="30.166344377s" podCreationTimestamp="2025-03-25 02:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:32:39.16632309 +0000 UTC m=+35.179828136" watchObservedRunningTime="2025-03-25 02:32:39.166344377 +0000 UTC m=+35.179849420" Mar 25 02:32:39.168820 containerd[1835]: time="2025-03-25T02:32:39.168765327Z" level=info msg="CreateContainer within sandbox \"76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c\"" Mar 25 02:32:39.169030 containerd[1835]: time="2025-03-25T02:32:39.169016893Z" level=info msg="StartContainer for \"61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c\"" Mar 25 02:32:39.169578 containerd[1835]: time="2025-03-25T02:32:39.169565706Z" level=info msg="connecting to shim 61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c" address="unix:///run/containerd/s/26cec48fcc40f1ec24bc39588bab111b416064255dba68a6021b13c486167204" protocol=ttrpc version=3 Mar 25 02:32:39.192631 systemd[1]: Started cri-containerd-61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c.scope - libcontainer container 61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c. 
Mar 25 02:32:39.204191 systemd-networkd[1742]: califebfb7bfc0f: Link UP Mar 25 02:32:39.204342 systemd-networkd[1742]: califebfb7bfc0f: Gained carrier Mar 25 02:32:39.208675 containerd[1835]: time="2025-03-25T02:32:39.208654904Z" level=info msg="StartContainer for \"61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c\" returns successfully" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.044 [INFO][4893] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.051 [INFO][4893] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0 csi-node-driver- calico-system 6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4 586 0 2025-03-25 02:32:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-a-336c6dbb20 csi-node-driver-ck5hf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califebfb7bfc0f [] []}} ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.051 [INFO][4893] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.066 [INFO][4939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" HandleID="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Workload="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.072 [INFO][4939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" HandleID="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Workload="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8300), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-a-336c6dbb20", "pod":"csi-node-driver-ck5hf", "timestamp":"2025-03-25 02:32:39.06678521 +0000 UTC"}, Hostname:"ci-4284.0.0-a-336c6dbb20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.072 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.089 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.089 [INFO][4939] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-336c6dbb20' Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.174 [INFO][4939] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.191 [INFO][4939] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.193 [INFO][4939] ipam/ipam.go 489: Trying affinity for 192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.194 [INFO][4939] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.195 [INFO][4939] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.195 [INFO][4939] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.196 [INFO][4939] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.198 [INFO][4939] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.201 [INFO][4939] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.67/26] block=192.168.92.64/26 handle="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.201 [INFO][4939] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.67/26] handle="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.201 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:32:39.210858 containerd[1835]: 2025-03-25 02:32:39.201 [INFO][4939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.67/26] IPv6=[] ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" HandleID="k8s-pod-network.ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Workload="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" Mar 25 02:32:39.211310 containerd[1835]: 2025-03-25 02:32:39.203 [INFO][4893] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4", ResourceVersion:"586", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"", Pod:"csi-node-driver-ck5hf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califebfb7bfc0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:39.211310 containerd[1835]: 2025-03-25 02:32:39.203 [INFO][4893] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.67/32] ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" Mar 25 02:32:39.211310 containerd[1835]: 2025-03-25 02:32:39.203 [INFO][4893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califebfb7bfc0f ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" Mar 25 02:32:39.211310 containerd[1835]: 2025-03-25 02:32:39.204 [INFO][4893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" Mar 25 02:32:39.211310 containerd[1835]: 2025-03-25 02:32:39.204 [INFO][4893] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" 
Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4", ResourceVersion:"586", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee", Pod:"csi-node-driver-ck5hf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califebfb7bfc0f", MAC:"02:dc:36:79:d8:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:39.211310 containerd[1835]: 2025-03-25 02:32:39.209 [INFO][4893] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" Namespace="calico-system" Pod="csi-node-driver-ck5hf" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-csi--node--driver--ck5hf-eth0" Mar 25 02:32:39.219742 containerd[1835]: time="2025-03-25T02:32:39.219674922Z" level=info msg="connecting to shim ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee" address="unix:///run/containerd/s/522edafe2a0580c202c344eb463e95e0896108c997c8dc819ea79ade24b9812a" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:39.243504 systemd[1]: Started cri-containerd-ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee.scope - libcontainer container ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee. 
Mar 25 02:32:39.255333 containerd[1835]: time="2025-03-25T02:32:39.255299907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ck5hf,Uid:6bb6fa2a-3181-4132-9b3b-a6ba0557ddd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee\"" Mar 25 02:32:39.256333 containerd[1835]: time="2025-03-25T02:32:39.256321556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 25 02:32:40.024751 systemd-networkd[1742]: cali6ea63c4fea2: Gained IPv6LL Mar 25 02:32:40.030718 containerd[1835]: time="2025-03-25T02:32:40.030606141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-5k2s9,Uid:d5311d0f-0fd6-4d52-b978-ae321b94d4eb,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:32:40.031414 containerd[1835]: time="2025-03-25T02:32:40.030821901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699bd55f66-nb546,Uid:569a735e-7aaa-4a7c-a997-92c66cb6394c,Namespace:calico-system,Attempt:0,}" Mar 25 02:32:40.097315 systemd-networkd[1742]: califab4381574e: Link UP Mar 25 02:32:40.097432 systemd-networkd[1742]: califab4381574e: Gained carrier Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.046 [INFO][5169] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.054 [INFO][5169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0 calico-apiserver-f4c6448c9- calico-apiserver d5311d0f-0fd6-4d52-b978-ae321b94d4eb 658 0 2025-03-25 02:32:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4c6448c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-a-336c6dbb20 calico-apiserver-f4c6448c9-5k2s9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califab4381574e [] []}} ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.054 [INFO][5169] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.070 [INFO][5215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" HandleID="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.076 [INFO][5215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" HandleID="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-a-336c6dbb20", "pod":"calico-apiserver-f4c6448c9-5k2s9", "timestamp":"2025-03-25 02:32:40.07026414 +0000 UTC"}, Hostname:"ci-4284.0.0-a-336c6dbb20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.076 [INFO][5215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.076 [INFO][5215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.076 [INFO][5215] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-336c6dbb20' Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.078 [INFO][5215] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.080 [INFO][5215] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.084 [INFO][5215] ipam/ipam.go 489: Trying affinity for 192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.085 [INFO][5215] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.087 [INFO][5215] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.087 [INFO][5215] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.088 [INFO][5215] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22 Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.091 [INFO][5215] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.094 [INFO][5215] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.68/26] block=192.168.92.64/26 handle="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.094 [INFO][5215] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.68/26] handle="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.094 [INFO][5215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:32:40.102781 containerd[1835]: 2025-03-25 02:32:40.094 [INFO][5215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.68/26] IPv6=[] ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" HandleID="k8s-pod-network.6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" Mar 25 02:32:40.103294 containerd[1835]: 2025-03-25 02:32:40.095 [INFO][5169] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0", GenerateName:"calico-apiserver-f4c6448c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5311d0f-0fd6-4d52-b978-ae321b94d4eb", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4c6448c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"", Pod:"calico-apiserver-f4c6448c9-5k2s9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califab4381574e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:40.103294 containerd[1835]: 2025-03-25 02:32:40.096 [INFO][5169] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.68/32] ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" Mar 25 02:32:40.103294 containerd[1835]: 2025-03-25 02:32:40.096 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califab4381574e ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" Mar 25 02:32:40.103294 containerd[1835]: 2025-03-25 02:32:40.097 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" Mar 25 02:32:40.103294 containerd[1835]: 2025-03-25 02:32:40.097 [INFO][5169] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0", GenerateName:"calico-apiserver-f4c6448c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5311d0f-0fd6-4d52-b978-ae321b94d4eb", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4c6448c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22", Pod:"calico-apiserver-f4c6448c9-5k2s9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califab4381574e", MAC:"82:60:4a:4a:2f:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:40.103294 containerd[1835]: 2025-03-25 02:32:40.101 [INFO][5169] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-5k2s9" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--5k2s9-eth0" Mar 25 02:32:40.112508 containerd[1835]: time="2025-03-25T02:32:40.112472950Z" level=info msg="connecting to shim 6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22" address="unix:///run/containerd/s/6bde6e9a0bf87feef3a458f56e4888f0cd1763ab4139ef596f02a8d76c169d11" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:40.139645 systemd[1]: Started cri-containerd-6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22.scope - libcontainer container 6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22. 
Mar 25 02:32:40.152501 systemd-networkd[1742]: caliaa46ebe07fc: Gained IPv6LL Mar 25 02:32:40.175326 containerd[1835]: time="2025-03-25T02:32:40.175305216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-5k2s9,Uid:d5311d0f-0fd6-4d52-b978-ae321b94d4eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22\"" Mar 25 02:32:40.177760 kubelet[3173]: I0325 02:32:40.177731 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9lr4z" podStartSLOduration=31.177720091 podStartE2EDuration="31.177720091s" podCreationTimestamp="2025-03-25 02:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:32:40.170816822 +0000 UTC m=+36.184321872" watchObservedRunningTime="2025-03-25 02:32:40.177720091 +0000 UTC m=+36.191225129" Mar 25 02:32:40.192760 systemd-networkd[1742]: calif745b9b29db: Link UP Mar 25 02:32:40.192896 systemd-networkd[1742]: calif745b9b29db: Gained carrier Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.046 [INFO][5175] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.054 [INFO][5175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0 calico-kube-controllers-699bd55f66- calico-system 569a735e-7aaa-4a7c-a997-92c66cb6394c 654 0 2025-03-25 02:32:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:699bd55f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-a-336c6dbb20 calico-kube-controllers-699bd55f66-nb546 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif745b9b29db [] []}} ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.054 [INFO][5175] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.070 [INFO][5216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" HandleID="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.076 [INFO][5216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" HandleID="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-a-336c6dbb20", "pod":"calico-kube-controllers-699bd55f66-nb546", "timestamp":"2025-03-25 02:32:40.070249925 +0000 UTC"}, Hostname:"ci-4284.0.0-a-336c6dbb20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.076 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.094 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.094 [INFO][5216] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-336c6dbb20' Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.178 [INFO][5216] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.180 [INFO][5216] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.183 [INFO][5216] ipam/ipam.go 489: Trying affinity for 192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.184 [INFO][5216] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.185 [INFO][5216] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.185 [INFO][5216] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.186 [INFO][5216] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10 Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.188 [INFO][5216] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.191 [INFO][5216] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.69/26] block=192.168.92.64/26 handle="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.191 [INFO][5216] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.69/26] handle="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.191 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:32:40.197465 containerd[1835]: 2025-03-25 02:32:40.191 [INFO][5216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.69/26] IPv6=[] ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" HandleID="k8s-pod-network.033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" Mar 25 02:32:40.197828 containerd[1835]: 2025-03-25 02:32:40.191 [INFO][5175] cni-plugin/k8s.go 386: Populated endpoint ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0", GenerateName:"calico-kube-controllers-699bd55f66-", Namespace:"calico-system", SelfLink:"", UID:"569a735e-7aaa-4a7c-a997-92c66cb6394c", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699bd55f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"", Pod:"calico-kube-controllers-699bd55f66-nb546", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif745b9b29db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:40.197828 containerd[1835]: 2025-03-25 02:32:40.192 [INFO][5175] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.69/32] ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" Mar 25 02:32:40.197828 containerd[1835]: 2025-03-25 02:32:40.192 [INFO][5175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif745b9b29db ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" Mar 25 02:32:40.197828 containerd[1835]: 2025-03-25 02:32:40.192 [INFO][5175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" Mar 25 02:32:40.197828 
containerd[1835]: 2025-03-25 02:32:40.192 [INFO][5175] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0", GenerateName:"calico-kube-controllers-699bd55f66-", Namespace:"calico-system", SelfLink:"", UID:"569a735e-7aaa-4a7c-a997-92c66cb6394c", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699bd55f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10", Pod:"calico-kube-controllers-699bd55f66-nb546", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif745b9b29db", MAC:"12:cf:5e:d2:17:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:40.197828 containerd[1835]: 2025-03-25 02:32:40.196 [INFO][5175] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" Namespace="calico-system" Pod="calico-kube-controllers-699bd55f66-nb546" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--kube--controllers--699bd55f66--nb546-eth0" Mar 25 02:32:40.205857 containerd[1835]: time="2025-03-25T02:32:40.205806872Z" level=info msg="connecting to shim 033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10" address="unix:///run/containerd/s/c8a0562900339c37433fece26702e78a4c241ed154d0ccfffebd343fc2ceaa85" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:40.227554 systemd[1]: Started cri-containerd-033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10.scope - libcontainer container 033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10. 
Mar 25 02:32:40.252231 containerd[1835]: time="2025-03-25T02:32:40.252208778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699bd55f66-nb546,Uid:569a735e-7aaa-4a7c-a997-92c66cb6394c,Namespace:calico-system,Attempt:0,} returns sandbox id \"033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10\"" Mar 25 02:32:40.664542 systemd-networkd[1742]: califebfb7bfc0f: Gained IPv6LL Mar 25 02:32:41.030939 containerd[1835]: time="2025-03-25T02:32:41.030702384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-8cf5b,Uid:99b09400-cf3a-44cf-b382-b27528bc0bb8,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:32:41.114323 systemd-networkd[1742]: calie99a2007441: Link UP Mar 25 02:32:41.114462 systemd-networkd[1742]: calie99a2007441: Gained carrier Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.044 [INFO][5398] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.051 [INFO][5398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0 calico-apiserver-f4c6448c9- calico-apiserver 99b09400-cf3a-44cf-b382-b27528bc0bb8 659 0 2025-03-25 02:32:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4c6448c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-a-336c6dbb20 calico-apiserver-f4c6448c9-8cf5b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie99a2007441 [] []}} ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.051 [INFO][5398] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.066 [INFO][5421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" HandleID="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.073 [INFO][5421] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" HandleID="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-a-336c6dbb20", "pod":"calico-apiserver-f4c6448c9-8cf5b", "timestamp":"2025-03-25 02:32:41.066661094 +0000 UTC"}, Hostname:"ci-4284.0.0-a-336c6dbb20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.073 [INFO][5421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.073 [INFO][5421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.073 [INFO][5421] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-a-336c6dbb20' Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.074 [INFO][5421] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.077 [INFO][5421] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.080 [INFO][5421] ipam/ipam.go 489: Trying affinity for 192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.082 [INFO][5421] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.084 [INFO][5421] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.084 [INFO][5421] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.085 [INFO][5421] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.108 [INFO][5421] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.112 [INFO][5421] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.70/26] block=192.168.92.64/26 handle="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.112 [INFO][5421] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.70/26] handle="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" host="ci-4284.0.0-a-336c6dbb20" Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.112 [INFO][5421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:32:41.135302 containerd[1835]: 2025-03-25 02:32:41.112 [INFO][5421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.70/26] IPv6=[] ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" HandleID="k8s-pod-network.ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Workload="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" Mar 25 02:32:41.135780 containerd[1835]: 2025-03-25 02:32:41.113 [INFO][5398] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0", GenerateName:"calico-apiserver-f4c6448c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"99b09400-cf3a-44cf-b382-b27528bc0bb8", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4c6448c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"", Pod:"calico-apiserver-f4c6448c9-8cf5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie99a2007441", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:41.135780 containerd[1835]: 2025-03-25 02:32:41.113 [INFO][5398] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.70/32] ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" Mar 25 02:32:41.135780 containerd[1835]: 2025-03-25 02:32:41.113 [INFO][5398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie99a2007441 ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" Mar 25 02:32:41.135780 containerd[1835]: 2025-03-25 02:32:41.114 [INFO][5398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" Mar 25 02:32:41.135780 containerd[1835]: 2025-03-25 02:32:41.114 [INFO][5398] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0", GenerateName:"calico-apiserver-f4c6448c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"99b09400-cf3a-44cf-b382-b27528bc0bb8", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4c6448c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-a-336c6dbb20", ContainerID:"ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b", Pod:"calico-apiserver-f4c6448c9-8cf5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie99a2007441", MAC:"9e:66:90:45:49:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:32:41.135780 containerd[1835]: 2025-03-25 02:32:41.134 [INFO][5398] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" Namespace="calico-apiserver" Pod="calico-apiserver-f4c6448c9-8cf5b" WorkloadEndpoint="ci--4284.0.0--a--336c6dbb20-k8s-calico--apiserver--f4c6448c9--8cf5b-eth0" Mar 25 02:32:41.143966 containerd[1835]: time="2025-03-25T02:32:41.143941374Z" level=info msg="connecting to shim ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b" address="unix:///run/containerd/s/7aaf6c49547a2487eed679e7daf3f3437c5adae96aad2efd889e5d88b814568b" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:32:41.161353 containerd[1835]: time="2025-03-25T02:32:41.161331094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:41.161571 systemd[1]: Started cri-containerd-ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b.scope - libcontainer container ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b. 
Mar 25 02:32:41.161657 containerd[1835]: time="2025-03-25T02:32:41.161569583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 25 02:32:41.161913 containerd[1835]: time="2025-03-25T02:32:41.161897101Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:41.162742 containerd[1835]: time="2025-03-25T02:32:41.162702177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:41.163115 containerd[1835]: time="2025-03-25T02:32:41.163101659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.90676333s" Mar 25 02:32:41.163151 containerd[1835]: time="2025-03-25T02:32:41.163118355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 25 02:32:41.163562 containerd[1835]: time="2025-03-25T02:32:41.163552142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 25 02:32:41.164076 containerd[1835]: time="2025-03-25T02:32:41.164063950Z" level=info msg="CreateContainer within sandbox \"ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 25 02:32:41.167895 containerd[1835]: time="2025-03-25T02:32:41.167882897Z" level=info msg="Container 230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:41.171059 containerd[1835]: time="2025-03-25T02:32:41.171044311Z" level=info msg="CreateContainer within sandbox \"ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be\"" Mar 25 02:32:41.171348 containerd[1835]: time="2025-03-25T02:32:41.171312260Z" level=info msg="StartContainer for \"230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be\"" Mar 25 02:32:41.172053 containerd[1835]: time="2025-03-25T02:32:41.172041208Z" level=info msg="connecting to shim 230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be" address="unix:///run/containerd/s/522edafe2a0580c202c344eb463e95e0896108c997c8dc819ea79ade24b9812a" protocol=ttrpc version=3 Mar 25 02:32:41.178817 systemd[1]: Started cri-containerd-230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be.scope - libcontainer container 230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be. 
Mar 25 02:32:41.186743 containerd[1835]: time="2025-03-25T02:32:41.186698759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4c6448c9-8cf5b,Uid:99b09400-cf3a-44cf-b382-b27528bc0bb8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b\"" Mar 25 02:32:41.196871 containerd[1835]: time="2025-03-25T02:32:41.196850923Z" level=info msg="StartContainer for \"230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be\" returns successfully" Mar 25 02:32:41.240630 systemd-networkd[1742]: califab4381574e: Gained IPv6LL Mar 25 02:32:41.408881 systemd[1]: Started sshd@11-86.109.11.215:22-64.225.98.83:60758.service - OpenSSH per-connection server daemon (64.225.98.83:60758). Mar 25 02:32:41.625596 systemd-networkd[1742]: calif745b9b29db: Gained IPv6LL Mar 25 02:32:42.254141 sshd[5538]: Invalid user superuser from 64.225.98.83 port 60758 Mar 25 02:32:42.406069 sshd[5538]: Received disconnect from 64.225.98.83 port 60758:11: Bye Bye [preauth] Mar 25 02:32:42.406069 sshd[5538]: Disconnected from invalid user superuser 64.225.98.83 port 60758 [preauth] Mar 25 02:32:42.406807 systemd[1]: sshd@11-86.109.11.215:22-64.225.98.83:60758.service: Deactivated successfully. Mar 25 02:32:42.925384 kubelet[3173]: I0325 02:32:42.925249 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:32:43.096613 systemd-networkd[1742]: calie99a2007441: Gained IPv6LL Mar 25 02:32:43.704417 kernel: bpftool[5660]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 25 02:32:43.874956 systemd-networkd[1742]: vxlan.calico: Link UP Mar 25 02:32:43.874962 systemd-networkd[1742]: vxlan.calico: Gained carrier Mar 25 02:32:43.970628 containerd[1835]: time="2025-03-25T02:32:43.970546632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:43.970829 containerd[1835]: time="2025-03-25T02:32:43.970785686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204" Mar 25 02:32:43.971172 containerd[1835]: time="2025-03-25T02:32:43.971158619Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:43.972051 containerd[1835]: time="2025-03-25T02:32:43.972038950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:43.972731 containerd[1835]: time="2025-03-25T02:32:43.972715739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 2.80914886s" Mar 25 02:32:43.972774 containerd[1835]: time="2025-03-25T02:32:43.972734757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 25 02:32:43.973183 containerd[1835]: time="2025-03-25T02:32:43.973170147Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 25 02:32:43.973681 containerd[1835]: time="2025-03-25T02:32:43.973663115Z" level=info msg="CreateContainer within sandbox \"6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 25 02:32:43.976138 containerd[1835]: time="2025-03-25T02:32:43.976124479Z" level=info msg="Container 2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:43.979200 containerd[1835]: time="2025-03-25T02:32:43.979180190Z" level=info msg="CreateContainer within sandbox \"6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d\"" Mar 25 02:32:43.979573 containerd[1835]: time="2025-03-25T02:32:43.979519933Z" level=info msg="StartContainer for \"2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d\"" Mar 25 02:32:43.980059 containerd[1835]: time="2025-03-25T02:32:43.980024509Z" level=info msg="connecting to shim 2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d" address="unix:///run/containerd/s/6bde6e9a0bf87feef3a458f56e4888f0cd1763ab4139ef596f02a8d76c169d11" protocol=ttrpc version=3 Mar 25 02:32:44.004479 systemd[1]: Started cri-containerd-2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d.scope - libcontainer container 2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d. Mar 25 02:32:44.031828 containerd[1835]: time="2025-03-25T02:32:44.031777157Z" level=info msg="StartContainer for \"2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d\" returns successfully" Mar 25 02:32:44.184398 kubelet[3173]: I0325 02:32:44.184364 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f4c6448c9-5k2s9" podStartSLOduration=23.387128144 podStartE2EDuration="27.184352486s" podCreationTimestamp="2025-03-25 02:32:17 +0000 UTC" firstStartedPulling="2025-03-25 02:32:40.175892293 +0000 UTC m=+36.189397340" lastFinishedPulling="2025-03-25 02:32:43.973116641 +0000 UTC m=+39.986621682" observedRunningTime="2025-03-25 02:32:44.184100436 +0000 UTC m=+40.197605476" watchObservedRunningTime="2025-03-25 02:32:44.184352486 +0000 UTC m=+40.197857524" Mar 25 02:32:45.180325 kubelet[3173]: I0325 02:32:45.180238 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:32:45.592581 systemd-networkd[1742]: vxlan.calico: Gained IPv6LL Mar 25 02:32:46.325000 containerd[1835]: time="2025-03-25T02:32:46.324942370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:46.325217 containerd[1835]: time="2025-03-25T02:32:46.325168212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 25 02:32:46.325537 containerd[1835]: time="2025-03-25T02:32:46.325494315Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:46.326350 containerd[1835]: time="2025-03-25T02:32:46.326312606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:46.326736 containerd[1835]: time="2025-03-25T02:32:46.326693077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.353504205s" Mar 25 02:32:46.326736 containerd[1835]: time="2025-03-25T02:32:46.326709708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\"" Mar 25 02:32:46.327147 containerd[1835]: time="2025-03-25T02:32:46.327135836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 25 02:32:46.330036 containerd[1835]: time="2025-03-25T02:32:46.330018376Z" level=info msg="CreateContainer within sandbox \"033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 25 02:32:46.332802 containerd[1835]: time="2025-03-25T02:32:46.332789403Z" level=info msg="Container 91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:46.335813 containerd[1835]: time="2025-03-25T02:32:46.335772282Z" level=info msg="CreateContainer within sandbox \"033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\"" Mar 25 02:32:46.336014 containerd[1835]: time="2025-03-25T02:32:46.336002531Z" level=info msg="StartContainer for \"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\"" Mar 25 02:32:46.336527 containerd[1835]: time="2025-03-25T02:32:46.336504756Z" level=info msg="connecting to shim 91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414" address="unix:///run/containerd/s/c8a0562900339c37433fece26702e78a4c241ed154d0ccfffebd343fc2ceaa85" protocol=ttrpc version=3 Mar 25 02:32:46.357603 systemd[1]: Started cri-containerd-91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414.scope - libcontainer container 91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414. 
Mar 25 02:32:46.384514 containerd[1835]: time="2025-03-25T02:32:46.384465181Z" level=info msg="StartContainer for \"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" returns successfully" Mar 25 02:32:46.732324 containerd[1835]: time="2025-03-25T02:32:46.732265252Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:46.732534 containerd[1835]: time="2025-03-25T02:32:46.732441242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 25 02:32:46.733690 containerd[1835]: time="2025-03-25T02:32:46.733641534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 406.491593ms" Mar 25 02:32:46.733690 containerd[1835]: time="2025-03-25T02:32:46.733656651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 25 02:32:46.734293 containerd[1835]: time="2025-03-25T02:32:46.734282143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 25 02:32:46.735016 containerd[1835]: time="2025-03-25T02:32:46.734987436Z" level=info msg="CreateContainer within sandbox \"ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 25 02:32:46.737898 containerd[1835]: time="2025-03-25T02:32:46.737858544Z" level=info msg="Container c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:46.740757 containerd[1835]: time="2025-03-25T02:32:46.740716791Z" level=info msg="CreateContainer within sandbox \"ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48\"" Mar 25 02:32:46.740993 containerd[1835]: time="2025-03-25T02:32:46.740933768Z" level=info msg="StartContainer for \"c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48\"" Mar 25 02:32:46.742537 containerd[1835]: time="2025-03-25T02:32:46.742510834Z" level=info msg="connecting to shim c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48" address="unix:///run/containerd/s/7aaf6c49547a2487eed679e7daf3f3437c5adae96aad2efd889e5d88b814568b" protocol=ttrpc version=3 Mar 25 02:32:46.763589 systemd[1]: Started cri-containerd-c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48.scope - libcontainer container c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48. 
Mar 25 02:32:46.791678 containerd[1835]: time="2025-03-25T02:32:46.791626877Z" level=info msg="StartContainer for \"c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48\" returns successfully" Mar 25 02:32:47.190606 kubelet[3173]: I0325 02:32:47.190513 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f4c6448c9-8cf5b" podStartSLOduration=24.643444048 podStartE2EDuration="30.190489892s" podCreationTimestamp="2025-03-25 02:32:17 +0000 UTC" firstStartedPulling="2025-03-25 02:32:41.187185385 +0000 UTC m=+37.200690425" lastFinishedPulling="2025-03-25 02:32:46.734231229 +0000 UTC m=+42.747736269" observedRunningTime="2025-03-25 02:32:47.190461492 +0000 UTC m=+43.203966534" watchObservedRunningTime="2025-03-25 02:32:47.190489892 +0000 UTC m=+43.203994936" Mar 25 02:32:47.197797 kubelet[3173]: I0325 02:32:47.197760 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-699bd55f66-nb546" podStartSLOduration=24.123416602 podStartE2EDuration="30.197746157s" podCreationTimestamp="2025-03-25 02:32:17 +0000 UTC" firstStartedPulling="2025-03-25 02:32:40.252754755 +0000 UTC m=+36.266259796" lastFinishedPulling="2025-03-25 02:32:46.327084311 +0000 UTC m=+42.340589351" observedRunningTime="2025-03-25 02:32:47.197591608 +0000 UTC m=+43.211096648" watchObservedRunningTime="2025-03-25 02:32:47.197746157 +0000 UTC m=+43.211251195" Mar 25 02:32:48.188129 kubelet[3173]: I0325 02:32:48.188065 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:32:48.188129 kubelet[3173]: I0325 02:32:48.188108 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:32:48.538648 containerd[1835]: time="2025-03-25T02:32:48.538562675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:48.538866 containerd[1835]: time="2025-03-25T02:32:48.538782109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 25 02:32:48.539128 containerd[1835]: time="2025-03-25T02:32:48.539115223Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:48.539991 containerd[1835]: time="2025-03-25T02:32:48.539968802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:32:48.540579 containerd[1835]: time="2025-03-25T02:32:48.540562167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.806265488s" Mar 25 02:32:48.540647 containerd[1835]: time="2025-03-25T02:32:48.540583819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 25 02:32:48.541485 containerd[1835]: 
time="2025-03-25T02:32:48.541473271Z" level=info msg="CreateContainer within sandbox \"ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 25 02:32:48.544694 containerd[1835]: time="2025-03-25T02:32:48.544653063Z" level=info msg="Container ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:32:48.549542 containerd[1835]: time="2025-03-25T02:32:48.549486375Z" level=info msg="CreateContainer within sandbox \"ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65\"" Mar 25 02:32:48.549731 containerd[1835]: time="2025-03-25T02:32:48.549692702Z" level=info msg="StartContainer for \"ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65\"" Mar 25 02:32:48.550492 containerd[1835]: time="2025-03-25T02:32:48.550479903Z" level=info msg="connecting to shim ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65" address="unix:///run/containerd/s/522edafe2a0580c202c344eb463e95e0896108c997c8dc819ea79ade24b9812a" protocol=ttrpc version=3 Mar 25 02:32:48.580654 systemd[1]: Started cri-containerd-ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65.scope - libcontainer container ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65. Mar 25 02:32:48.601418 containerd[1835]: time="2025-03-25T02:32:48.601388949Z" level=info msg="StartContainer for \"ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65\" returns successfully" Mar 25 02:32:49.072896 kubelet[3173]: I0325 02:32:49.072794 3173 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 25 02:32:49.072896 kubelet[3173]: I0325 02:32:49.072862 3173 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 25 02:32:49.224969 kubelet[3173]: I0325 02:32:49.224833 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ck5hf" podStartSLOduration=22.940057479 podStartE2EDuration="32.224791008s" podCreationTimestamp="2025-03-25 02:32:17 +0000 UTC" firstStartedPulling="2025-03-25 02:32:39.256208922 +0000 UTC m=+35.269713962" lastFinishedPulling="2025-03-25 02:32:48.54094245 +0000 UTC m=+44.554447491" observedRunningTime="2025-03-25 02:32:49.223668514 +0000 UTC m=+45.237173652" watchObservedRunningTime="2025-03-25 02:32:49.224791008 +0000 UTC m=+45.238296099" Mar 25 02:32:54.153593 kubelet[3173]: I0325 02:32:54.153541 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:32:54.232287 containerd[1835]: time="2025-03-25T02:32:54.232249672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"d36858ccd77e6ce6b5b4c7205eb417d732c57ef642f31954d421ba95a9f9da9e\" pid:6049 exited_at:{seconds:1742869974 nanos:232009025}" Mar 25 02:32:54.301522 containerd[1835]: time="2025-03-25T02:32:54.301485657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"ad6382ee5cd7119e24239f5cedf2b349003b340bb17515ab588df5f4c9551433\" pid:6071 
exited_at:{seconds:1742869974 nanos:301294329}" Mar 25 02:32:54.677116 containerd[1835]: time="2025-03-25T02:32:54.677089610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"f12be3aae8b7a0f8a918a111c12cdb9d89d211e23e421af66bbfed1a6db56f54\" pid:6092 exited_at:{seconds:1742869974 nanos:676901255}" Mar 25 02:33:08.146779 containerd[1835]: time="2025-03-25T02:33:08.146710067Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"722736a667cf440d441083aeac52f9f7f9e05b67f01879812a740578211145a9\" pid:6131 exited_at:{seconds:1742869988 nanos:146565601}" Mar 25 02:33:11.973751 systemd[1]: Started sshd@12-86.109.11.215:22-74.82.195.39:38910.service - OpenSSH per-connection server daemon (74.82.195.39:38910). Mar 25 02:33:12.859493 sshd[6148]: Invalid user dev from 74.82.195.39 port 38910 Mar 25 02:33:13.041497 sshd[6148]: Received disconnect from 74.82.195.39 port 38910:11: Bye Bye [preauth] Mar 25 02:33:13.041497 sshd[6148]: Disconnected from invalid user dev 74.82.195.39 port 38910 [preauth] Mar 25 02:33:13.044775 systemd[1]: sshd@12-86.109.11.215:22-74.82.195.39:38910.service: Deactivated successfully. Mar 25 02:33:20.049220 kubelet[3173]: I0325 02:33:20.049164 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:33:22.807394 kubelet[3173]: I0325 02:33:22.807260 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:33:24.201712 containerd[1835]: time="2025-03-25T02:33:24.201680537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"fe65b24165ae11ea734024c5249cd6d70cfb99f8d29f94bc5e929fbd220843da\" pid:6176 exited_at:{seconds:1742870004 nanos:201504033}" Mar 25 02:33:24.689562 containerd[1835]: time="2025-03-25T02:33:24.689531778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"0bcad2bf3bf239baa17114a930a921911ed843bfb32eff7c1b242647a513ee25\" pid:6197 exited_at:{seconds:1742870004 nanos:689310815}" Mar 25 02:33:32.954874 systemd[1]: Started sshd@13-86.109.11.215:22-45.148.10.240:59452.service - OpenSSH per-connection server daemon (45.148.10.240:59452). Mar 25 02:33:33.531349 sshd[6216]: Invalid user ysz from 45.148.10.240 port 59452 Mar 25 02:33:33.668006 sshd[6216]: Connection closed by invalid user ysz 45.148.10.240 port 59452 [preauth] Mar 25 02:33:33.671320 systemd[1]: sshd@13-86.109.11.215:22-45.148.10.240:59452.service: Deactivated successfully. Mar 25 02:33:43.869492 systemd[1]: Started sshd@14-86.109.11.215:22-186.118.142.216:58122.service - OpenSSH per-connection server daemon (186.118.142.216:58122). Mar 25 02:33:44.599387 sshd[6223]: Invalid user jhernandez from 186.118.142.216 port 58122 Mar 25 02:33:44.730327 sshd[6223]: Received disconnect from 186.118.142.216 port 58122:11: Bye Bye [preauth] Mar 25 02:33:44.730327 sshd[6223]: Disconnected from invalid user jhernandez 186.118.142.216 port 58122 [preauth] Mar 25 02:33:44.733629 systemd[1]: sshd@14-86.109.11.215:22-186.118.142.216:58122.service: Deactivated successfully. 
Mar 25 02:33:54.255860 containerd[1835]: time="2025-03-25T02:33:54.255826019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"021fc56acfad175110eb09c9a6a5ef233634858ed888ff8dd735c55541af2172\" pid:6239 exited_at:{seconds:1742870034 nanos:255644334}" Mar 25 02:33:54.681220 containerd[1835]: time="2025-03-25T02:33:54.681169207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"c67fce59c175b46af3666302548fb4b4bf20a6c7c16d0443c4a74b970e75b7b0\" pid:6261 exited_at:{seconds:1742870034 nanos:680950764}" Mar 25 02:34:08.122701 containerd[1835]: time="2025-03-25T02:34:08.122679753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"88fccc0f0bb57207d892013defcb6fb3e28c24a0827f24a8496feb88fd92ba8d\" pid:6302 exited_at:{seconds:1742870048 nanos:122566225}" Mar 25 02:34:24.201186 containerd[1835]: time="2025-03-25T02:34:24.201156652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"a3551269d90b8921af91af19704b07499673cc57f4425d4bb5f9eefb9b3ba36a\" pid:6346 exited_at:{seconds:1742870064 nanos:200987820}" Mar 25 02:34:24.654517 systemd[1]: Started sshd@15-86.109.11.215:22-64.62.156.117:40163.service - OpenSSH per-connection server daemon (64.62.156.117:40163). Mar 25 02:34:24.666884 sshd[6375]: banner exchange: Connection from 64.62.156.117 port 40163: invalid format Mar 25 02:34:24.667235 systemd[1]: sshd@15-86.109.11.215:22-64.62.156.117:40163.service: Deactivated successfully. Mar 25 02:34:24.673364 containerd[1835]: time="2025-03-25T02:34:24.673342796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"8c1159ae8edb54009d34feaf694152e7e26fa0c1f378cb289e95ee81f38b4a3f\" pid:6369 exited_at:{seconds:1742870064 nanos:673015949}" Mar 25 02:34:44.379547 systemd[1]: Started sshd@16-86.109.11.215:22-74.82.195.39:38098.service - OpenSSH per-connection server daemon (74.82.195.39:38098). Mar 25 02:34:45.244735 sshd[6394]: Invalid user jh from 74.82.195.39 port 38098 Mar 25 02:34:45.547313 sshd[6394]: Received disconnect from 74.82.195.39 port 38098:11: Bye Bye [preauth] Mar 25 02:34:45.547313 sshd[6394]: Disconnected from invalid user jh 74.82.195.39 port 38098 [preauth] Mar 25 02:34:45.550584 systemd[1]: sshd@16-86.109.11.215:22-74.82.195.39:38098.service: Deactivated successfully. Mar 25 02:34:52.543684 systemd[1]: Started sshd@17-86.109.11.215:22-186.118.142.216:57692.service - OpenSSH per-connection server daemon (186.118.142.216:57692). Mar 25 02:34:53.243440 sshd[6399]: Invalid user python from 186.118.142.216 port 57692 Mar 25 02:34:53.373910 sshd[6399]: Received disconnect from 186.118.142.216 port 57692:11: Bye Bye [preauth] Mar 25 02:34:53.373910 sshd[6399]: Disconnected from invalid user python 186.118.142.216 port 57692 [preauth] Mar 25 02:34:53.377234 systemd[1]: sshd@17-86.109.11.215:22-186.118.142.216:57692.service: Deactivated successfully. 
Mar 25 02:34:54.248576 containerd[1835]: time="2025-03-25T02:34:54.248546101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"270875690459ac2d929f9c188b7248f3bfe074892da465060786dd56aab62cf1\" pid:6414 exited_at:{seconds:1742870094 nanos:248345747}" Mar 25 02:34:54.689467 containerd[1835]: time="2025-03-25T02:34:54.689407117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"99bcd23cdfda8db2022428f815155866e76aae6259c77d4f7235a019bd328915\" pid:6437 exited_at:{seconds:1742870094 nanos:689153716}" Mar 25 02:35:08.124424 containerd[1835]: time="2025-03-25T02:35:08.124354128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"d20f7a3f0ce6a07e613aa94e457bf57120c3ffdc5173860d128cc80cdf6a7dfe\" pid:6470 exited_at:{seconds:1742870108 nanos:124200437}" Mar 25 02:35:24.232397 containerd[1835]: time="2025-03-25T02:35:24.232327694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"efd078c9d6b3d33098c6243cfb0f8d8fca80de2c3b89246586a5dab990173f98\" pid:6500 exited_at:{seconds:1742870124 nanos:232128907}" Mar 25 02:35:24.715226 containerd[1835]: time="2025-03-25T02:35:24.715192386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"4638fa2e018bcc0a968ae8ff9478a4158adaf1dcfc16ffea44d7cb43d3c7b320\" pid:6520 exited_at:{seconds:1742870124 nanos:714993669}" Mar 25 02:35:54.208662 containerd[1835]: time="2025-03-25T02:35:54.208601556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"8460044db9510cc81c9ee984765587b045765ddcce7ac25fe9f4ecf41bf4faa8\" pid:6556 exited_at:{seconds:1742870154 nanos:208429017}" Mar 25 02:35:54.684235 containerd[1835]: time="2025-03-25T02:35:54.684203219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"5990fa6e1abef4e4c6423999552f521de5dc9a7206320be999cf321789edfe03\" pid:6578 exited_at:{seconds:1742870154 nanos:683992607}" Mar 25 02:36:02.717458 systemd[1]: Started sshd@18-86.109.11.215:22-186.118.142.216:56620.service - OpenSSH per-connection server daemon (186.118.142.216:56620). Mar 25 02:36:03.419927 sshd[6612]: Invalid user fergusonk from 186.118.142.216 port 56620 Mar 25 02:36:03.549312 sshd[6612]: Received disconnect from 186.118.142.216 port 56620:11: Bye Bye [preauth] Mar 25 02:36:03.549312 sshd[6612]: Disconnected from invalid user fergusonk 186.118.142.216 port 56620 [preauth] Mar 25 02:36:03.552711 systemd[1]: sshd@18-86.109.11.215:22-186.118.142.216:56620.service: Deactivated successfully. Mar 25 02:36:08.111837 containerd[1835]: time="2025-03-25T02:36:08.111808002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"c9f2ad9204a056e016400565f6d1ead6e6c4c8b7eef753151e8acc60f6204da0\" pid:6629 exited_at:{seconds:1742870168 nanos:111630414}" Mar 25 02:36:19.569749 systemd[1]: Started sshd@19-86.109.11.215:22-74.82.195.39:37282.service - OpenSSH per-connection server daemon (74.82.195.39:37282). 
Mar 25 02:36:20.561352 sshd[6642]: Invalid user kxw from 74.82.195.39 port 37282 Mar 25 02:36:20.740149 sshd[6642]: Received disconnect from 74.82.195.39 port 37282:11: Bye Bye [preauth] Mar 25 02:36:20.740149 sshd[6642]: Disconnected from invalid user kxw 74.82.195.39 port 37282 [preauth] Mar 25 02:36:20.743518 systemd[1]: sshd@19-86.109.11.215:22-74.82.195.39:37282.service: Deactivated successfully. Mar 25 02:36:24.195479 containerd[1835]: time="2025-03-25T02:36:24.195456568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"7fa977338de6337a9aa393474378158b05f8213a3e7f42038412b09d182c4586\" pid:6658 exited_at:{seconds:1742870184 nanos:195328721}" Mar 25 02:36:24.679815 containerd[1835]: time="2025-03-25T02:36:24.679788673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"88ae795ad84d8dce575260f05321c8cc280245b6d6c5e2b1b8842a6d315180a0\" pid:6680 exited_at:{seconds:1742870184 nanos:679564215}" Mar 25 02:36:29.487069 update_engine[1820]: I20250325 02:36:29.486974 1820 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 25 02:36:29.487069 update_engine[1820]: I20250325 02:36:29.487073 1820 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.487494 1820 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.488533 1820 omaha_request_params.cc:62] Current group set to alpha Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.488768 1820 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.488809 1820 update_attempter.cc:643] Scheduling an action processor start. Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.488865 1820 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.488968 1820 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.489147 1820 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.489177 1820 omaha_request_action.cc:272] Request: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.489195 1820 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.492067 1820 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.492259 1820 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 25 02:36:29.512811 update_engine[1820]: E20250325 02:36:29.492600 1820 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:36:29.512811 update_engine[1820]: I20250325 02:36:29.492631 1820 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 25 02:36:29.513162 locksmithd[1871]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 25 02:36:39.397195 update_engine[1820]: I20250325 02:36:39.397013 1820 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:36:39.398140 update_engine[1820]: I20250325 02:36:39.397607 1820 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:36:39.398246 update_engine[1820]: I20250325 02:36:39.398197 1820 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 02:36:39.398875 update_engine[1820]: E20250325 02:36:39.398761 1820 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:36:39.399065 update_engine[1820]: I20250325 02:36:39.398924 1820 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 25 02:36:49.396713 update_engine[1820]: I20250325 02:36:49.396548 1820 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:36:49.397620 update_engine[1820]: I20250325 02:36:49.397102 1820 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:36:49.397843 update_engine[1820]: I20250325 02:36:49.397773 1820 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 02:36:49.398298 update_engine[1820]: E20250325 02:36:49.398175 1820 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:36:49.398496 update_engine[1820]: I20250325 02:36:49.398313 1820 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 25 02:36:54.258873 containerd[1835]: time="2025-03-25T02:36:54.258844092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"ea6f5f63a911e81f62432aedf2c8663838e2cb2fe8111d458923dbbf26736e29\" pid:6716 exited_at:{seconds:1742870214 nanos:258672543}" Mar 25 02:36:54.677407 containerd[1835]: time="2025-03-25T02:36:54.677376179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"602a451d6467615ca24fc3c7ac9971a6de14f9023deda96a165b43f83393537c\" pid:6738 exited_at:{seconds:1742870214 nanos:677176419}" Mar 25 02:36:59.397712 update_engine[1820]: I20250325 02:36:59.397543 1820 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:36:59.398734 update_engine[1820]: I20250325 02:36:59.398090 1820 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:36:59.398852 update_engine[1820]: I20250325 02:36:59.398733 1820 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 25 02:36:59.399511 update_engine[1820]: E20250325 02:36:59.399384 1820 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:36:59.399701 update_engine[1820]: I20250325 02:36:59.399533 1820 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 25 02:36:59.399701 update_engine[1820]: I20250325 02:36:59.399564 1820 omaha_request_action.cc:617] Omaha request response: Mar 25 02:36:59.400014 update_engine[1820]: E20250325 02:36:59.399726 1820 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 25 02:36:59.400014 update_engine[1820]: I20250325 02:36:59.399775 1820 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 25 02:36:59.400014 update_engine[1820]: I20250325 02:36:59.399793 1820 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 25 02:36:59.400014 update_engine[1820]: I20250325 02:36:59.399811 1820 update_attempter.cc:306] Processing Done. Mar 25 02:36:59.400014 update_engine[1820]: E20250325 02:36:59.399842 1820 update_attempter.cc:619] Update failed. Mar 25 02:36:59.400014 update_engine[1820]: I20250325 02:36:59.399859 1820 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 25 02:36:59.400014 update_engine[1820]: I20250325 02:36:59.399875 1820 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 25 02:36:59.400014 update_engine[1820]: I20250325 02:36:59.399891 1820 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 25 02:36:59.400697 update_engine[1820]: I20250325 02:36:59.400048 1820 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 25 02:36:59.400697 update_engine[1820]: I20250325 02:36:59.400110 1820 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 25 02:36:59.400697 update_engine[1820]: I20250325 02:36:59.400130 1820 omaha_request_action.cc:272] Request: Mar 25 02:36:59.400697 update_engine[1820]: Mar 25 02:36:59.400697 update_engine[1820]: Mar 25 02:36:59.400697 update_engine[1820]: Mar 25 02:36:59.400697 update_engine[1820]: Mar 25 02:36:59.400697 update_engine[1820]: Mar 25 02:36:59.400697 update_engine[1820]: Mar 25 02:36:59.400697 update_engine[1820]: I20250325 02:36:59.400147 1820 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:36:59.400697 update_engine[1820]: I20250325 02:36:59.400624 1820 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:36:59.401605 locksmithd[1871]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401124 1820 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 25 02:36:59.402247 update_engine[1820]: E20250325 02:36:59.401624 1820 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401731 1820 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401755 1820 omaha_request_action.cc:617] Omaha request response: Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401773 1820 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401788 1820 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401803 1820 update_attempter.cc:306] Processing Done. Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401819 1820 update_attempter.cc:310] Error event sent. Mar 25 02:36:59.402247 update_engine[1820]: I20250325 02:36:59.401844 1820 update_check_scheduler.cc:74] Next update check in 42m44s Mar 25 02:36:59.403028 locksmithd[1871]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 25 02:36:59.784623 containerd[1835]: time="2025-03-25T02:36:59.784274297Z" level=warning msg="container event discarded" container=71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1 type=CONTAINER_CREATED_EVENT Mar 25 02:36:59.784623 containerd[1835]: time="2025-03-25T02:36:59.784449106Z" level=warning msg="container event discarded" container=71dd7f520d2811d718b802443d996bd5cfabce43f68ffda18919c93f6755abe1 type=CONTAINER_STARTED_EVENT Mar 25 02:36:59.797936 containerd[1835]: time="2025-03-25T02:36:59.797780578Z" level=warning msg="container event discarded" container=791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3 type=CONTAINER_CREATED_EVENT Mar 25 02:36:59.797936 containerd[1835]: time="2025-03-25T02:36:59.797891079Z" level=warning msg="container event discarded" container=791aea4de1ac8e938eb9280b8fbd7069c6e736b2553425218053037cea5ef8b3 type=CONTAINER_STARTED_EVENT Mar 25 02:36:59.797936 containerd[1835]: time="2025-03-25T02:36:59.797930987Z" level=warning msg="container event discarded" container=a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53 type=CONTAINER_CREATED_EVENT Mar 25 02:36:59.797936 containerd[1835]: time="2025-03-25T02:36:59.797955344Z" level=warning msg="container event discarded" container=a5daf83d2250dced5ae4c8aa20587aadfa65bb191fc5bde860938cc5acab7b53 type=CONTAINER_STARTED_EVENT Mar 25 02:36:59.798524 containerd[1835]: time="2025-03-25T02:36:59.797979345Z" level=warning msg="container event discarded" container=32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c type=CONTAINER_CREATED_EVENT Mar 25 02:36:59.798524 containerd[1835]: time="2025-03-25T02:36:59.798002382Z" level=warning msg="container event discarded" container=e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0 type=CONTAINER_CREATED_EVENT Mar 25 02:36:59.798524 containerd[1835]: time="2025-03-25T02:36:59.798022708Z" level=warning msg="container event discarded" container=0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e type=CONTAINER_CREATED_EVENT Mar 25 02:36:59.854559 containerd[1835]: time="2025-03-25T02:36:59.854366937Z" level=warning msg="container event discarded" 
container=e1a9547b2391acddfc19a9594a0aed702bbd316e9a2a072e3cdc4fb42b4b37f0 type=CONTAINER_STARTED_EVENT Mar 25 02:36:59.854559 containerd[1835]: time="2025-03-25T02:36:59.854509705Z" level=warning msg="container event discarded" container=32e32f63813130bf3e203a4873d8289866893318111da9be8487d535ead0500c type=CONTAINER_STARTED_EVENT Mar 25 02:36:59.854559 containerd[1835]: time="2025-03-25T02:36:59.854554060Z" level=warning msg="container event discarded" container=0243664af897acceb707602cfd91ca0ea1680891c2c45f74941a6495e2b8da9e type=CONTAINER_STARTED_EVENT Mar 25 02:37:08.114553 containerd[1835]: time="2025-03-25T02:37:08.114504132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"027ec43e9be4c7baf3b79c7d67699d50b9ffb189b9f9e08617127b80e8880f5c\" pid:6771 exited_at:{seconds:1742870228 nanos:114367687}" Mar 25 02:37:09.472320 containerd[1835]: time="2025-03-25T02:37:09.472160082Z" level=warning msg="container event discarded" container=aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb type=CONTAINER_CREATED_EVENT Mar 25 02:37:09.472320 containerd[1835]: time="2025-03-25T02:37:09.472307398Z" level=warning msg="container event discarded" container=aa13ed8a6f79ad8a7d35034d745b2f9d7e211df9234c034430104bba02f25bbb type=CONTAINER_STARTED_EVENT Mar 25 02:37:09.472320 containerd[1835]: time="2025-03-25T02:37:09.472337297Z" level=warning msg="container event discarded" container=dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5 type=CONTAINER_CREATED_EVENT Mar 25 02:37:09.518895 containerd[1835]: time="2025-03-25T02:37:09.518750997Z" level=warning msg="container event discarded" container=dca52d584e42d5a4d85ad118a3c23a0c0d643c14ce86318658becb935ce487a5 type=CONTAINER_STARTED_EVENT Mar 25 02:37:09.590419 containerd[1835]: time="2025-03-25T02:37:09.590253772Z" level=warning msg="container event discarded" container=153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999 type=CONTAINER_CREATED_EVENT Mar 25 02:37:09.590419 containerd[1835]: time="2025-03-25T02:37:09.590344598Z" level=warning msg="container event discarded" container=153985f49fe81897b211ca25324ffe461efa11387f8f2b18f801ad3a2370d999 type=CONTAINER_STARTED_EVENT Mar 25 02:37:14.278412 systemd[1]: Started sshd@20-86.109.11.215:22-186.118.142.216:41436.service - OpenSSH per-connection server daemon (186.118.142.216:41436). Mar 25 02:37:14.478906 containerd[1835]: time="2025-03-25T02:37:14.478731284Z" level=warning msg="container event discarded" container=4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449 type=CONTAINER_CREATED_EVENT Mar 25 02:37:14.510438 containerd[1835]: time="2025-03-25T02:37:14.510236570Z" level=warning msg="container event discarded" container=4714d022244df392e831a8aad95645070f21bbcce4deab4617117d7001818449 type=CONTAINER_STARTED_EVENT Mar 25 02:37:15.015822 sshd[6784]: Invalid user hocelis from 186.118.142.216 port 41436 Mar 25 02:37:15.145866 sshd[6784]: Received disconnect from 186.118.142.216 port 41436:11: Bye Bye [preauth] Mar 25 02:37:15.145866 sshd[6784]: Disconnected from invalid user hocelis 186.118.142.216 port 41436 [preauth] Mar 25 02:37:15.149181 systemd[1]: sshd@20-86.109.11.215:22-186.118.142.216:41436.service: Deactivated successfully. 
Mar 25 02:37:17.695611 containerd[1835]: time="2025-03-25T02:37:17.695463972Z" level=warning msg="container event discarded" container=4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f type=CONTAINER_CREATED_EVENT Mar 25 02:37:17.695611 containerd[1835]: time="2025-03-25T02:37:17.695554832Z" level=warning msg="container event discarded" container=4cd08ea881d0e21aef4827adbfe3d0ffa276213c79aaea06b818f354fcb5478f type=CONTAINER_STARTED_EVENT Mar 25 02:37:17.710122 containerd[1835]: time="2025-03-25T02:37:17.709964477Z" level=warning msg="container event discarded" container=0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38 type=CONTAINER_CREATED_EVENT Mar 25 02:37:17.710122 containerd[1835]: time="2025-03-25T02:37:17.710059445Z" level=warning msg="container event discarded" container=0b1c55e36997127590b6072ce2d1a629514a87fcf47c0814ba0ef4510910af38 type=CONTAINER_STARTED_EVENT Mar 25 02:37:19.354999 containerd[1835]: time="2025-03-25T02:37:19.354830112Z" level=warning msg="container event discarded" container=427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b type=CONTAINER_CREATED_EVENT Mar 25 02:37:19.404533 containerd[1835]: time="2025-03-25T02:37:19.404360608Z" level=warning msg="container event discarded" container=427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b type=CONTAINER_STARTED_EVENT Mar 25 02:37:19.640474 containerd[1835]: time="2025-03-25T02:37:19.640181746Z" level=warning msg="container event discarded" container=427eb809e53fb0cbafb228ea6d17c7971f548cd5f012199c85151a709e46385b type=CONTAINER_STOPPED_EVENT Mar 25 02:37:21.634444 containerd[1835]: time="2025-03-25T02:37:21.634279887Z" level=warning msg="container event discarded" container=5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d type=CONTAINER_CREATED_EVENT Mar 25 02:37:21.684812 containerd[1835]: time="2025-03-25T02:37:21.684653789Z" level=warning msg="container event discarded" container=5b14cd56e1d5383e71120848cd0b7c671710ce836275763e24ad0abbaf63a12d type=CONTAINER_STARTED_EVENT Mar 25 02:37:24.251767 containerd[1835]: time="2025-03-25T02:37:24.251694670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"05dbfab392fe1886944ebaa0e22591af20db9f0217c998a0b535c93f2538a279\" pid:6800 exited_at:{seconds:1742870244 nanos:251476772}" Mar 25 02:37:24.672114 containerd[1835]: time="2025-03-25T02:37:24.672078032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"0a73584259474f38755ef8a42c47616ac286bfca87b57219144d609628bcbeb7\" pid:6822 exited_at:{seconds:1742870244 nanos:671808688}" Mar 25 02:37:25.393793 containerd[1835]: time="2025-03-25T02:37:25.393608342Z" level=warning msg="container event discarded" container=642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03 type=CONTAINER_CREATED_EVENT Mar 25 02:37:25.431447 containerd[1835]: time="2025-03-25T02:37:25.431273686Z" level=warning msg="container event discarded" container=642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03 type=CONTAINER_STARTED_EVENT Mar 25 02:37:26.609369 containerd[1835]: time="2025-03-25T02:37:26.609198784Z" level=warning msg="container event discarded" container=642f094b08d1cd5b97e3488a9949368bdd2e0b85d0441f72fb085ce5bf1c1f03 type=CONTAINER_STOPPED_EVENT Mar 25 02:37:32.004931 containerd[1835]: time="2025-03-25T02:37:32.004667630Z" level=warning msg="container event 
discarded" container=800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c type=CONTAINER_CREATED_EVENT Mar 25 02:37:32.051118 containerd[1835]: time="2025-03-25T02:37:32.050941238Z" level=warning msg="container event discarded" container=800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c type=CONTAINER_STARTED_EVENT Mar 25 02:37:38.175357 containerd[1835]: time="2025-03-25T02:37:38.175148981Z" level=warning msg="container event discarded" container=e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685 type=CONTAINER_CREATED_EVENT Mar 25 02:37:38.175357 containerd[1835]: time="2025-03-25T02:37:38.175292147Z" level=warning msg="container event discarded" container=e963f29c736ca815fa6f6f2b0d1aa564c35ff9cc278c9b3fa707092130a34685 type=CONTAINER_STARTED_EVENT Mar 25 02:37:38.175357 containerd[1835]: time="2025-03-25T02:37:38.175334135Z" level=warning msg="container event discarded" container=73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7 type=CONTAINER_CREATED_EVENT Mar 25 02:37:38.216898 containerd[1835]: time="2025-03-25T02:37:38.216759481Z" level=warning msg="container event discarded" container=73e1726fbe07efbc2b55b8c3a5642a8a0159bcec2aa64e4bdce043f02b8000a7 type=CONTAINER_STARTED_EVENT Mar 25 02:37:39.173079 containerd[1835]: time="2025-03-25T02:37:39.172915829Z" level=warning msg="container event discarded" container=76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318 type=CONTAINER_CREATED_EVENT Mar 25 02:37:39.173079 containerd[1835]: time="2025-03-25T02:37:39.173019515Z" level=warning msg="container event discarded" container=76f84520fb025778d1eb59f006a5982b5d64134f24f32489abd751fc92448318 type=CONTAINER_STARTED_EVENT Mar 25 02:37:39.173079 containerd[1835]: time="2025-03-25T02:37:39.173057442Z" level=warning msg="container event discarded" container=61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c type=CONTAINER_CREATED_EVENT Mar 25 02:37:39.218698 containerd[1835]: time="2025-03-25T02:37:39.218554065Z" level=warning msg="container event discarded" container=61048997625ae927e4f6f95ab4c19f56e63c5e4a02b5457a7e5b89dfdc0b395c type=CONTAINER_STARTED_EVENT Mar 25 02:37:39.265948 containerd[1835]: time="2025-03-25T02:37:39.265916289Z" level=warning msg="container event discarded" container=ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee type=CONTAINER_CREATED_EVENT Mar 25 02:37:39.265948 containerd[1835]: time="2025-03-25T02:37:39.265943313Z" level=warning msg="container event discarded" container=ac59134bf2b9157d79fe1ab7da9098eea2aa6d8e28a685b60bb60e9fc10735ee type=CONTAINER_STARTED_EVENT Mar 25 02:37:40.185762 containerd[1835]: time="2025-03-25T02:37:40.185706883Z" level=warning msg="container event discarded" container=6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22 type=CONTAINER_CREATED_EVENT Mar 25 02:37:40.185762 containerd[1835]: time="2025-03-25T02:37:40.185725013Z" level=warning msg="container event discarded" container=6b2ac5f729c3e62863479e10ab4ff0ce9ef2643d36e3bde9879f884317f53a22 type=CONTAINER_STARTED_EVENT Mar 25 02:37:40.263213 containerd[1835]: time="2025-03-25T02:37:40.263072727Z" level=warning msg="container event discarded" container=033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10 type=CONTAINER_CREATED_EVENT Mar 25 02:37:40.263213 containerd[1835]: time="2025-03-25T02:37:40.263187828Z" level=warning msg="container event discarded" container=033b32ae116b6afbdc811eec5753af97c7b8f5aaa5d836c6afd035596f4e6a10 type=CONTAINER_STARTED_EVENT Mar 
25 02:37:41.181281 containerd[1835]: time="2025-03-25T02:37:41.181151497Z" level=warning msg="container event discarded" container=230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be type=CONTAINER_CREATED_EVENT Mar 25 02:37:41.197785 containerd[1835]: time="2025-03-25T02:37:41.197631614Z" level=warning msg="container event discarded" container=ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b type=CONTAINER_CREATED_EVENT Mar 25 02:37:41.197785 containerd[1835]: time="2025-03-25T02:37:41.197742410Z" level=warning msg="container event discarded" container=ac736a96a61d9b92fae567b3f072c721c122148f6f6b484f2f13e63e139e551b type=CONTAINER_STARTED_EVENT Mar 25 02:37:41.197785 containerd[1835]: time="2025-03-25T02:37:41.197781411Z" level=warning msg="container event discarded" container=230f0df506bb523d254492b441574a7708145a16144f8101f987f8557e83b0be type=CONTAINER_STARTED_EVENT Mar 25 02:37:43.990059 containerd[1835]: time="2025-03-25T02:37:43.989922079Z" level=warning msg="container event discarded" container=2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d type=CONTAINER_CREATED_EVENT Mar 25 02:37:44.041489 containerd[1835]: time="2025-03-25T02:37:44.041454043Z" level=warning msg="container event discarded" container=2bb71d8350414041d26cdc5cfc7dc997396cc409e96d2620a7d19dc7f610076d type=CONTAINER_STARTED_EVENT Mar 25 02:37:46.346504 containerd[1835]: time="2025-03-25T02:37:46.346357277Z" level=warning msg="container event discarded" container=91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414 type=CONTAINER_CREATED_EVENT Mar 25 02:37:46.394891 containerd[1835]: time="2025-03-25T02:37:46.394709915Z" level=warning msg="container event discarded" container=91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414 type=CONTAINER_STARTED_EVENT Mar 25 02:37:46.750705 containerd[1835]: time="2025-03-25T02:37:46.750437692Z" level=warning msg="container event discarded" container=c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48 type=CONTAINER_CREATED_EVENT Mar 25 02:37:46.802106 containerd[1835]: time="2025-03-25T02:37:46.801957932Z" level=warning msg="container event discarded" container=c1b0d7816514eca3a75fe404bb0ac0f3073b10194427cdabcbfbca5a2a410f48 type=CONTAINER_STARTED_EVENT Mar 25 02:37:48.559439 containerd[1835]: time="2025-03-25T02:37:48.559231415Z" level=warning msg="container event discarded" container=ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65 type=CONTAINER_CREATED_EVENT Mar 25 02:37:48.611968 containerd[1835]: time="2025-03-25T02:37:48.611798398Z" level=warning msg="container event discarded" container=ae3d78d13af315fd738f752dcacd0a99dfa7d1d6cbd8b5824afe5791a761db65 type=CONTAINER_STARTED_EVENT Mar 25 02:37:51.378470 systemd[1]: Started sshd@21-86.109.11.215:22-185.93.89.118:50326.service - OpenSSH per-connection server daemon (185.93.89.118:50326). 
Mar 25 02:37:54.249368 containerd[1835]: time="2025-03-25T02:37:54.249340379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"d4065e667e89af787ed6e97968f455ccde04a908f641460389cc29c71af97161\" pid:6875 exited_at:{seconds:1742870274 nanos:249207259}" Mar 25 02:37:54.677339 containerd[1835]: time="2025-03-25T02:37:54.677310496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"a9bd877b8324815d55bfabca26958605da7c909f369383a82b337c12148b8e3b\" pid:6897 exited_at:{seconds:1742870274 nanos:677019255}" Mar 25 02:37:54.875220 systemd[1]: Started sshd@22-86.109.11.215:22-74.82.195.39:36464.service - OpenSSH per-connection server daemon (74.82.195.39:36464). Mar 25 02:37:55.759994 sshd[6915]: Invalid user hkuspace from 74.82.195.39 port 36464 Mar 25 02:37:55.939917 sshd[6915]: Received disconnect from 74.82.195.39 port 36464:11: Bye Bye [preauth] Mar 25 02:37:55.939917 sshd[6915]: Disconnected from invalid user hkuspace 74.82.195.39 port 36464 [preauth] Mar 25 02:37:55.943211 systemd[1]: sshd@22-86.109.11.215:22-74.82.195.39:36464.service: Deactivated successfully. Mar 25 02:38:05.550275 systemd[1]: Started sshd@23-86.109.11.215:22-218.92.0.208:39906.service - OpenSSH per-connection server daemon (218.92.0.208:39906). Mar 25 02:38:05.706842 sshd[6932]: Unable to negotiate with 218.92.0.208 port 39906: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Mar 25 02:38:05.708954 systemd[1]: sshd@23-86.109.11.215:22-218.92.0.208:39906.service: Deactivated successfully. Mar 25 02:38:06.177064 systemd[1]: Started sshd@24-86.109.11.215:22-139.178.68.195:56434.service - OpenSSH per-connection server daemon (139.178.68.195:56434). Mar 25 02:38:06.264461 sshd[6940]: Accepted publickey for core from 139.178.68.195 port 56434 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:06.265702 sshd-session[6940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:06.270648 systemd-logind[1815]: New session 12 of user core. Mar 25 02:38:06.291730 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 02:38:06.388929 sshd[6942]: Connection closed by 139.178.68.195 port 56434 Mar 25 02:38:06.389110 sshd-session[6940]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:06.391299 systemd[1]: sshd@24-86.109.11.215:22-139.178.68.195:56434.service: Deactivated successfully. Mar 25 02:38:06.392350 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 02:38:06.392868 systemd-logind[1815]: Session 12 logged out. Waiting for processes to exit. Mar 25 02:38:06.393488 systemd-logind[1815]: Removed session 12. Mar 25 02:38:08.111565 containerd[1835]: time="2025-03-25T02:38:08.111540425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"d18dcfb7bc9283be1053fd39f11a7d469efeb96396c53f0e71123e577f1d0580\" pid:6983 exited_at:{seconds:1742870288 nanos:111439672}" Mar 25 02:38:10.749973 sshd[6862]: Connection closed by authenticating user root 185.93.89.118 port 50326 [preauth] Mar 25 02:38:10.753501 systemd[1]: sshd@21-86.109.11.215:22-185.93.89.118:50326.service: Deactivated successfully. 
Mar 25 02:38:10.919732 systemd[1]: Started sshd@25-86.109.11.215:22-185.93.89.118:7816.service - OpenSSH per-connection server daemon (185.93.89.118:7816). Mar 25 02:38:11.413398 systemd[1]: Started sshd@26-86.109.11.215:22-139.178.68.195:56450.service - OpenSSH per-connection server daemon (139.178.68.195:56450). Mar 25 02:38:11.455166 sshd[7001]: Accepted publickey for core from 139.178.68.195 port 56450 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:11.456251 sshd-session[7001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:11.459926 systemd-logind[1815]: New session 13 of user core. Mar 25 02:38:11.475862 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 02:38:11.572439 sshd[7003]: Connection closed by 139.178.68.195 port 56450 Mar 25 02:38:11.572653 sshd-session[7001]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:11.574290 systemd[1]: sshd@26-86.109.11.215:22-139.178.68.195:56450.service: Deactivated successfully. Mar 25 02:38:11.575276 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 02:38:11.576044 systemd-logind[1815]: Session 13 logged out. Waiting for processes to exit. Mar 25 02:38:11.576654 systemd-logind[1815]: Removed session 13. Mar 25 02:38:16.595866 systemd[1]: Started sshd@27-86.109.11.215:22-139.178.68.195:58220.service - OpenSSH per-connection server daemon (139.178.68.195:58220). Mar 25 02:38:16.637901 sshd[7030]: Accepted publickey for core from 139.178.68.195 port 58220 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:16.638579 sshd-session[7030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:16.641276 systemd-logind[1815]: New session 14 of user core. Mar 25 02:38:16.663973 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 02:38:16.819168 sshd[7032]: Connection closed by 139.178.68.195 port 58220 Mar 25 02:38:16.819382 sshd-session[7030]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:16.842979 systemd[1]: sshd@27-86.109.11.215:22-139.178.68.195:58220.service: Deactivated successfully. Mar 25 02:38:16.844570 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 02:38:16.845888 systemd-logind[1815]: Session 14 logged out. Waiting for processes to exit. Mar 25 02:38:16.847185 systemd[1]: Started sshd@28-86.109.11.215:22-139.178.68.195:58230.service - OpenSSH per-connection server daemon (139.178.68.195:58230). Mar 25 02:38:16.848216 systemd-logind[1815]: Removed session 14. Mar 25 02:38:16.904162 sshd[7057]: Accepted publickey for core from 139.178.68.195 port 58230 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:16.907467 sshd-session[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:16.920376 systemd-logind[1815]: New session 15 of user core. Mar 25 02:38:16.942769 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 02:38:17.092357 sshd[7061]: Connection closed by 139.178.68.195 port 58230 Mar 25 02:38:17.092526 sshd-session[7057]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:17.104614 systemd[1]: sshd@28-86.109.11.215:22-139.178.68.195:58230.service: Deactivated successfully. Mar 25 02:38:17.105531 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 02:38:17.106170 systemd-logind[1815]: Session 15 logged out. Waiting for processes to exit. 
Mar 25 02:38:17.106807 systemd[1]: Started sshd@29-86.109.11.215:22-139.178.68.195:58238.service - OpenSSH per-connection server daemon (139.178.68.195:58238). Mar 25 02:38:17.107173 systemd-logind[1815]: Removed session 15. Mar 25 02:38:17.139789 sshd[7084]: Accepted publickey for core from 139.178.68.195 port 58238 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:17.140519 sshd-session[7084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:17.143607 systemd-logind[1815]: New session 16 of user core. Mar 25 02:38:17.153522 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 02:38:17.282572 sshd[7087]: Connection closed by 139.178.68.195 port 58238 Mar 25 02:38:17.282744 sshd-session[7084]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:17.284324 systemd[1]: sshd@29-86.109.11.215:22-139.178.68.195:58238.service: Deactivated successfully. Mar 25 02:38:17.285269 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 02:38:17.285989 systemd-logind[1815]: Session 16 logged out. Waiting for processes to exit. Mar 25 02:38:17.286568 systemd-logind[1815]: Removed session 16. Mar 25 02:38:22.293965 systemd[1]: Started sshd@30-86.109.11.215:22-139.178.68.195:58244.service - OpenSSH per-connection server daemon (139.178.68.195:58244). Mar 25 02:38:22.328977 sshd[7119]: Accepted publickey for core from 139.178.68.195 port 58244 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:22.329738 sshd-session[7119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:22.332979 systemd-logind[1815]: New session 17 of user core. Mar 25 02:38:22.347562 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 02:38:22.443094 sshd[7121]: Connection closed by 139.178.68.195 port 58244 Mar 25 02:38:22.443259 sshd-session[7119]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:22.444882 systemd[1]: sshd@30-86.109.11.215:22-139.178.68.195:58244.service: Deactivated successfully. Mar 25 02:38:22.445821 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 02:38:22.446474 systemd-logind[1815]: Session 17 logged out. Waiting for processes to exit. Mar 25 02:38:22.447154 systemd-logind[1815]: Removed session 17. Mar 25 02:38:24.199873 containerd[1835]: time="2025-03-25T02:38:24.199834052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"0b72cd696201d03db0aaddb5e6e4069ffa5ff4aaddce66db0f256a8189af0fde\" pid:7157 exited_at:{seconds:1742870304 nanos:199627913}" Mar 25 02:38:24.687833 containerd[1835]: time="2025-03-25T02:38:24.687745622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"3227d199ee89df67d0dae53928687322ab005d7d49bb7430289f0b4aa6db03bc\" pid:7179 exited_at:{seconds:1742870304 nanos:687515979}" Mar 25 02:38:27.475234 systemd[1]: Started sshd@31-86.109.11.215:22-139.178.68.195:50524.service - OpenSSH per-connection server daemon (139.178.68.195:50524). Mar 25 02:38:27.561578 sshd[7197]: Accepted publickey for core from 139.178.68.195 port 50524 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:27.562755 sshd-session[7197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:27.567327 systemd-logind[1815]: New session 18 of user core. 
Mar 25 02:38:27.585686 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 02:38:27.744312 sshd[7199]: Connection closed by 139.178.68.195 port 50524 Mar 25 02:38:27.744498 sshd-session[7197]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:27.746073 systemd[1]: sshd@31-86.109.11.215:22-139.178.68.195:50524.service: Deactivated successfully. Mar 25 02:38:27.747061 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 02:38:27.747807 systemd-logind[1815]: Session 18 logged out. Waiting for processes to exit. Mar 25 02:38:27.748325 systemd-logind[1815]: Removed session 18. Mar 25 02:38:28.140526 systemd[1]: Started sshd@32-86.109.11.215:22-218.92.0.228:62016.service - OpenSSH per-connection server daemon (218.92.0.228:62016). Mar 25 02:38:28.704451 systemd[1]: Started sshd@33-86.109.11.215:22-186.118.142.216:56660.service - OpenSSH per-connection server daemon (186.118.142.216:56660). Mar 25 02:38:29.257324 sshd-session[7230]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root Mar 25 02:38:29.453821 sshd[7228]: Invalid user postgres from 186.118.142.216 port 56660 Mar 25 02:38:29.579822 sshd[7228]: Received disconnect from 186.118.142.216 port 56660:11: Bye Bye [preauth] Mar 25 02:38:29.579822 sshd[7228]: Disconnected from invalid user postgres 186.118.142.216 port 56660 [preauth] Mar 25 02:38:29.583127 systemd[1]: sshd@33-86.109.11.215:22-186.118.142.216:56660.service: Deactivated successfully. Mar 25 02:38:30.214293 sshd[6999]: Connection closed by authenticating user root 185.93.89.118 port 7816 [preauth] Mar 25 02:38:30.217930 systemd[1]: sshd@25-86.109.11.215:22-185.93.89.118:7816.service: Deactivated successfully. Mar 25 02:38:30.349768 sshd[7225]: PAM: Permission denied for root from 218.92.0.228 Mar 25 02:38:30.380903 systemd[1]: Started sshd@34-86.109.11.215:22-185.93.89.118:29610.service - OpenSSH per-connection server daemon (185.93.89.118:29610). Mar 25 02:38:30.640996 sshd-session[7238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root Mar 25 02:38:32.338513 systemd[1]: Started sshd@35-86.109.11.215:22-218.92.0.221:21382.service - OpenSSH per-connection server daemon (218.92.0.221:21382). Mar 25 02:38:32.536630 sshd[7225]: PAM: Permission denied for root from 218.92.0.228 Mar 25 02:38:32.767508 systemd[1]: Started sshd@36-86.109.11.215:22-139.178.68.195:50536.service - OpenSSH per-connection server daemon (139.178.68.195:50536). Mar 25 02:38:32.813973 sshd[7244]: Accepted publickey for core from 139.178.68.195 port 50536 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:32.814709 sshd-session[7244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:32.817926 systemd-logind[1815]: New session 19 of user core. Mar 25 02:38:32.827477 sshd-session[7242]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root Mar 25 02:38:32.837612 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 02:38:32.927782 sshd[7246]: Connection closed by 139.178.68.195 port 50536 Mar 25 02:38:32.927968 sshd-session[7244]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:32.929818 systemd[1]: sshd@36-86.109.11.215:22-139.178.68.195:50536.service: Deactivated successfully. Mar 25 02:38:32.930898 systemd[1]: session-19.scope: Deactivated successfully. 
Mar 25 02:38:32.931697 systemd-logind[1815]: Session 19 logged out. Waiting for processes to exit. Mar 25 02:38:32.932316 systemd-logind[1815]: Removed session 19. Mar 25 02:38:33.803565 sshd-session[7270]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root Mar 25 02:38:34.998132 sshd[7225]: PAM: Permission denied for root from 218.92.0.228 Mar 25 02:38:35.143113 sshd[7225]: Received disconnect from 218.92.0.228 port 62016:11: [preauth] Mar 25 02:38:35.143113 sshd[7225]: Disconnected from authenticating user root 218.92.0.228 port 62016 [preauth] Mar 25 02:38:35.146764 systemd[1]: sshd@32-86.109.11.215:22-218.92.0.228:62016.service: Deactivated successfully. Mar 25 02:38:35.352455 systemd[1]: Started sshd@37-86.109.11.215:22-218.92.0.228:44178.service - OpenSSH per-connection server daemon (218.92.0.228:44178). Mar 25 02:38:36.109081 sshd[7240]: PAM: Permission denied for root from 218.92.0.221 Mar 25 02:38:36.399941 sshd-session[7277]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root Mar 25 02:38:36.567544 sshd-session[7278]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root Mar 25 02:38:37.944641 systemd[1]: Started sshd@38-86.109.11.215:22-139.178.68.195:55356.service - OpenSSH per-connection server daemon (139.178.68.195:55356). Mar 25 02:38:37.997695 sshd[7281]: Accepted publickey for core from 139.178.68.195 port 55356 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:37.998352 sshd-session[7281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:38.001117 systemd-logind[1815]: New session 20 of user core. Mar 25 02:38:38.017869 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 02:38:38.113923 sshd[7283]: Connection closed by 139.178.68.195 port 55356 Mar 25 02:38:38.114088 sshd-session[7281]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:38.119769 sshd[7240]: PAM: Permission denied for root from 218.92.0.221 Mar 25 02:38:38.144146 systemd[1]: sshd@38-86.109.11.215:22-139.178.68.195:55356.service: Deactivated successfully. Mar 25 02:38:38.148423 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 02:38:38.151983 systemd-logind[1815]: Session 20 logged out. Waiting for processes to exit. Mar 25 02:38:38.155383 systemd[1]: Started sshd@39-86.109.11.215:22-139.178.68.195:55370.service - OpenSSH per-connection server daemon (139.178.68.195:55370). Mar 25 02:38:38.158195 systemd-logind[1815]: Removed session 20. Mar 25 02:38:38.240959 sshd[7307]: Accepted publickey for core from 139.178.68.195 port 55370 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:38.241973 sshd-session[7307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:38.246033 systemd-logind[1815]: New session 21 of user core. Mar 25 02:38:38.256586 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 25 02:38:38.287508 sshd[7275]: PAM: Permission denied for root from 218.92.0.228 Mar 25 02:38:38.408679 sshd-session[7311]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root Mar 25 02:38:38.526046 sshd[7310]: Connection closed by 139.178.68.195 port 55370 Mar 25 02:38:38.526832 sshd-session[7307]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:38.558249 systemd[1]: sshd@39-86.109.11.215:22-139.178.68.195:55370.service: Deactivated successfully. Mar 25 02:38:38.562481 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 02:38:38.564783 systemd-logind[1815]: Session 21 logged out. Waiting for processes to exit. Mar 25 02:38:38.569370 systemd[1]: Started sshd@40-86.109.11.215:22-139.178.68.195:55372.service - OpenSSH per-connection server daemon (139.178.68.195:55372). Mar 25 02:38:38.572016 systemd-logind[1815]: Removed session 21. Mar 25 02:38:38.616067 sshd-session[7329]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root Mar 25 02:38:38.657897 sshd[7333]: Accepted publickey for core from 139.178.68.195 port 55372 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:38.661085 sshd-session[7333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:38.674032 systemd-logind[1815]: New session 22 of user core. Mar 25 02:38:38.693815 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 25 02:38:40.009532 sshd[7336]: Connection closed by 139.178.68.195 port 55372 Mar 25 02:38:40.009724 sshd-session[7333]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:40.021935 systemd[1]: sshd@40-86.109.11.215:22-139.178.68.195:55372.service: Deactivated successfully. Mar 25 02:38:40.022978 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 02:38:40.023111 systemd[1]: session-22.scope: Consumed 527ms CPU time, 69.1M memory peak. Mar 25 02:38:40.023818 systemd-logind[1815]: Session 22 logged out. Waiting for processes to exit. Mar 25 02:38:40.024552 systemd[1]: Started sshd@41-86.109.11.215:22-139.178.68.195:55374.service - OpenSSH per-connection server daemon (139.178.68.195:55374). Mar 25 02:38:40.025112 systemd-logind[1815]: Removed session 22. Mar 25 02:38:40.061277 sshd[7367]: Accepted publickey for core from 139.178.68.195 port 55374 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw Mar 25 02:38:40.062176 sshd-session[7367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:38:40.065722 systemd-logind[1815]: New session 23 of user core. Mar 25 02:38:40.083681 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 25 02:38:40.263752 sshd[7373]: Connection closed by 139.178.68.195 port 55374 Mar 25 02:38:40.263945 sshd-session[7367]: pam_unix(sshd:session): session closed for user core Mar 25 02:38:40.279607 systemd[1]: sshd@41-86.109.11.215:22-139.178.68.195:55374.service: Deactivated successfully. Mar 25 02:38:40.280525 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 02:38:40.281352 systemd-logind[1815]: Session 23 logged out. Waiting for processes to exit. Mar 25 02:38:40.282209 systemd[1]: Started sshd@42-86.109.11.215:22-139.178.68.195:55384.service - OpenSSH per-connection server daemon (139.178.68.195:55384). Mar 25 02:38:40.282813 systemd-logind[1815]: Removed session 23. 
Mar 25 02:38:40.349013 sshd[7395]: Accepted publickey for core from 139.178.68.195 port 55384 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw
Mar 25 02:38:40.350376 sshd-session[7395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:40.355431 systemd-logind[1815]: New session 24 of user core.
Mar 25 02:38:40.365668 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 02:38:40.403693 sshd[7240]: PAM: Permission denied for root from 218.92.0.221
Mar 25 02:38:40.503333 sshd[7399]: Connection closed by 139.178.68.195 port 55384
Mar 25 02:38:40.503545 sshd-session[7395]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:40.505245 systemd[1]: sshd@42-86.109.11.215:22-139.178.68.195:55384.service: Deactivated successfully.
Mar 25 02:38:40.506230 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 02:38:40.506968 systemd-logind[1815]: Session 24 logged out. Waiting for processes to exit.
Mar 25 02:38:40.507571 systemd-logind[1815]: Removed session 24.
Mar 25 02:38:40.548001 sshd[7240]: Received disconnect from 218.92.0.221 port 21382:11: [preauth]
Mar 25 02:38:40.548001 sshd[7240]: Disconnected from authenticating user root 218.92.0.221 port 21382 [preauth]
Mar 25 02:38:40.549085 systemd[1]: sshd@35-86.109.11.215:22-218.92.0.221:21382.service: Deactivated successfully.
Mar 25 02:38:40.611738 sshd[7275]: PAM: Permission denied for root from 218.92.0.228
Mar 25 02:38:40.725292 systemd[1]: Started sshd@43-86.109.11.215:22-218.92.0.221:32264.service - OpenSSH per-connection server daemon (218.92.0.221:32264).
Mar 25 02:38:40.940512 sshd-session[7428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root
Mar 25 02:38:42.209115 sshd-session[7429]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Mar 25 02:38:42.876441 sshd[7275]: PAM: Permission denied for root from 218.92.0.228
Mar 25 02:38:43.040846 sshd[7275]: Received disconnect from 218.92.0.228 port 44178:11: [preauth]
Mar 25 02:38:43.040846 sshd[7275]: Disconnected from authenticating user root 218.92.0.228 port 44178 [preauth]
Mar 25 02:38:43.044308 systemd[1]: sshd@37-86.109.11.215:22-218.92.0.228:44178.service: Deactivated successfully.
Mar 25 02:38:43.227991 systemd[1]: Started sshd@44-86.109.11.215:22-218.92.0.228:57656.service - OpenSSH per-connection server daemon (218.92.0.228:57656).
Mar 25 02:38:44.084899 sshd[7426]: PAM: Permission denied for root from 218.92.0.221
Mar 25 02:38:44.377291 sshd-session[7438]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Mar 25 02:38:44.453547 sshd-session[7439]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root
Mar 25 02:38:45.533345 systemd[1]: Started sshd@45-86.109.11.215:22-139.178.68.195:47234.service - OpenSSH per-connection server daemon (139.178.68.195:47234).
Mar 25 02:38:45.587195 sshd[7442]: Accepted publickey for core from 139.178.68.195 port 47234 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw
Mar 25 02:38:45.588081 sshd-session[7442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:45.591763 systemd-logind[1815]: New session 25 of user core.
Mar 25 02:38:45.610700 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 02:38:45.702989 sshd[7444]: Connection closed by 139.178.68.195 port 47234
Mar 25 02:38:45.703132 sshd-session[7442]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:45.704738 systemd[1]: sshd@45-86.109.11.215:22-139.178.68.195:47234.service: Deactivated successfully.
Mar 25 02:38:45.705702 systemd[1]: session-25.scope: Deactivated successfully.
Mar 25 02:38:45.706386 systemd-logind[1815]: Session 25 logged out. Waiting for processes to exit.
Mar 25 02:38:45.707077 systemd-logind[1815]: Removed session 25.
Mar 25 02:38:46.528863 sshd[7426]: PAM: Permission denied for root from 218.92.0.221
Mar 25 02:38:46.605288 sshd[7433]: PAM: Permission denied for root from 218.92.0.228
Mar 25 02:38:46.821632 sshd-session[7469]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Mar 25 02:38:46.939727 sshd-session[7470]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root
Mar 25 02:38:49.248830 sshd[7426]: PAM: Permission denied for root from 218.92.0.221
Mar 25 02:38:49.366782 sshd[7433]: PAM: Permission denied for root from 218.92.0.228
Mar 25 02:38:49.394457 sshd[7426]: Received disconnect from 218.92.0.221 port 32264:11: [preauth]
Mar 25 02:38:49.394457 sshd[7426]: Disconnected from authenticating user root 218.92.0.221 port 32264 [preauth]
Mar 25 02:38:49.398148 systemd[1]: sshd@43-86.109.11.215:22-218.92.0.221:32264.service: Deactivated successfully.
Mar 25 02:38:49.594660 systemd[1]: Started sshd@46-86.109.11.215:22-218.92.0.221:15070.service - OpenSSH per-connection server daemon (218.92.0.221:15070).
Mar 25 02:38:49.699856 sshd-session[7474]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.228 user=root
Mar 25 02:38:49.816928 sshd[7237]: Connection closed by authenticating user root 185.93.89.118 port 29610 [preauth]
Mar 25 02:38:49.820177 systemd[1]: sshd@34-86.109.11.215:22-185.93.89.118:29610.service: Deactivated successfully.
Mar 25 02:38:49.984159 systemd[1]: Started sshd@47-86.109.11.215:22-185.93.89.118:27578.service - OpenSSH per-connection server daemon (185.93.89.118:27578).
Mar 25 02:38:50.721193 systemd[1]: Started sshd@48-86.109.11.215:22-139.178.68.195:47246.service - OpenSSH per-connection server daemon (139.178.68.195:47246).
Mar 25 02:38:50.754347 sshd[7483]: Accepted publickey for core from 139.178.68.195 port 47246 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw
Mar 25 02:38:50.755160 sshd-session[7483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:50.758641 systemd-logind[1815]: New session 26 of user core.
Mar 25 02:38:50.773656 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 02:38:50.860668 sshd[7485]: Connection closed by 139.178.68.195 port 47246
Mar 25 02:38:50.860878 sshd-session[7483]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:50.862448 systemd[1]: sshd@48-86.109.11.215:22-139.178.68.195:47246.service: Deactivated successfully.
Mar 25 02:38:50.863428 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 02:38:50.864155 systemd-logind[1815]: Session 26 logged out. Waiting for processes to exit.
Mar 25 02:38:50.864803 systemd-logind[1815]: Removed session 26.
Mar 25 02:38:51.204265 sshd[7433]: PAM: Permission denied for root from 218.92.0.228
Mar 25 02:38:51.238499 sshd-session[7510]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Mar 25 02:38:51.368748 sshd[7433]: Received disconnect from 218.92.0.228 port 57656:11: [preauth]
Mar 25 02:38:51.368748 sshd[7433]: Disconnected from authenticating user root 218.92.0.228 port 57656 [preauth]
Mar 25 02:38:51.372308 systemd[1]: sshd@44-86.109.11.215:22-218.92.0.228:57656.service: Deactivated successfully.
Mar 25 02:38:53.017816 sshd[7476]: PAM: Permission denied for root from 218.92.0.221
Mar 25 02:38:53.349307 sshd-session[7513]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Mar 25 02:38:54.253929 containerd[1835]: time="2025-03-25T02:38:54.253897418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d1de52e6286727889cd52b32e2500cc7b7124937e22e4680a72b1c67d58414\" id:\"1d81da465845cbd24d8ad6bb93e77abf3f88121226d67207e7acdc8acb550d9c\" pid:7525 exited_at:{seconds:1742870334 nanos:253723085}"
Mar 25 02:38:54.676924 containerd[1835]: time="2025-03-25T02:38:54.676886827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"800c48a27773d66aac8507e9615ed8b75e37fb8146da14d15b1624906f6a526c\" id:\"2fcceb1b9b3a1bead1fd6128cf367b800bd655874c2ba6b280e01dd00b476829\" pid:7548 exited_at:{seconds:1742870334 nanos:676679457}"
Mar 25 02:38:55.068515 sshd[7476]: PAM: Permission denied for root from 218.92.0.221
Mar 25 02:38:55.401001 sshd-session[7567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.221 user=root
Mar 25 02:38:55.882174 systemd[1]: Started sshd@49-86.109.11.215:22-139.178.68.195:34366.service - OpenSSH per-connection server daemon (139.178.68.195:34366).
Mar 25 02:38:55.927199 sshd[7569]: Accepted publickey for core from 139.178.68.195 port 34366 ssh2: RSA SHA256:uTfG5sovPUqPrARqt2owfRXVYFyJtX+vlYONPCrvLPw
Mar 25 02:38:55.928032 sshd-session[7569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:55.931365 systemd-logind[1815]: New session 27 of user core.
Mar 25 02:38:55.948624 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 25 02:38:56.035571 sshd[7572]: Connection closed by 139.178.68.195 port 34366
Mar 25 02:38:56.035916 sshd-session[7569]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:56.037581 systemd[1]: sshd@49-86.109.11.215:22-139.178.68.195:34366.service: Deactivated successfully.
Mar 25 02:38:56.038584 systemd[1]: session-27.scope: Deactivated successfully.
Mar 25 02:38:56.039277 systemd-logind[1815]: Session 27 logged out. Waiting for processes to exit.
Mar 25 02:38:56.040014 systemd-logind[1815]: Removed session 27.