Apr 30 03:50:15.012666 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:50:15.012681 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:50:15.012688 kernel: BIOS-provided physical RAM map:
Apr 30 03:50:15.012693 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Apr 30 03:50:15.012696 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Apr 30 03:50:15.012700 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Apr 30 03:50:15.012705 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Apr 30 03:50:15.012709 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Apr 30 03:50:15.012713 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
Apr 30 03:50:15.012717 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
Apr 30 03:50:15.012721 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
Apr 30 03:50:15.012726 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
Apr 30 03:50:15.012730 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Apr 30 03:50:15.012735 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Apr 30 03:50:15.012740 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Apr 30 03:50:15.012745 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Apr 30 03:50:15.012750 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Apr 30 03:50:15.012755 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Apr 30 03:50:15.012759 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 03:50:15.012764 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Apr 30 03:50:15.012768 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Apr 30 03:50:15.012773 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Apr 30 03:50:15.012777 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Apr 30 03:50:15.012782 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Apr 30 03:50:15.012786 kernel: NX (Execute Disable) protection: active
Apr 30 03:50:15.012791 kernel: APIC: Static calls initialized
Apr 30 03:50:15.012796 kernel: SMBIOS 3.2.1 present.
Apr 30 03:50:15.012800 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
Apr 30 03:50:15.012806 kernel: tsc: Detected 3400.000 MHz processor
Apr 30 03:50:15.012810 kernel: tsc: Detected 3399.906 MHz TSC
Apr 30 03:50:15.012815 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:50:15.012820 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:50:15.012825 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Apr 30 03:50:15.012830 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Apr 30 03:50:15.012835 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:50:15.012839 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Apr 30 03:50:15.012844 kernel: Using GB pages for direct mapping
Apr 30 03:50:15.012849 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:50:15.012854 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Apr 30 03:50:15.012859 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Apr 30 03:50:15.012866 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Apr 30 03:50:15.012871 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Apr 30 03:50:15.012876 kernel: ACPI: FACS 0x000000008C66DF80 000040
Apr 30 03:50:15.012881 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Apr 30 03:50:15.012887 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Apr 30 03:50:15.012892 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Apr 30 03:50:15.012897 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Apr 30 03:50:15.012902 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Apr 30 03:50:15.012907 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Apr 30 03:50:15.012912 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Apr 30 03:50:15.012917 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Apr 30 03:50:15.012923 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012928 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Apr 30 03:50:15.012933 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Apr 30 03:50:15.012938 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012943 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012948 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Apr 30 03:50:15.012953 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Apr 30 03:50:15.012958 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012963 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012969 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Apr 30 03:50:15.012974 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Apr 30 03:50:15.012978 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Apr 30 03:50:15.012984 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Apr 30 03:50:15.012989 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Apr 30 03:50:15.012994 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Apr 30 03:50:15.012998 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Apr 30 03:50:15.013003 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Apr 30 03:50:15.013009 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Apr 30 03:50:15.013014 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Apr 30 03:50:15.013019 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Apr 30 03:50:15.013024 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Apr 30 03:50:15.013029 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Apr 30 03:50:15.013034 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Apr 30 03:50:15.013039 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Apr 30 03:50:15.013044 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Apr 30 03:50:15.013049 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Apr 30 03:50:15.013055 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Apr 30 03:50:15.013060 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Apr 30 03:50:15.013065 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Apr 30 03:50:15.013070 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Apr 30 03:50:15.013075 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Apr 30 03:50:15.013080 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Apr 30 03:50:15.013085 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Apr 30 03:50:15.013090 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Apr 30 03:50:15.013095 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Apr 30 03:50:15.013100 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Apr 30 03:50:15.013105 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Apr 30 03:50:15.013110 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Apr 30 03:50:15.013115 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Apr 30 03:50:15.013120 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Apr 30 03:50:15.013125 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Apr 30 03:50:15.013130 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Apr 30 03:50:15.013135 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Apr 30 03:50:15.013140 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Apr 30 03:50:15.013145 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Apr 30 03:50:15.013150 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Apr 30 03:50:15.013155 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Apr 30 03:50:15.013160 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Apr 30 03:50:15.013165 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Apr 30 03:50:15.013170 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Apr 30 03:50:15.013175 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Apr 30 03:50:15.013180 kernel: No NUMA configuration found
Apr 30 03:50:15.013185 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Apr 30 03:50:15.013190 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Apr 30 03:50:15.013196 kernel: Zone ranges:
Apr 30 03:50:15.013201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:50:15.013206 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 03:50:15.013211 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Apr 30 03:50:15.013216 kernel: Movable zone start for each node
Apr 30 03:50:15.013221 kernel: Early memory node ranges
Apr 30 03:50:15.013226 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Apr 30 03:50:15.013231 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Apr 30 03:50:15.013236 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
Apr 30 03:50:15.013242 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
Apr 30 03:50:15.013247 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Apr 30 03:50:15.013252 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Apr 30 03:50:15.013257 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Apr 30 03:50:15.013266 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Apr 30 03:50:15.013272 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:50:15.013277 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Apr 30 03:50:15.013282 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 30 03:50:15.013289 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Apr 30 03:50:15.013294 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Apr 30 03:50:15.013299 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Apr 30 03:50:15.013305 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Apr 30 03:50:15.013310 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Apr 30 03:50:15.013318 kernel: ACPI: PM-Timer IO Port: 0x1808
Apr 30 03:50:15.013324 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Apr 30 03:50:15.013329 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Apr 30 03:50:15.013335 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Apr 30 03:50:15.013341 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Apr 30 03:50:15.013346 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Apr 30 03:50:15.013352 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Apr 30 03:50:15.013357 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Apr 30 03:50:15.013362 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Apr 30 03:50:15.013368 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Apr 30 03:50:15.013373 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Apr 30 03:50:15.013378 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Apr 30 03:50:15.013383 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Apr 30 03:50:15.013390 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Apr 30 03:50:15.013395 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Apr 30 03:50:15.013400 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Apr 30 03:50:15.013406 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Apr 30 03:50:15.013411 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Apr 30 03:50:15.013416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:50:15.013422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:50:15.013427 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:50:15.013432 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:50:15.013438 kernel: TSC deadline timer available
Apr 30 03:50:15.013444 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Apr 30 03:50:15.013449 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Apr 30 03:50:15.013455 kernel: Booting paravirtualized kernel on bare hardware
Apr 30 03:50:15.013460 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:50:15.013466 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Apr 30 03:50:15.013471 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Apr 30 03:50:15.013477 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Apr 30 03:50:15.013482 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Apr 30 03:50:15.013489 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:50:15.013494 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:50:15.013500 kernel: random: crng init done
Apr 30 03:50:15.013505 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Apr 30 03:50:15.013510 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Apr 30 03:50:15.013516 kernel: Fallback order for Node 0: 0
Apr 30 03:50:15.013521 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Apr 30 03:50:15.013526 kernel: Policy zone: Normal
Apr 30 03:50:15.013533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:50:15.013538 kernel: software IO TLB: area num 16.
Apr 30 03:50:15.013543 kernel: Memory: 32720312K/33452984K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 732412K reserved, 0K cma-reserved)
Apr 30 03:50:15.013549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Apr 30 03:50:15.013554 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:50:15.013560 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:50:15.013565 kernel: Dynamic Preempt: voluntary
Apr 30 03:50:15.013571 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:50:15.013576 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:50:15.013583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Apr 30 03:50:15.013588 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:50:15.013594 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:50:15.013599 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:50:15.013604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:50:15.013610 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Apr 30 03:50:15.013615 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Apr 30 03:50:15.013620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:50:15.013626 kernel: Console: colour dummy device 80x25
Apr 30 03:50:15.013632 kernel: printk: console [tty0] enabled
Apr 30 03:50:15.013637 kernel: printk: console [ttyS1] enabled
Apr 30 03:50:15.013643 kernel: ACPI: Core revision 20230628
Apr 30 03:50:15.013648 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Apr 30 03:50:15.013653 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:50:15.013659 kernel: DMAR: Host address width 39
Apr 30 03:50:15.013664 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Apr 30 03:50:15.013670 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Apr 30 03:50:15.013675 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Apr 30 03:50:15.013680 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Apr 30 03:50:15.013686 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Apr 30 03:50:15.013692 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Apr 30 03:50:15.013697 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Apr 30 03:50:15.013702 kernel: x2apic enabled
Apr 30 03:50:15.013708 kernel: APIC: Switched APIC routing to: cluster x2apic
Apr 30 03:50:15.013713 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Apr 30 03:50:15.013719 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Apr 30 03:50:15.013724 kernel: CPU0: Thermal monitoring enabled (TM1)
Apr 30 03:50:15.013731 kernel: process: using mwait in idle threads
Apr 30 03:50:15.013736 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:50:15.013741 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:50:15.013746 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:50:15.013752 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 30 03:50:15.013757 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 30 03:50:15.013762 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 30 03:50:15.013767 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:50:15.013773 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Apr 30 03:50:15.013778 kernel: RETBleed: Mitigation: Enhanced IBRS
Apr 30 03:50:15.013783 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:50:15.013789 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:50:15.013795 kernel: TAA: Mitigation: TSX disabled
Apr 30 03:50:15.013800 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Apr 30 03:50:15.013805 kernel: SRBDS: Mitigation: Microcode
Apr 30 03:50:15.013811 kernel: GDS: Mitigation: Microcode
Apr 30 03:50:15.013816 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:50:15.013821 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:50:15.013826 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:50:15.013831 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 03:50:15.013837 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 03:50:15.013842 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:50:15.013848 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 03:50:15.013853 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 03:50:15.013859 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Apr 30 03:50:15.013864 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:50:15.013869 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:50:15.013875 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:50:15.013880 kernel: landlock: Up and running.
Apr 30 03:50:15.013885 kernel: SELinux: Initializing.
Apr 30 03:50:15.013890 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:50:15.013896 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:50:15.013901 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Apr 30 03:50:15.013906 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 03:50:15.013913 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 03:50:15.013918 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 03:50:15.013924 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Apr 30 03:50:15.013929 kernel: ... version: 4
Apr 30 03:50:15.013934 kernel: ... bit width: 48
Apr 30 03:50:15.013939 kernel: ... generic registers: 4
Apr 30 03:50:15.013945 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:50:15.013950 kernel: ... max period: 00007fffffffffff
Apr 30 03:50:15.013955 kernel: ... fixed-purpose events: 3
Apr 30 03:50:15.013962 kernel: ... event mask: 000000070000000f
Apr 30 03:50:15.013967 kernel: signal: max sigframe size: 2032
Apr 30 03:50:15.013972 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Apr 30 03:50:15.013978 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:50:15.013983 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:50:15.013988 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Apr 30 03:50:15.013994 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:50:15.013999 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:50:15.014004 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Apr 30 03:50:15.014011 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:50:15.014016 kernel: smp: Brought up 1 node, 16 CPUs
Apr 30 03:50:15.014021 kernel: smpboot: Max logical packages: 1
Apr 30 03:50:15.014027 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Apr 30 03:50:15.014032 kernel: devtmpfs: initialized
Apr 30 03:50:15.014038 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:50:15.014043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes)
Apr 30 03:50:15.014048 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Apr 30 03:50:15.014055 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:50:15.014060 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Apr 30 03:50:15.014065 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:50:15.014071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:50:15.014076 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:50:15.014081 kernel: audit: type=2000 audit(1745985009.038:1): state=initialized audit_enabled=0 res=1
Apr 30 03:50:15.014086 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:50:15.014092 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:50:15.014097 kernel: cpuidle: using governor menu
Apr 30 03:50:15.014103 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:50:15.014108 kernel: dca service started, version 1.12.1
Apr 30 03:50:15.014114 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 30 03:50:15.014119 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:50:15.014124 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Apr 30 03:50:15.014130 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:50:15.014135 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:50:15.014140 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:50:15.014146 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:50:15.014152 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:50:15.014157 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:50:15.014162 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:50:15.014168 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:50:15.014173 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:50:15.014178 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Apr 30 03:50:15.014184 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 03:50:15.014189 kernel: ACPI: SSDT 0xFFFF889540E3F400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Apr 30 03:50:15.014194 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 03:50:15.014200 kernel: ACPI: SSDT 0xFFFF889541E0B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Apr 30 03:50:15.014206 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 03:50:15.014211 kernel: ACPI: SSDT 0xFFFF889540DE4000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Apr 30 03:50:15.014216 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 03:50:15.014221 kernel: ACPI: SSDT 0xFFFF889541E0D000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Apr 30 03:50:15.014227 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 03:50:15.014232 kernel: ACPI: SSDT 0xFFFF889540E53000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Apr 30 03:50:15.014237 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 03:50:15.014243 kernel: ACPI: SSDT 0xFFFF889540E3B000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Apr 30 03:50:15.014248 kernel: ACPI: _OSC evaluated successfully for all CPUs
Apr 30 03:50:15.014254 kernel: ACPI: Interpreter enabled
Apr 30 03:50:15.014259 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:50:15.014265 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:50:15.014270 kernel: HEST: Enabling Firmware First mode for corrected errors.
Apr 30 03:50:15.014275 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Apr 30 03:50:15.014280 kernel: HEST: Table parsing has been initialized.
Apr 30 03:50:15.014286 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Apr 30 03:50:15.014291 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:50:15.014296 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 03:50:15.014303 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Apr 30 03:50:15.014308 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Apr 30 03:50:15.014314 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Apr 30 03:50:15.014339 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Apr 30 03:50:15.014345 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Apr 30 03:50:15.014364 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Apr 30 03:50:15.014369 kernel: ACPI: \_TZ_.FN00: New power resource
Apr 30 03:50:15.014375 kernel: ACPI: \_TZ_.FN01: New power resource
Apr 30 03:50:15.014380 kernel: ACPI: \_TZ_.FN02: New power resource
Apr 30 03:50:15.014386 kernel: ACPI: \_TZ_.FN03: New power resource
Apr 30 03:50:15.014392 kernel: ACPI: \_TZ_.FN04: New power resource
Apr 30 03:50:15.014397 kernel: ACPI: \PIN_: New power resource
Apr 30 03:50:15.014402 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Apr 30 03:50:15.014475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:50:15.014529 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Apr 30 03:50:15.014575 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Apr 30 03:50:15.014585 kernel: PCI host bridge to bus 0000:00
Apr 30 03:50:15.014635 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:50:15.014678 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:50:15.014721 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:50:15.014762 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Apr 30 03:50:15.014804 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Apr 30 03:50:15.014845 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Apr 30 03:50:15.014906 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Apr 30 03:50:15.014962 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Apr 30 03:50:15.015011 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Apr 30 03:50:15.015063 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Apr 30 03:50:15.015110 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Apr 30 03:50:15.015161 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Apr 30 03:50:15.015211 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Apr 30 03:50:15.015264 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Apr 30 03:50:15.015310 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Apr 30 03:50:15.015361 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Apr 30 03:50:15.015413 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Apr 30 03:50:15.015461 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Apr 30 03:50:15.015511 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Apr 30 03:50:15.015562 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Apr 30 03:50:15.015608 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Apr 30 03:50:15.015662 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Apr 30 03:50:15.015709 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Apr 30 03:50:15.015761 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Apr 30 03:50:15.015811 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Apr 30 03:50:15.015859 kernel: pci 0000:00:16.0: PME# supported from D3hot
Apr 30 03:50:15.015910 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Apr 30 03:50:15.015964 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Apr 30 03:50:15.016014 kernel: pci 0000:00:16.1: PME# supported from D3hot
Apr 30 03:50:15.016064 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Apr 30 03:50:15.016112 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Apr 30 03:50:15.016161 kernel: pci 0000:00:16.4: PME# supported from D3hot
Apr 30 03:50:15.016212 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Apr 30 03:50:15.016259 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Apr 30 03:50:15.016306 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Apr 30 03:50:15.016391 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Apr 30 03:50:15.016440 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Apr 30 03:50:15.016486 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Apr 30 03:50:15.016536 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Apr 30 03:50:15.016583 kernel: pci 0000:00:17.0: PME# supported from D3hot
Apr 30 03:50:15.016638 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Apr 30 03:50:15.016686 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Apr 30 03:50:15.016741 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Apr 30 03:50:15.016793 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Apr 30 03:50:15.016846 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Apr 30 03:50:15.016895 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Apr 30 03:50:15.016947 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Apr 30 03:50:15.016996 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Apr 30 03:50:15.017050 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Apr 30 03:50:15.017099 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Apr 30 03:50:15.017149 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Apr 30 03:50:15.017197 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Apr 30 03:50:15.017247 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Apr 30 03:50:15.017299 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Apr 30 03:50:15.017374 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Apr 30 03:50:15.017435 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Apr 30 03:50:15.017490 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Apr 30 03:50:15.017538 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Apr 30 03:50:15.017592 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Apr 30 03:50:15.017641 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Apr 30 03:50:15.017691 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Apr 30 03:50:15.017742 kernel: pci 0000:01:00.0: PME# supported from D3cold
Apr 30 03:50:15.017791 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Apr 30 03:50:15.017839 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Apr 30 03:50:15.017894 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Apr 30 03:50:15.017944 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Apr 30 03:50:15.017992 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Apr 30 03:50:15.018041 kernel: pci 0000:01:00.1: PME# supported from D3cold
Apr 30 03:50:15.018092 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Apr 30 03:50:15.018141 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Apr 30 03:50:15.018188 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Apr 30 03:50:15.018237 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Apr 30 03:50:15.018283 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Apr 30 03:50:15.018335
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 03:50:15.018388 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Apr 30 03:50:15.018441 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Apr 30 03:50:15.018491 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 30 03:50:15.018540 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Apr 30 03:50:15.018588 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 30 03:50:15.018637 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.018686 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 03:50:15.018733 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 03:50:15.018783 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 03:50:15.018837 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 30 03:50:15.018887 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 30 03:50:15.018937 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 30 03:50:15.018986 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Apr 30 03:50:15.019035 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 30 03:50:15.019084 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.019132 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 03:50:15.019182 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 03:50:15.019231 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 03:50:15.019278 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 03:50:15.019335 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Apr 30 03:50:15.019385 kernel: pci 0000:06:00.0: enabling Extended Tags Apr 30 03:50:15.019435 kernel: pci 0000:06:00.0: supports D1 D2 Apr 30 03:50:15.019484 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 03:50:15.019535 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Apr 30 03:50:15.019583 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.019631 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.019686 kernel: pci_bus 0000:07: extended config space not accessible Apr 30 03:50:15.019742 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Apr 30 03:50:15.019794 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Apr 30 03:50:15.019845 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 30 03:50:15.019898 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Apr 30 03:50:15.019950 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 03:50:15.020000 kernel: pci 0000:07:00.0: supports D1 D2 Apr 30 03:50:15.020052 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 03:50:15.020100 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 30 03:50:15.020149 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.020199 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.020210 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 30 03:50:15.020217 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 30 03:50:15.020222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 30 03:50:15.020228 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 30 03:50:15.020234 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 30 03:50:15.020240 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 30 03:50:15.020245 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 30 03:50:15.020251 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 30 03:50:15.020256 kernel: iommu: Default domain type: Translated Apr 30 03:50:15.020262 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:50:15.020269 kernel: PCI: Using ACPI for IRQ 
routing Apr 30 03:50:15.020275 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:50:15.020280 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 30 03:50:15.020286 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] Apr 30 03:50:15.020291 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Apr 30 03:50:15.020297 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Apr 30 03:50:15.020302 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 30 03:50:15.020307 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 30 03:50:15.020376 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Apr 30 03:50:15.020432 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Apr 30 03:50:15.020483 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 03:50:15.020491 kernel: vgaarb: loaded Apr 30 03:50:15.020497 kernel: clocksource: Switched to clocksource tsc-early Apr 30 03:50:15.020503 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:50:15.020509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:50:15.020515 kernel: pnp: PnP ACPI init Apr 30 03:50:15.020565 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 30 03:50:15.020616 kernel: pnp 00:02: [dma 0 disabled] Apr 30 03:50:15.020665 kernel: pnp 00:03: [dma 0 disabled] Apr 30 03:50:15.020716 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 30 03:50:15.020760 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 30 03:50:15.020808 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 30 03:50:15.020855 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 30 03:50:15.020902 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Apr 30 03:50:15.020946 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 30 03:50:15.020990 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Apr 30 03:50:15.021036 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 30 03:50:15.021082 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Apr 30 03:50:15.021126 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 30 03:50:15.021172 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 30 03:50:15.021222 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 30 03:50:15.021267 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 30 03:50:15.021310 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 30 03:50:15.021407 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 30 03:50:15.021450 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 30 03:50:15.021492 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 30 03:50:15.021535 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 30 03:50:15.021583 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 30 03:50:15.021592 kernel: pnp: PnP ACPI: found 10 devices Apr 30 03:50:15.021598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:50:15.021604 kernel: NET: Registered PF_INET protocol family Apr 30 03:50:15.021610 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:50:15.021616 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 30 03:50:15.021622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:50:15.021628 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:50:15.021635 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 03:50:15.021641 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 30 03:50:15.021646 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 03:50:15.021652 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 03:50:15.021659 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:50:15.021664 kernel: NET: Registered PF_XDP protocol family Apr 30 03:50:15.021713 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 30 03:50:15.021760 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 30 03:50:15.021810 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 30 03:50:15.021862 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 03:50:15.021911 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 03:50:15.021960 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 03:50:15.022009 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 03:50:15.022057 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 03:50:15.022105 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 03:50:15.022152 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 03:50:15.022199 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 03:50:15.022250 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 03:50:15.022296 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 03:50:15.022371 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 03:50:15.022432 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 03:50:15.022483 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 03:50:15.022530 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 03:50:15.022577 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 03:50:15.022626 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Apr 30 03:50:15.022674 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.022723 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.022769 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 30 03:50:15.022817 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.022864 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.022912 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 30 03:50:15.022954 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:50:15.022997 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:50:15.023039 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:50:15.023081 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 30 03:50:15.023122 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 30 03:50:15.023171 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Apr 30 03:50:15.023215 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 03:50:15.023269 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Apr 30 03:50:15.023312 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Apr 30 03:50:15.023396 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 30 03:50:15.023441 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Apr 30 03:50:15.023488 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Apr 30 03:50:15.023533 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Apr 30 03:50:15.023583 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 30 03:50:15.023629 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 30 03:50:15.023637 kernel: PCI: CLS 64 bytes, default 64 Apr 30 03:50:15.023644 kernel: DMAR: No ATSR found Apr 30 03:50:15.023649 kernel: DMAR: No SATC 
found Apr 30 03:50:15.023655 kernel: DMAR: dmar0: Using Queued invalidation Apr 30 03:50:15.023701 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 30 03:50:15.023750 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 30 03:50:15.023800 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 30 03:50:15.023848 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 30 03:50:15.023895 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 30 03:50:15.023943 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 30 03:50:15.023989 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 30 03:50:15.024036 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 30 03:50:15.024082 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 30 03:50:15.024129 kernel: pci 0000:00:16.1: Adding to iommu group 6 Apr 30 03:50:15.024179 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 30 03:50:15.024225 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 30 03:50:15.024273 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 30 03:50:15.024341 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 30 03:50:15.024405 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 30 03:50:15.024453 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 30 03:50:15.024501 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Apr 30 03:50:15.024547 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 30 03:50:15.024598 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 30 03:50:15.024644 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 30 03:50:15.024692 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 30 03:50:15.024740 kernel: pci 0000:01:00.0: Adding to iommu group 1 Apr 30 03:50:15.024789 kernel: pci 0000:01:00.1: Adding to iommu group 1 Apr 30 03:50:15.024838 kernel: pci 0000:03:00.0: Adding to iommu group 15 Apr 30 03:50:15.024886 kernel: pci 0000:04:00.0: Adding to iommu group 16 Apr 30 03:50:15.024935 kernel: pci 0000:06:00.0: Adding to iommu group 17 Apr 30 
03:50:15.024986 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 30 03:50:15.024995 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 30 03:50:15.025001 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 03:50:15.025007 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Apr 30 03:50:15.025012 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 30 03:50:15.025018 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 30 03:50:15.025024 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 30 03:50:15.025029 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Apr 30 03:50:15.025080 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 30 03:50:15.025091 kernel: Initialise system trusted keyrings Apr 30 03:50:15.025096 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 30 03:50:15.025102 kernel: Key type asymmetric registered Apr 30 03:50:15.025108 kernel: Asymmetric key parser 'x509' registered Apr 30 03:50:15.025113 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:50:15.025119 kernel: io scheduler mq-deadline registered Apr 30 03:50:15.025124 kernel: io scheduler kyber registered Apr 30 03:50:15.025130 kernel: io scheduler bfq registered Apr 30 03:50:15.025176 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 30 03:50:15.025227 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Apr 30 03:50:15.025274 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Apr 30 03:50:15.025345 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Apr 30 03:50:15.025409 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Apr 30 03:50:15.025458 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Apr 30 03:50:15.025511 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 30 03:50:15.025519 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Apr 30 03:50:15.025527 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Apr 30 03:50:15.025533 kernel: pstore: Using crash dump compression: deflate Apr 30 03:50:15.025539 kernel: pstore: Registered erst as persistent store backend Apr 30 03:50:15.025544 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:50:15.025550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:50:15.025556 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:50:15.025561 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 03:50:15.025567 kernel: hpet_acpi_add: no address or irqs in _CRS Apr 30 03:50:15.025617 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 30 03:50:15.025627 kernel: i8042: PNP: No PS/2 controller found. Apr 30 03:50:15.025672 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 30 03:50:15.025716 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 30 03:50:15.025760 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-04-30T03:50:13 UTC (1745985013) Apr 30 03:50:15.025803 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Apr 30 03:50:15.025811 kernel: intel_pstate: Intel P-state driver initializing Apr 30 03:50:15.025817 kernel: intel_pstate: Disabling energy efficiency optimization Apr 30 03:50:15.025824 kernel: intel_pstate: HWP enabled Apr 30 03:50:15.025830 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Apr 30 03:50:15.025836 kernel: vesafb: scrolling: redraw Apr 30 03:50:15.025841 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Apr 30 03:50:15.025847 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000014f5c3b6, using 768k, total 768k Apr 30 03:50:15.025853 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:50:15.025858 kernel: fb0: VESA VGA frame buffer device Apr 30 03:50:15.025864 kernel: NET: Registered PF_INET6 
protocol family Apr 30 03:50:15.025870 kernel: Segment Routing with IPv6 Apr 30 03:50:15.025877 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:50:15.025882 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:50:15.025888 kernel: Key type dns_resolver registered Apr 30 03:50:15.025893 kernel: microcode: Current revision: 0x00000102 Apr 30 03:50:15.025899 kernel: microcode: Microcode Update Driver: v2.2. Apr 30 03:50:15.025905 kernel: IPI shorthand broadcast: enabled Apr 30 03:50:15.025910 kernel: sched_clock: Marking stable (2483172549, 1379259298)->(4405732549, -543300702) Apr 30 03:50:15.025916 kernel: registered taskstats version 1 Apr 30 03:50:15.025922 kernel: Loading compiled-in X.509 certificates Apr 30 03:50:15.025928 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:50:15.025934 kernel: Key type .fscrypt registered Apr 30 03:50:15.025939 kernel: Key type fscrypt-provisioning registered Apr 30 03:50:15.025945 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:50:15.025951 kernel: ima: No architecture policies found Apr 30 03:50:15.025956 kernel: clk: Disabling unused clocks Apr 30 03:50:15.025962 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:50:15.025968 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:50:15.025973 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:50:15.025980 kernel: Run /init as init process Apr 30 03:50:15.025985 kernel: with arguments: Apr 30 03:50:15.025991 kernel: /init Apr 30 03:50:15.025997 kernel: with environment: Apr 30 03:50:15.026002 kernel: HOME=/ Apr 30 03:50:15.026008 kernel: TERM=linux Apr 30 03:50:15.026013 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:50:15.026020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP 
+LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:50:15.026028 systemd[1]: Detected architecture x86-64. Apr 30 03:50:15.026034 systemd[1]: Running in initrd. Apr 30 03:50:15.026040 systemd[1]: No hostname configured, using default hostname. Apr 30 03:50:15.026046 systemd[1]: Hostname set to . Apr 30 03:50:15.026051 systemd[1]: Initializing machine ID from random generator. Apr 30 03:50:15.026057 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:50:15.026063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:50:15.026069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:50:15.026076 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:50:15.026082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:50:15.026088 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:50:15.026094 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:50:15.026100 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:50:15.026107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:50:15.026113 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Apr 30 03:50:15.026119 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Apr 30 03:50:15.026125 kernel: clocksource: Switched to clocksource tsc Apr 30 03:50:15.026131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 30 03:50:15.026137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:50:15.026143 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:50:15.026148 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:50:15.026155 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:50:15.026160 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:50:15.026167 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:50:15.026173 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:50:15.026179 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:50:15.026185 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:50:15.026191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:50:15.026197 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:50:15.026203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:50:15.026209 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:50:15.026214 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:50:15.026221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:50:15.026227 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:50:15.026233 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:50:15.026239 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:50:15.026255 systemd-journald[268]: Collecting audit messages is disabled. Apr 30 03:50:15.026270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 30 03:50:15.026277 systemd-journald[268]: Journal started Apr 30 03:50:15.026290 systemd-journald[268]: Runtime Journal (/run/log/journal/93244c3b1a684a1a9674416c963f9255) is 8.0M, max 639.9M, 631.9M free. Apr 30 03:50:15.049304 systemd-modules-load[270]: Inserted module 'overlay' Apr 30 03:50:15.080358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:50:15.122365 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:50:15.122382 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:50:15.141127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:50:15.141217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:50:15.141303 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:50:15.142291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:50:15.160268 systemd-modules-load[270]: Inserted module 'br_netfilter' Apr 30 03:50:15.160321 kernel: Bridge firewalling registered Apr 30 03:50:15.160679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:50:15.227855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:50:15.248024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:50:15.276091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:50:15.285707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:50:15.333613 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:50:15.347098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 30 03:50:15.348810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:50:15.356371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:50:15.356625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:50:15.357500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:50:15.360563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:50:15.361106 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:50:15.374693 systemd-resolved[307]: Positive Trust Anchors: Apr 30 03:50:15.374697 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:50:15.374721 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:50:15.376271 systemd-resolved[307]: Defaulting to hostname 'linux'. Apr 30 03:50:15.398612 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:50:15.398676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 30 03:50:15.530402 dracut-cmdline[310]: dracut-dracut-053 Apr 30 03:50:15.537564 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:50:15.742369 kernel: SCSI subsystem initialized Apr 30 03:50:15.765367 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:50:15.788348 kernel: iscsi: registered transport (tcp) Apr 30 03:50:15.819821 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:50:15.819839 kernel: QLogic iSCSI HBA Driver Apr 30 03:50:15.853222 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:50:15.877589 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:50:15.934689 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:50:15.934709 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:50:15.954513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:50:16.013403 kernel: raid6: avx2x4 gen() 51897 MB/s Apr 30 03:50:16.045348 kernel: raid6: avx2x2 gen() 52352 MB/s Apr 30 03:50:16.082112 kernel: raid6: avx2x1 gen() 43942 MB/s Apr 30 03:50:16.082131 kernel: raid6: using algorithm avx2x2 gen() 52352 MB/s Apr 30 03:50:16.129978 kernel: raid6: .... 
xor() 30554 MB/s, rmw enabled Apr 30 03:50:16.129996 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:50:16.171380 kernel: xor: automatically using best checksumming function avx Apr 30 03:50:16.284351 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:50:16.290117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:50:16.311486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:50:16.318629 systemd-udevd[497]: Using default interface naming scheme 'v255'. Apr 30 03:50:16.322475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:50:16.353600 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:50:16.401304 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Apr 30 03:50:16.418957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:50:16.446611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:50:16.505943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:50:16.550961 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 03:50:16.550981 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 03:50:16.520472 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:50:16.567324 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:50:16.569705 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:50:16.630591 kernel: ACPI: bus type USB registered Apr 30 03:50:16.630603 kernel: usbcore: registered new interface driver usbfs Apr 30 03:50:16.630611 kernel: usbcore: registered new interface driver hub Apr 30 03:50:16.630618 kernel: usbcore: registered new device driver usb Apr 30 03:50:16.569742 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 03:50:16.683424 kernel: PTP clock support registered Apr 30 03:50:16.683461 kernel: libata version 3.00 loaded. Apr 30 03:50:16.683474 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 03:50:16.865707 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 30 03:50:16.865812 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:50:16.865822 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 30 03:50:16.865893 kernel: AES CTR mode by8 optimization enabled Apr 30 03:50:16.865902 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 03:50:16.865963 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 30 03:50:16.866024 kernel: ahci 0000:00:17.0: version 3.0 Apr 30 03:50:16.866088 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 30 03:50:16.866147 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Apr 30 03:50:16.866207 kernel: hub 1-0:1.0: USB hub found Apr 30 03:50:16.866275 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 30 03:50:16.866346 kernel: hub 1-0:1.0: 16 ports detected Apr 30 03:50:16.866407 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 30 03:50:16.866416 kernel: hub 2-0:1.0: USB hub found Apr 30 03:50:16.866481 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Apr 30 03:50:16.866489 kernel: scsi host0: ahci Apr 30 03:50:16.866548 kernel: scsi host1: ahci Apr 30 03:50:16.866606 kernel: scsi host2: ahci Apr 30 03:50:16.866667 kernel: scsi host3: ahci Apr 30 03:50:16.866725 kernel: scsi host4: ahci Apr 30 03:50:16.866782 kernel: scsi host5: ahci Apr 30 03:50:16.866839 kernel: scsi host6: ahci Apr 30 03:50:16.866894 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Apr 30 03:50:16.866902 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Apr 30 03:50:16.866912 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Apr 30 03:50:16.866919 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Apr 30 03:50:16.866926 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Apr 30 03:50:16.866933 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Apr 30 03:50:16.866940 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Apr 30 03:50:16.866947 kernel: hub 2-0:1.0: 10 ports detected Apr 30 03:50:16.867005 kernel: igb 0000:03:00.0: added PHC on eth0 Apr 30 03:50:17.167336 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 30 03:50:17.167360 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 03:50:17.167437 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.167446 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d4 Apr 30 03:50:17.167511 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.167520 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Apr 30 03:50:17.167583 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.167591 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Apr 30 03:50:17.167652 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:16.683430 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:50:17.480102 kernel: igb 0000:04:00.0: added PHC on eth1 Apr 30 03:50:17.480191 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 03:50:17.480200 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 03:50:17.480267 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.480276 kernel: hub 1-14:1.0: USB hub found Apr 30 03:50:17.480354 kernel: hub 1-14:1.0: 4 ports detected Apr 30 03:50:17.480423 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d5 Apr 30 03:50:17.480490 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 03:50:17.480499 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Apr 30 03:50:17.480562 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 03:50:17.480572 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Apr 30 03:50:17.480633 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 03:50:17.480645 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Apr 30 03:50:18.001158 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 03:50:18.001182 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 03:50:18.001379 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 03:50:18.001393 kernel: ata2.00: Features: NCQ-prio Apr 30 03:50:18.001407 kernel: ata1.00: Features: NCQ-prio Apr 30 03:50:18.001418 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 30 03:50:18.001627 kernel: ata2.00: configured for UDMA/133 Apr 30 03:50:18.001642 kernel: ata1.00: configured for UDMA/133 Apr 30 03:50:18.001654 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 03:50:18.001850 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 03:50:18.002044 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Apr 30 03:50:18.002476 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Apr 30 03:50:18.002628 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 03:50:18.002642 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.002653 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 03:50:18.002820 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 03:50:18.002969 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 03:50:18.003110 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Apr 30 03:50:18.003260 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 03:50:18.003447 kernel: sd 1:0:0:0: [sdb] Write Protect is off Apr 30 03:50:18.003597 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Apr 30 03:50:18.003753 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't 
support DPO or FUA Apr 30 03:50:18.003906 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 30 03:50:18.004054 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 03:50:18.004196 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 03:50:18.004210 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Apr 30 03:50:18.004392 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Apr 30 03:50:18.004562 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 03:50:18.004580 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 03:50:18.004772 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.004809 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Apr 30 03:50:18.004968 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 03:50:18.005135 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:50:18.005161 kernel: GPT:9289727 != 937703087 Apr 30 03:50:18.005178 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:50:18.005191 kernel: GPT:9289727 != 937703087 Apr 30 03:50:18.005205 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 03:50:18.005217 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.005235 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Apr 30 03:50:18.005493 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 03:50:18.005647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (545) Apr 30 03:50:18.005713 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Apr 30 03:50:18.692929 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (555) Apr 30 03:50:18.692941 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 03:50:18.693031 kernel: usbcore: registered new interface driver usbhid Apr 30 03:50:18.693040 kernel: usbhid: USB HID core driver Apr 30 03:50:18.693048 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 30 03:50:18.693055 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 30 03:50:18.693146 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 30 03:50:18.693155 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 30 03:50:18.693232 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.693241 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.693248 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.693255 kernel: GPT:disk_guids don't match. Apr 30 03:50:18.693263 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 03:50:18.693342 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 03:50:18.693350 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Apr 30 03:50:18.693417 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.693425 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.693432 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.693439 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 03:50:16.683479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:50:16.683546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:50:18.723425 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Apr 30 03:50:16.740385 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:50:16.953009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:50:18.753565 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Apr 30 03:50:17.429725 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:50:17.508428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:50:17.552264 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:50:18.785540 disk-uuid[703]: Primary Header is updated. Apr 30 03:50:18.785540 disk-uuid[703]: Secondary Entries is updated. Apr 30 03:50:18.785540 disk-uuid[703]: Secondary Header is updated. Apr 30 03:50:17.552291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:50:17.609408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:50:17.624519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:50:17.646671 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:50:18.170163 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. 
Apr 30 03:50:18.184977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Apr 30 03:50:18.200002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Apr 30 03:50:18.214492 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 03:50:18.269395 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 03:50:18.324464 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:50:18.340789 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:50:18.372329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:50:19.493719 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:19.513996 disk-uuid[704]: The operation has completed successfully. Apr 30 03:50:19.522440 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:19.553172 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:50:19.553220 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:50:19.603624 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:50:19.641416 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:50:19.641484 sh[747]: Success Apr 30 03:50:19.676394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:50:19.695234 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:50:19.706649 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 03:50:19.750000 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:50:19.750021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:50:19.772333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:50:19.792591 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:50:19.811637 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:50:19.851322 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:50:19.852782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:50:19.861618 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:50:19.877596 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:50:19.895928 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:50:19.999370 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:50:19.999383 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:50:19.999391 kernel: BTRFS info (device sdb6): using free space tree Apr 30 03:50:19.999398 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 03:50:19.999404 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 03:50:20.023407 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:50:20.038645 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:50:20.039252 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:50:20.091250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 03:50:20.123579 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:50:20.134421 systemd-networkd[930]: lo: Link UP Apr 30 03:50:20.134423 systemd-networkd[930]: lo: Gained carrier Apr 30 03:50:20.150041 ignition[807]: Ignition 2.19.0 Apr 30 03:50:20.136795 systemd-networkd[930]: Enumeration completed Apr 30 03:50:20.150046 ignition[807]: Stage: fetch-offline Apr 30 03:50:20.136858 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:50:20.150063 ignition[807]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:50:20.137557 systemd-networkd[930]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:50:20.150068 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 03:50:20.152222 unknown[807]: fetched base config from "system" Apr 30 03:50:20.150119 ignition[807]: parsed url from cmdline: "" Apr 30 03:50:20.152227 unknown[807]: fetched user config from "system" Apr 30 03:50:20.150120 ignition[807]: no config URL provided Apr 30 03:50:20.153764 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:50:20.150123 ignition[807]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:50:20.160892 systemd[1]: Reached target network.target - Network. Apr 30 03:50:20.150145 ignition[807]: parsing config with SHA512: c142aa0eee95d3ad0ff765be0a335edd3d757a40309beeef5b262e395b1060dead7c3d91cbd6b17bb0476d3439dcc6ee2feaafb016152f480c158d6b7c885bf4 Apr 30 03:50:20.165198 systemd-networkd[930]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:50:20.152462 ignition[807]: fetch-offline: fetch-offline passed Apr 30 03:50:20.186490 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 30 03:50:20.152465 ignition[807]: POST message to Packet Timeline Apr 30 03:50:20.193875 systemd-networkd[930]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:50:20.152467 ignition[807]: POST Status error: resource requires networking Apr 30 03:50:20.194533 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:50:20.152502 ignition[807]: Ignition finished successfully Apr 30 03:50:20.209976 ignition[944]: Ignition 2.19.0 Apr 30 03:50:20.209981 ignition[944]: Stage: kargs Apr 30 03:50:20.415484 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Apr 30 03:50:20.411330 systemd-networkd[930]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:50:20.210093 ignition[944]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:50:20.210099 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 03:50:20.210675 ignition[944]: kargs: kargs passed Apr 30 03:50:20.210678 ignition[944]: POST message to Packet Timeline Apr 30 03:50:20.210687 ignition[944]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 03:50:20.211166 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49688->[::1]:53: read: connection refused Apr 30 03:50:20.412226 ignition[944]: GET https://metadata.packet.net/metadata: attempt #2 Apr 30 03:50:20.412664 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55350->[::1]:53: read: connection refused Apr 30 03:50:20.708357 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 30 03:50:20.710085 systemd-networkd[930]: eno1: Link UP Apr 30 03:50:20.710244 systemd-networkd[930]: eno2: Link UP Apr 30 03:50:20.710400 systemd-networkd[930]: enp1s0f0np0: Link UP Apr 30 03:50:20.710576 systemd-networkd[930]: enp1s0f0np0: Gained carrier Apr 30 03:50:20.723595 
systemd-networkd[930]: enp1s0f1np1: Link UP Apr 30 03:50:20.762676 systemd-networkd[930]: enp1s0f0np0: DHCPv4 address 147.75.90.203/31, gateway 147.75.90.202 acquired from 145.40.83.140 Apr 30 03:50:20.813135 ignition[944]: GET https://metadata.packet.net/metadata: attempt #3 Apr 30 03:50:20.814184 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33275->[::1]:53: read: connection refused Apr 30 03:50:21.431815 systemd-networkd[930]: enp1s0f1np1: Gained carrier Apr 30 03:50:21.614716 ignition[944]: GET https://metadata.packet.net/metadata: attempt #4 Apr 30 03:50:21.615916 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48034->[::1]:53: read: connection refused Apr 30 03:50:22.327925 systemd-networkd[930]: enp1s0f0np0: Gained IPv6LL Apr 30 03:50:23.217429 ignition[944]: GET https://metadata.packet.net/metadata: attempt #5 Apr 30 03:50:23.218676 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49032->[::1]:53: read: connection refused Apr 30 03:50:23.479924 systemd-networkd[930]: enp1s0f1np1: Gained IPv6LL Apr 30 03:50:26.421051 ignition[944]: GET https://metadata.packet.net/metadata: attempt #6 Apr 30 03:50:27.475442 ignition[944]: GET result: OK Apr 30 03:50:27.867219 ignition[944]: Ignition finished successfully Apr 30 03:50:27.872428 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:50:27.899592 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 30 03:50:27.906035 ignition[961]: Ignition 2.19.0 Apr 30 03:50:27.906039 ignition[961]: Stage: disks Apr 30 03:50:27.906146 ignition[961]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:50:27.906152 ignition[961]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 03:50:27.906671 ignition[961]: disks: disks passed Apr 30 03:50:27.906674 ignition[961]: POST message to Packet Timeline Apr 30 03:50:27.906682 ignition[961]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 03:50:29.290411 ignition[961]: GET result: OK Apr 30 03:50:29.656918 ignition[961]: Ignition finished successfully Apr 30 03:50:29.659832 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:50:29.675554 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:50:29.694596 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:50:29.715595 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:50:29.736633 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:50:29.754630 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:50:29.783593 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:50:29.817740 systemd-fsck[980]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:50:29.827994 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:50:29.853566 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:50:29.951329 kernel: EXT4-fs (sdb9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:50:29.951758 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:50:29.961748 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:50:29.997488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 03:50:30.028506 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (990) Apr 30 03:50:30.005919 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:50:30.129528 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:50:30.129542 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:50:30.129550 kernel: BTRFS info (device sdb6): using free space tree Apr 30 03:50:30.129557 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 03:50:30.129564 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 03:50:30.029021 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 03:50:30.146775 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Apr 30 03:50:30.168400 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:50:30.205460 coreos-metadata[992]: Apr 30 03:50:30.189 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 03:50:30.168417 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:50:30.202627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:50:30.262417 coreos-metadata[1008]: Apr 30 03:50:30.215 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 03:50:30.213474 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:50:30.237595 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 03:50:30.293418 initrd-setup-root[1022]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:50:30.304428 initrd-setup-root[1029]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:50:30.314450 initrd-setup-root[1036]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:50:30.325572 initrd-setup-root[1043]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:50:30.324638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:50:30.331538 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:50:30.370778 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:50:30.413529 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:50:30.405090 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:50:30.421509 ignition[1114]: INFO : Ignition 2.19.0 Apr 30 03:50:30.421509 ignition[1114]: INFO : Stage: mount Apr 30 03:50:30.421509 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:50:30.421509 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 03:50:30.421509 ignition[1114]: INFO : mount: mount passed Apr 30 03:50:30.421509 ignition[1114]: INFO : POST message to Packet Timeline Apr 30 03:50:30.421509 ignition[1114]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 03:50:30.430972 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:50:31.044468 coreos-metadata[992]: Apr 30 03:50:31.044 INFO Fetch successful Apr 30 03:50:31.123395 coreos-metadata[992]: Apr 30 03:50:31.123 INFO wrote hostname ci-4081.3.3-a-1bdc449bef to /sysroot/etc/hostname Apr 30 03:50:31.124737 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Apr 30 03:50:31.198800 coreos-metadata[1008]: Apr 30 03:50:31.198 INFO Fetch successful Apr 30 03:50:31.272556 systemd[1]: flatcar-static-network.service: Deactivated successfully. Apr 30 03:50:31.272638 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Apr 30 03:50:31.535075 ignition[1114]: INFO : GET result: OK Apr 30 03:50:31.874948 ignition[1114]: INFO : Ignition finished successfully Apr 30 03:50:31.877643 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:50:31.908566 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:50:31.912077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:50:31.987241 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1140) Apr 30 03:50:31.987273 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:50:32.007475 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:50:32.025285 kernel: BTRFS info (device sdb6): using free space tree Apr 30 03:50:32.062744 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 03:50:32.062765 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 03:50:32.075517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:50:32.108176 ignition[1157]: INFO : Ignition 2.19.0 Apr 30 03:50:32.108176 ignition[1157]: INFO : Stage: files Apr 30 03:50:32.123515 ignition[1157]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:50:32.123515 ignition[1157]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 03:50:32.123515 ignition[1157]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:50:32.123515 ignition[1157]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:50:32.123515 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 03:50:32.123515 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Apr 30 03:50:32.112208 unknown[1157]: wrote ssh authorized keys file for user: core Apr 30 03:50:32.254388 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:50:32.279843 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 03:50:32.279843 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Apr 30 03:50:32.881108 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:50:33.075578 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 03:50:33.075578 ignition[1157]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:50:33.105625 ignition[1157]: INFO : files: files passed Apr 30 03:50:33.105625 ignition[1157]: INFO : POST message to Packet Timeline Apr 30 03:50:33.105625 ignition[1157]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 03:50:34.024762 ignition[1157]: INFO : GET result: OK Apr 30 03:50:34.384690 ignition[1157]: INFO : Ignition finished successfully Apr 30 03:50:34.387634 systemd[1]: Finished ignition-files.service - Ignition (files). 
Apr 30 03:50:34.418566 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:50:34.418985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:50:34.447693 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:50:34.447755 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:50:34.499609 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:50:34.499609 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:50:34.470867 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:50:34.537752 initrd-setup-root-after-ignition[1198]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:50:34.491750 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:50:34.523857 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:50:34.592841 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:50:34.592921 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:50:34.623056 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:50:34.632548 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:50:34.652781 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:50:34.660725 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:50:34.741425 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:50:34.774736 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:50:34.804751 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:50:34.816924 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:50:34.838015 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:50:34.855968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:50:34.856394 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:50:34.886032 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:50:34.907946 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:50:34.925942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:50:34.944939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:50:34.965950 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:50:34.986941 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:50:35.006934 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:50:35.027975 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:50:35.049959 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:50:35.069930 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:50:35.087822 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:50:35.088226 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:50:35.114037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:50:35.133956 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:50:35.154806 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:50:35.155235 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:50:35.176819 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:50:35.177219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:50:35.208023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:50:35.208510 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:50:35.228131 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:50:35.246801 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:50:35.247201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:50:35.267941 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:50:35.285894 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:50:35.304845 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:50:35.305151 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:50:35.325916 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:50:35.326216 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:50:35.348691 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:50:35.458468 ignition[1219]: INFO : Ignition 2.19.0
Apr 30 03:50:35.458468 ignition[1219]: INFO : Stage: umount
Apr 30 03:50:35.458468 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:50:35.458468 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Apr 30 03:50:35.458468 ignition[1219]: INFO : umount: umount passed
Apr 30 03:50:35.458468 ignition[1219]: INFO : POST message to Packet Timeline
Apr 30 03:50:35.458468 ignition[1219]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Apr 30 03:50:35.348858 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:50:35.368688 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:50:35.368846 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:50:35.386690 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:50:35.386853 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:50:35.419576 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:50:35.430995 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:50:35.448512 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:50:35.448615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:50:35.469592 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:50:35.469655 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:50:35.524935 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:50:35.525286 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:50:35.525339 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:50:35.530439 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:50:35.530485 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:50:36.401717 ignition[1219]: INFO : GET result: OK
Apr 30 03:50:36.715949 ignition[1219]: INFO : Ignition finished successfully
Apr 30 03:50:36.716959 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:50:36.717097 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:50:36.736139 systemd[1]: Stopped target network.target - Network.
Apr 30 03:50:36.751563 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:50:36.751827 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:50:36.769769 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:50:36.769909 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:50:36.787817 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:50:36.787976 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:50:36.805804 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:50:36.805968 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:50:36.823793 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:50:36.823964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:50:36.842195 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:50:36.858481 systemd-networkd[930]: enp1s0f0np0: DHCPv6 lease lost
Apr 30 03:50:36.859888 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:50:36.863511 systemd-networkd[930]: enp1s0f1np1: DHCPv6 lease lost
Apr 30 03:50:36.879580 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:50:36.879943 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:50:36.898512 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:50:36.898860 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:50:36.919085 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:50:36.919201 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:50:36.952541 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:50:36.975479 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:50:36.975525 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:50:36.994642 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:50:36.994738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:50:37.014708 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:50:37.014861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:50:37.033741 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:50:37.033920 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:50:37.053988 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:50:37.075540 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:50:37.075909 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:50:37.105555 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:50:37.105701 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:50:37.112862 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:50:37.112970 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:50:37.140645 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:50:37.140807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:50:37.170982 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:50:37.171175 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:50:37.200814 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:50:37.201002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:50:37.242504 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:50:37.261384 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:50:37.261421 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:50:37.283437 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:50:37.283564 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:50:37.556432 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:50:37.305681 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:50:37.305801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:50:37.324605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:50:37.324781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:50:37.347624 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:50:37.347892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:50:37.408599 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:50:37.408879 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:50:37.427540 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:50:37.460714 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:50:37.486668 systemd[1]: Switching root.
Apr 30 03:50:37.661417 systemd-journald[268]: Journal stopped
Apr 30 03:50:15.012666 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:50:15.012681 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:50:15.012688 kernel: BIOS-provided physical RAM map:
Apr 30 03:50:15.012693 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Apr 30 03:50:15.012696 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Apr 30 03:50:15.012700 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Apr 30 03:50:15.012705 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Apr 30 03:50:15.012709 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Apr 30 03:50:15.012713 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
Apr 30 03:50:15.012717 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
Apr 30 03:50:15.012721 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
Apr 30 03:50:15.012726 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
Apr 30 03:50:15.012730 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Apr 30 03:50:15.012735 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Apr 30 03:50:15.012740 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Apr 30 03:50:15.012745 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Apr 30 03:50:15.012750 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Apr 30 03:50:15.012755 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Apr 30 03:50:15.012759 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 03:50:15.012764 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Apr 30 03:50:15.012768 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Apr 30 03:50:15.012773 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Apr 30 03:50:15.012777 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Apr 30 03:50:15.012782 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Apr 30 03:50:15.012786 kernel: NX (Execute Disable) protection: active
Apr 30 03:50:15.012791 kernel: APIC: Static calls initialized
Apr 30 03:50:15.012796 kernel: SMBIOS 3.2.1 present.
Apr 30 03:50:15.012800 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
Apr 30 03:50:15.012806 kernel: tsc: Detected 3400.000 MHz processor
Apr 30 03:50:15.012810 kernel: tsc: Detected 3399.906 MHz TSC
Apr 30 03:50:15.012815 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:50:15.012820 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:50:15.012825 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Apr 30 03:50:15.012830 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Apr 30 03:50:15.012835 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:50:15.012839 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Apr 30 03:50:15.012844 kernel: Using GB pages for direct mapping
Apr 30 03:50:15.012849 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:50:15.012854 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Apr 30 03:50:15.012859 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Apr 30 03:50:15.012866 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Apr 30 03:50:15.012871 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Apr 30 03:50:15.012876 kernel: ACPI: FACS 0x000000008C66DF80 000040
Apr 30 03:50:15.012881 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Apr 30 03:50:15.012887 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Apr 30 03:50:15.012892 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Apr 30 03:50:15.012897 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Apr 30 03:50:15.012902 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Apr 30 03:50:15.012907 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Apr 30 03:50:15.012912 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Apr 30 03:50:15.012917 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Apr 30 03:50:15.012923 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012928 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Apr 30 03:50:15.012933 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Apr 30 03:50:15.012938 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012943 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012948 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Apr 30 03:50:15.012953 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Apr 30 03:50:15.012958 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012963 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Apr 30 03:50:15.012969 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Apr 30 03:50:15.012974 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Apr 30 03:50:15.012978 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Apr 30 03:50:15.012984 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Apr 30 03:50:15.012989 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Apr 30 03:50:15.012994 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Apr 30 03:50:15.012998 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Apr 30 03:50:15.013003 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Apr 30 03:50:15.013009 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Apr 30 03:50:15.013014 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Apr 30 03:50:15.013019 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Apr 30 03:50:15.013024 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Apr 30 03:50:15.013029 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Apr 30 03:50:15.013034 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Apr 30 03:50:15.013039 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Apr 30 03:50:15.013044 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Apr 30 03:50:15.013049 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Apr 30 03:50:15.013055 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Apr 30 03:50:15.013060 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Apr 30 03:50:15.013065 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Apr 30 03:50:15.013070 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Apr 30 03:50:15.013075 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Apr 30 03:50:15.013080 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Apr 30 03:50:15.013085 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Apr 30 03:50:15.013090 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Apr 30 03:50:15.013095 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Apr 30 03:50:15.013100 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Apr 30 03:50:15.013105 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Apr 30 03:50:15.013110 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Apr 30 03:50:15.013115 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Apr 30 03:50:15.013120 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Apr 30 03:50:15.013125 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Apr 30 03:50:15.013130 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Apr 30 03:50:15.013135 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Apr 30 03:50:15.013140 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Apr 30 03:50:15.013145 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Apr 30 03:50:15.013150 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Apr 30 03:50:15.013155 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Apr 30 03:50:15.013160 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Apr 30 03:50:15.013165 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Apr 30 03:50:15.013170 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Apr 30 03:50:15.013175 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Apr 30 03:50:15.013180 kernel: No NUMA configuration found
Apr 30 03:50:15.013185 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Apr 30 03:50:15.013190 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Apr 30 03:50:15.013196 kernel: Zone ranges:
Apr 30 03:50:15.013201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:50:15.013206 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 03:50:15.013211 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Apr 30 03:50:15.013216 kernel: Movable zone start for each node
Apr 30 03:50:15.013221 kernel: Early memory node ranges
Apr 30 03:50:15.013226 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Apr 30 03:50:15.013231 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Apr 30 03:50:15.013236 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
Apr 30 03:50:15.013242 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
Apr 30 03:50:15.013247 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Apr 30 03:50:15.013252 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Apr 30 03:50:15.013257 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Apr 30 03:50:15.013266 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Apr 30 03:50:15.013272 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:50:15.013277 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Apr 30 03:50:15.013282 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 30 03:50:15.013289 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Apr 30 03:50:15.013294 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Apr 30 03:50:15.013299 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Apr 30 03:50:15.013305 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Apr 30 03:50:15.013310 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Apr 30 03:50:15.013318 kernel: ACPI: PM-Timer IO Port: 0x1808
Apr 30 03:50:15.013324 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Apr 30 03:50:15.013329 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Apr 30 03:50:15.013335 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Apr 30 03:50:15.013341 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Apr 30 03:50:15.013346 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Apr 30 03:50:15.013352 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Apr 30 03:50:15.013357 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Apr 30 03:50:15.013362 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Apr 30 03:50:15.013368 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Apr 30 03:50:15.013373 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Apr 30 03:50:15.013378 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Apr 30 03:50:15.013383 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Apr 30 03:50:15.013390 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Apr 30 03:50:15.013395 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Apr 30 03:50:15.013400 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Apr 30 03:50:15.013406 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Apr 30 03:50:15.013411 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Apr 30 03:50:15.013416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:50:15.013422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:50:15.013427 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:50:15.013432 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:50:15.013438 kernel: TSC deadline timer available
Apr 30 03:50:15.013444 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Apr 30 03:50:15.013449 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Apr 30 03:50:15.013455 kernel: Booting paravirtualized kernel on bare hardware
Apr 30 03:50:15.013460 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:50:15.013466 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Apr 30 03:50:15.013471 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Apr 30 03:50:15.013477 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Apr 30 03:50:15.013482 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Apr 30 03:50:15.013489 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:50:15.013494 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:50:15.013500 kernel: random: crng init done
Apr 30 03:50:15.013505 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Apr 30 03:50:15.013510 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Apr 30 03:50:15.013516 kernel: Fallback order for Node 0: 0
Apr 30 03:50:15.013521 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Apr 30 03:50:15.013526 kernel: Policy zone: Normal
Apr 30 03:50:15.013533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:50:15.013538 kernel: software IO TLB: area num 16.
Apr 30 03:50:15.013543 kernel: Memory: 32720312K/33452984K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 732412K reserved, 0K cma-reserved)
Apr 30 03:50:15.013549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Apr 30 03:50:15.013554 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:50:15.013560 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:50:15.013565 kernel: Dynamic Preempt: voluntary
Apr 30 03:50:15.013571 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:50:15.013576 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:50:15.013583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Apr 30 03:50:15.013588 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:50:15.013594 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:50:15.013599 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:50:15.013604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:50:15.013610 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Apr 30 03:50:15.013615 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Apr 30 03:50:15.013620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:50:15.013626 kernel: Console: colour dummy device 80x25
Apr 30 03:50:15.013632 kernel: printk: console [tty0] enabled
Apr 30 03:50:15.013637 kernel: printk: console [ttyS1] enabled
Apr 30 03:50:15.013643 kernel: ACPI: Core revision 20230628
Apr 30 03:50:15.013648 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Apr 30 03:50:15.013653 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:50:15.013659 kernel: DMAR: Host address width 39
Apr 30 03:50:15.013664 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Apr 30 03:50:15.013670 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Apr 30 03:50:15.013675 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Apr 30 03:50:15.013680 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Apr 30 03:50:15.013686 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Apr 30 03:50:15.013692 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Apr 30 03:50:15.013697 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Apr 30 03:50:15.013702 kernel: x2apic enabled
Apr 30 03:50:15.013708 kernel: APIC: Switched APIC routing to: cluster x2apic
Apr 30 03:50:15.013713 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Apr 30 03:50:15.013719 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Apr 30 03:50:15.013724 kernel: CPU0: Thermal monitoring enabled (TM1)
Apr 30 03:50:15.013731 kernel: process: using mwait in idle threads
Apr 30 03:50:15.013736 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:50:15.013741 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:50:15.013746 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:50:15.013752 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 30 03:50:15.013757 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 30 03:50:15.013762 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 30 03:50:15.013767 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:50:15.013773 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Apr 30 03:50:15.013778 kernel: RETBleed: Mitigation: Enhanced IBRS
Apr 30 03:50:15.013783 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:50:15.013789 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:50:15.013795 kernel: TAA: Mitigation: TSX disabled
Apr 30 03:50:15.013800 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Apr 30 03:50:15.013805 kernel: SRBDS: Mitigation: Microcode
Apr 30 03:50:15.013811 kernel: GDS: Mitigation: Microcode
Apr 30 03:50:15.013816 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:50:15.013821 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:50:15.013826 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:50:15.013831 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 03:50:15.013837 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 03:50:15.013842 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:50:15.013848 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 03:50:15.013853 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 03:50:15.013859 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Apr 30 03:50:15.013864 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:50:15.013869 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:50:15.013875 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:50:15.013880 kernel: landlock: Up and running.
Apr 30 03:50:15.013885 kernel: SELinux: Initializing.
Apr 30 03:50:15.013890 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:50:15.013896 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:50:15.013901 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Apr 30 03:50:15.013906 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 03:50:15.013913 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 03:50:15.013918 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 03:50:15.013924 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Apr 30 03:50:15.013929 kernel: ... version: 4
Apr 30 03:50:15.013934 kernel: ... bit width: 48
Apr 30 03:50:15.013939 kernel: ... generic registers: 4
Apr 30 03:50:15.013945 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:50:15.013950 kernel: ... max period: 00007fffffffffff
Apr 30 03:50:15.013955 kernel: ... fixed-purpose events: 3
Apr 30 03:50:15.013962 kernel: ...
event mask: 000000070000000f Apr 30 03:50:15.013967 kernel: signal: max sigframe size: 2032 Apr 30 03:50:15.013972 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Apr 30 03:50:15.013978 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:50:15.013983 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:50:15.013988 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Apr 30 03:50:15.013994 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:50:15.013999 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:50:15.014004 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Apr 30 03:50:15.014011 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 30 03:50:15.014016 kernel: smp: Brought up 1 node, 16 CPUs Apr 30 03:50:15.014021 kernel: smpboot: Max logical packages: 1 Apr 30 03:50:15.014027 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Apr 30 03:50:15.014032 kernel: devtmpfs: initialized Apr 30 03:50:15.014038 kernel: x86/mm: Memory block size: 128MB Apr 30 03:50:15.014043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes) Apr 30 03:50:15.014048 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Apr 30 03:50:15.014055 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:50:15.014060 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 30 03:50:15.014065 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:50:15.014071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:50:15.014076 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:50:15.014081 kernel: audit: type=2000 audit(1745985009.038:1): state=initialized 
audit_enabled=0 res=1 Apr 30 03:50:15.014086 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:50:15.014092 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:50:15.014097 kernel: cpuidle: using governor menu Apr 30 03:50:15.014103 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:50:15.014108 kernel: dca service started, version 1.12.1 Apr 30 03:50:15.014114 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Apr 30 03:50:15.014119 kernel: PCI: Using configuration type 1 for base access Apr 30 03:50:15.014124 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Apr 30 03:50:15.014130 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 03:50:15.014135 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:50:15.014140 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:50:15.014146 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:50:15.014152 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:50:15.014157 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:50:15.014162 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:50:15.014168 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:50:15.014173 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:50:15.014178 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Apr 30 03:50:15.014184 kernel: ACPI: Dynamic OEM Table Load: Apr 30 03:50:15.014189 kernel: ACPI: SSDT 0xFFFF889540E3F400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Apr 30 03:50:15.014194 kernel: ACPI: Dynamic OEM Table Load: Apr 30 03:50:15.014200 kernel: ACPI: SSDT 0xFFFF889541E0B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Apr 30 03:50:15.014206 kernel: ACPI: Dynamic OEM Table Load: Apr 30 03:50:15.014211 kernel: ACPI: SSDT 
0xFFFF889540DE4000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Apr 30 03:50:15.014216 kernel: ACPI: Dynamic OEM Table Load: Apr 30 03:50:15.014221 kernel: ACPI: SSDT 0xFFFF889541E0D000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Apr 30 03:50:15.014227 kernel: ACPI: Dynamic OEM Table Load: Apr 30 03:50:15.014232 kernel: ACPI: SSDT 0xFFFF889540E53000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Apr 30 03:50:15.014237 kernel: ACPI: Dynamic OEM Table Load: Apr 30 03:50:15.014243 kernel: ACPI: SSDT 0xFFFF889540E3B000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Apr 30 03:50:15.014248 kernel: ACPI: _OSC evaluated successfully for all CPUs Apr 30 03:50:15.014254 kernel: ACPI: Interpreter enabled Apr 30 03:50:15.014259 kernel: ACPI: PM: (supports S0 S5) Apr 30 03:50:15.014265 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:50:15.014270 kernel: HEST: Enabling Firmware First mode for corrected errors. Apr 30 03:50:15.014275 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Apr 30 03:50:15.014280 kernel: HEST: Table parsing has been initialized. Apr 30 03:50:15.014286 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Apr 30 03:50:15.014291 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:50:15.014296 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 03:50:15.014303 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Apr 30 03:50:15.014308 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Apr 30 03:50:15.014314 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Apr 30 03:50:15.014339 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Apr 30 03:50:15.014345 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Apr 30 03:50:15.014364 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Apr 30 03:50:15.014369 kernel: ACPI: \_TZ_.FN00: New power resource Apr 30 03:50:15.014375 kernel: ACPI: \_TZ_.FN01: New power resource Apr 30 03:50:15.014380 kernel: ACPI: \_TZ_.FN02: New power resource Apr 30 03:50:15.014386 kernel: ACPI: \_TZ_.FN03: New power resource Apr 30 03:50:15.014392 kernel: ACPI: \_TZ_.FN04: New power resource Apr 30 03:50:15.014397 kernel: ACPI: \PIN_: New power resource Apr 30 03:50:15.014402 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Apr 30 03:50:15.014475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 03:50:15.014529 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Apr 30 03:50:15.014575 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Apr 30 03:50:15.014585 kernel: PCI host bridge to bus 0000:00 Apr 30 03:50:15.014635 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 03:50:15.014678 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 03:50:15.014721 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 03:50:15.014762 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Apr 30 03:50:15.014804 kernel: pci_bus 0000:00: root bus resource 
[mem 0xfc800000-0xfe7fffff window] Apr 30 03:50:15.014845 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Apr 30 03:50:15.014906 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Apr 30 03:50:15.014962 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Apr 30 03:50:15.015011 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.015063 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Apr 30 03:50:15.015110 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Apr 30 03:50:15.015161 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Apr 30 03:50:15.015211 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Apr 30 03:50:15.015264 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Apr 30 03:50:15.015310 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Apr 30 03:50:15.015361 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Apr 30 03:50:15.015413 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Apr 30 03:50:15.015461 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Apr 30 03:50:15.015511 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Apr 30 03:50:15.015562 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Apr 30 03:50:15.015608 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 03:50:15.015662 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Apr 30 03:50:15.015709 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 03:50:15.015761 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Apr 30 03:50:15.015811 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Apr 30 03:50:15.015859 kernel: pci 0000:00:16.0: PME# supported from D3hot Apr 30 03:50:15.015910 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Apr 30 03:50:15.015964 kernel: pci 0000:00:16.1: reg 0x10: 
[mem 0x95519000-0x95519fff 64bit] Apr 30 03:50:15.016014 kernel: pci 0000:00:16.1: PME# supported from D3hot Apr 30 03:50:15.016064 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Apr 30 03:50:15.016112 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Apr 30 03:50:15.016161 kernel: pci 0000:00:16.4: PME# supported from D3hot Apr 30 03:50:15.016212 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Apr 30 03:50:15.016259 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Apr 30 03:50:15.016306 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Apr 30 03:50:15.016391 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Apr 30 03:50:15.016440 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Apr 30 03:50:15.016486 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Apr 30 03:50:15.016536 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Apr 30 03:50:15.016583 kernel: pci 0000:00:17.0: PME# supported from D3hot Apr 30 03:50:15.016638 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Apr 30 03:50:15.016686 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.016741 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Apr 30 03:50:15.016793 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.016846 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Apr 30 03:50:15.016895 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.016947 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Apr 30 03:50:15.016996 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.017050 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Apr 30 03:50:15.017099 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.017149 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Apr 30 03:50:15.017197 kernel: pci 0000:00:1e.0: reg 
0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 03:50:15.017247 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Apr 30 03:50:15.017299 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Apr 30 03:50:15.017374 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Apr 30 03:50:15.017435 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Apr 30 03:50:15.017490 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Apr 30 03:50:15.017538 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Apr 30 03:50:15.017592 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Apr 30 03:50:15.017641 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Apr 30 03:50:15.017691 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Apr 30 03:50:15.017742 kernel: pci 0000:01:00.0: PME# supported from D3cold Apr 30 03:50:15.017791 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 03:50:15.017839 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 03:50:15.017894 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Apr 30 03:50:15.017944 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Apr 30 03:50:15.017992 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Apr 30 03:50:15.018041 kernel: pci 0000:01:00.1: PME# supported from D3cold Apr 30 03:50:15.018092 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 03:50:15.018141 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 03:50:15.018188 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 03:50:15.018237 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 03:50:15.018283 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 03:50:15.018335 
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 03:50:15.018388 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Apr 30 03:50:15.018441 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Apr 30 03:50:15.018491 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 30 03:50:15.018540 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Apr 30 03:50:15.018588 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 30 03:50:15.018637 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.018686 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 03:50:15.018733 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 03:50:15.018783 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 03:50:15.018837 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 30 03:50:15.018887 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 30 03:50:15.018937 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 30 03:50:15.018986 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Apr 30 03:50:15.019035 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 30 03:50:15.019084 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 30 03:50:15.019132 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 03:50:15.019182 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 03:50:15.019231 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 03:50:15.019278 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 03:50:15.019335 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Apr 30 03:50:15.019385 kernel: pci 0000:06:00.0: enabling Extended Tags Apr 30 03:50:15.019435 kernel: pci 0000:06:00.0: supports D1 D2 Apr 30 03:50:15.019484 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 03:50:15.019535 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Apr 30 03:50:15.019583 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.019631 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.019686 kernel: pci_bus 0000:07: extended config space not accessible Apr 30 03:50:15.019742 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Apr 30 03:50:15.019794 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Apr 30 03:50:15.019845 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 30 03:50:15.019898 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Apr 30 03:50:15.019950 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 03:50:15.020000 kernel: pci 0000:07:00.0: supports D1 D2 Apr 30 03:50:15.020052 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 03:50:15.020100 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 30 03:50:15.020149 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.020199 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.020210 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 30 03:50:15.020217 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 30 03:50:15.020222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 30 03:50:15.020228 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 30 03:50:15.020234 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 30 03:50:15.020240 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 30 03:50:15.020245 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 30 03:50:15.020251 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 30 03:50:15.020256 kernel: iommu: Default domain type: Translated Apr 30 03:50:15.020262 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:50:15.020269 kernel: PCI: Using ACPI for IRQ 
routing Apr 30 03:50:15.020275 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:50:15.020280 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 30 03:50:15.020286 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] Apr 30 03:50:15.020291 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Apr 30 03:50:15.020297 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Apr 30 03:50:15.020302 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 30 03:50:15.020307 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 30 03:50:15.020376 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Apr 30 03:50:15.020432 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Apr 30 03:50:15.020483 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 03:50:15.020491 kernel: vgaarb: loaded Apr 30 03:50:15.020497 kernel: clocksource: Switched to clocksource tsc-early Apr 30 03:50:15.020503 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:50:15.020509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:50:15.020515 kernel: pnp: PnP ACPI init Apr 30 03:50:15.020565 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 30 03:50:15.020616 kernel: pnp 00:02: [dma 0 disabled] Apr 30 03:50:15.020665 kernel: pnp 00:03: [dma 0 disabled] Apr 30 03:50:15.020716 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 30 03:50:15.020760 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 30 03:50:15.020808 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 30 03:50:15.020855 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 30 03:50:15.020902 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Apr 30 03:50:15.020946 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 30 03:50:15.020990 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Apr 30 03:50:15.021036 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 30 03:50:15.021082 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Apr 30 03:50:15.021126 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 30 03:50:15.021172 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 30 03:50:15.021222 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 30 03:50:15.021267 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 30 03:50:15.021310 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 30 03:50:15.021407 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 30 03:50:15.021450 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 30 03:50:15.021492 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 30 03:50:15.021535 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 30 03:50:15.021583 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 30 03:50:15.021592 kernel: pnp: PnP ACPI: found 10 devices Apr 30 03:50:15.021598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:50:15.021604 kernel: NET: Registered PF_INET protocol family Apr 30 03:50:15.021610 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:50:15.021616 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 30 03:50:15.021622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:50:15.021628 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:50:15.021635 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 03:50:15.021641 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 30 03:50:15.021646 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 03:50:15.021652 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 03:50:15.021659 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:50:15.021664 kernel: NET: Registered PF_XDP protocol family Apr 30 03:50:15.021713 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 30 03:50:15.021760 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 30 03:50:15.021810 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 30 03:50:15.021862 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 03:50:15.021911 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 03:50:15.021960 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 03:50:15.022009 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 03:50:15.022057 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 03:50:15.022105 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 03:50:15.022152 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 03:50:15.022199 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 03:50:15.022250 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 03:50:15.022296 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 03:50:15.022371 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 03:50:15.022432 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 03:50:15.022483 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 03:50:15.022530 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 03:50:15.022577 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 03:50:15.022626 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Apr 30 03:50:15.022674 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.022723 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.022769 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 30 03:50:15.022817 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 03:50:15.022864 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 03:50:15.022912 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 30 03:50:15.022954 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:50:15.022997 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:50:15.023039 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:50:15.023081 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 30 03:50:15.023122 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 30 03:50:15.023171 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Apr 30 03:50:15.023215 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 03:50:15.023269 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Apr 30 03:50:15.023312 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Apr 30 03:50:15.023396 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 30 03:50:15.023441 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Apr 30 03:50:15.023488 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Apr 30 03:50:15.023533 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Apr 30 03:50:15.023583 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 30 03:50:15.023629 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 30 03:50:15.023637 kernel: PCI: CLS 64 bytes, default 64 Apr 30 03:50:15.023644 kernel: DMAR: No ATSR found Apr 30 03:50:15.023649 kernel: DMAR: No SATC 
found Apr 30 03:50:15.023655 kernel: DMAR: dmar0: Using Queued invalidation Apr 30 03:50:15.023701 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 30 03:50:15.023750 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 30 03:50:15.023800 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 30 03:50:15.023848 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 30 03:50:15.023895 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 30 03:50:15.023943 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 30 03:50:15.023989 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 30 03:50:15.024036 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 30 03:50:15.024082 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 30 03:50:15.024129 kernel: pci 0000:00:16.1: Adding to iommu group 6 Apr 30 03:50:15.024179 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 30 03:50:15.024225 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 30 03:50:15.024273 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 30 03:50:15.024341 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 30 03:50:15.024405 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 30 03:50:15.024453 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 30 03:50:15.024501 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Apr 30 03:50:15.024547 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 30 03:50:15.024598 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 30 03:50:15.024644 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 30 03:50:15.024692 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 30 03:50:15.024740 kernel: pci 0000:01:00.0: Adding to iommu group 1 Apr 30 03:50:15.024789 kernel: pci 0000:01:00.1: Adding to iommu group 1 Apr 30 03:50:15.024838 kernel: pci 0000:03:00.0: Adding to iommu group 15 Apr 30 03:50:15.024886 kernel: pci 0000:04:00.0: Adding to iommu group 16 Apr 30 03:50:15.024935 kernel: pci 0000:06:00.0: Adding to iommu group 17 Apr 30 
03:50:15.024986 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 30 03:50:15.024995 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 30 03:50:15.025001 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 03:50:15.025007 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Apr 30 03:50:15.025012 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 30 03:50:15.025018 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 30 03:50:15.025024 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 30 03:50:15.025029 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Apr 30 03:50:15.025080 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 30 03:50:15.025091 kernel: Initialise system trusted keyrings Apr 30 03:50:15.025096 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 30 03:50:15.025102 kernel: Key type asymmetric registered Apr 30 03:50:15.025108 kernel: Asymmetric key parser 'x509' registered Apr 30 03:50:15.025113 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:50:15.025119 kernel: io scheduler mq-deadline registered Apr 30 03:50:15.025124 kernel: io scheduler kyber registered Apr 30 03:50:15.025130 kernel: io scheduler bfq registered
Apr 30 03:50:15.025176 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 30 03:50:15.025227 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Apr 30 03:50:15.025274 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Apr 30 03:50:15.025345 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Apr 30 03:50:15.025409 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Apr 30 03:50:15.025458 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Apr 30 03:50:15.025511 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 30 03:50:15.025519 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Apr 30 03:50:15.025527 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Apr 30 03:50:15.025533 kernel: pstore: Using crash dump compression: deflate Apr 30 03:50:15.025539 kernel: pstore: Registered erst as persistent store backend Apr 30 03:50:15.025544 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:50:15.025550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:50:15.025556 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:50:15.025561 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 03:50:15.025567 kernel: hpet_acpi_add: no address or irqs in _CRS
Apr 30 03:50:15.025617 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 30 03:50:15.025627 kernel: i8042: PNP: No PS/2 controller found. Apr 30 03:50:15.025672 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 30 03:50:15.025716 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 30 03:50:15.025760 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-04-30T03:50:13 UTC (1745985013) Apr 30 03:50:15.025803 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Apr 30 03:50:15.025811 kernel: intel_pstate: Intel P-state driver initializing Apr 30 03:50:15.025817 kernel: intel_pstate: Disabling energy efficiency optimization Apr 30 03:50:15.025824 kernel: intel_pstate: HWP enabled Apr 30 03:50:15.025830 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Apr 30 03:50:15.025836 kernel: vesafb: scrolling: redraw Apr 30 03:50:15.025841 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Apr 30 03:50:15.025847 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000014f5c3b6, using 768k, total 768k Apr 30 03:50:15.025853 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 03:50:15.025858 kernel: fb0: VESA VGA frame buffer device
Apr 30 03:50:15.025864 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:50:15.025870 kernel: Segment Routing with IPv6 Apr 30 03:50:15.025877 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:50:15.025882 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:50:15.025888 kernel: Key type dns_resolver registered Apr 30 03:50:15.025893 kernel: microcode: Current revision: 0x00000102 Apr 30 03:50:15.025899 kernel: microcode: Microcode Update Driver: v2.2. Apr 30 03:50:15.025905 kernel: IPI shorthand broadcast: enabled Apr 30 03:50:15.025910 kernel: sched_clock: Marking stable (2483172549, 1379259298)->(4405732549, -543300702) Apr 30 03:50:15.025916 kernel: registered taskstats version 1
Apr 30 03:50:15.025922 kernel: Loading compiled-in X.509 certificates Apr 30 03:50:15.025928 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:50:15.025934 kernel: Key type .fscrypt registered Apr 30 03:50:15.025939 kernel: Key type fscrypt-provisioning registered Apr 30 03:50:15.025945 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:50:15.025951 kernel: ima: No architecture policies found Apr 30 03:50:15.025956 kernel: clk: Disabling unused clocks
Apr 30 03:50:15.025962 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:50:15.025968 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:50:15.025973 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:50:15.025980 kernel: Run /init as init process Apr 30 03:50:15.025985 kernel: with arguments: Apr 30 03:50:15.025991 kernel: /init Apr 30 03:50:15.025997 kernel: with environment: Apr 30 03:50:15.026002 kernel: HOME=/ Apr 30 03:50:15.026008 kernel: TERM=linux Apr 30 03:50:15.026013 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:50:15.026020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:50:15.026028 systemd[1]: Detected architecture x86-64. Apr 30 03:50:15.026034 systemd[1]: Running in initrd. Apr 30 03:50:15.026040 systemd[1]: No hostname configured, using default hostname. Apr 30 03:50:15.026046 systemd[1]: Hostname set to . Apr 30 03:50:15.026051 systemd[1]: Initializing machine ID from random generator. Apr 30 03:50:15.026057 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:50:15.026063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:50:15.026069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:50:15.026076 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:50:15.026082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:50:15.026088 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:50:15.026094 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:50:15.026100 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:50:15.026107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:50:15.026113 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Apr 30 03:50:15.026119 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Apr 30 03:50:15.026125 kernel: clocksource: Switched to clocksource tsc Apr 30 03:50:15.026131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:50:15.026137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:50:15.026143 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:50:15.026148 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:50:15.026155 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:50:15.026160 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:50:15.026167 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:50:15.026173 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:50:15.026179 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:50:15.026185 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:50:15.026191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:50:15.026197 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:50:15.026203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:50:15.026209 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:50:15.026214 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:50:15.026221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:50:15.026227 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:50:15.026233 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:50:15.026239 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:50:15.026255 systemd-journald[268]: Collecting audit messages is disabled. Apr 30 03:50:15.026270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 30 03:50:15.026277 systemd-journald[268]: Journal started Apr 30 03:50:15.026290 systemd-journald[268]: Runtime Journal (/run/log/journal/93244c3b1a684a1a9674416c963f9255) is 8.0M, max 639.9M, 631.9M free. Apr 30 03:50:15.049304 systemd-modules-load[270]: Inserted module 'overlay' Apr 30 03:50:15.080358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:50:15.122365 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:50:15.122382 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:50:15.141127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:50:15.141217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:50:15.141303 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:50:15.142291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:50:15.160268 systemd-modules-load[270]: Inserted module 'br_netfilter' Apr 30 03:50:15.160321 kernel: Bridge firewalling registered Apr 30 03:50:15.160679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:50:15.227855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:50:15.248024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:50:15.276091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:50:15.285707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:50:15.333613 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:50:15.347098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 30 03:50:15.348810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:50:15.356371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:50:15.356625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:50:15.357500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:50:15.360563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:50:15.361106 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:50:15.374693 systemd-resolved[307]: Positive Trust Anchors: Apr 30 03:50:15.374697 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:50:15.374721 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:50:15.376271 systemd-resolved[307]: Defaulting to hostname 'linux'. Apr 30 03:50:15.398612 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:50:15.398676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 30 03:50:15.530402 dracut-cmdline[310]: dracut-dracut-053 Apr 30 03:50:15.537564 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:50:15.742369 kernel: SCSI subsystem initialized Apr 30 03:50:15.765367 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:50:15.788348 kernel: iscsi: registered transport (tcp) Apr 30 03:50:15.819821 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:50:15.819839 kernel: QLogic iSCSI HBA Driver Apr 30 03:50:15.853222 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:50:15.877589 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:50:15.934689 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:50:15.934709 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:50:15.954513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:50:16.013403 kernel: raid6: avx2x4 gen() 51897 MB/s Apr 30 03:50:16.045348 kernel: raid6: avx2x2 gen() 52352 MB/s Apr 30 03:50:16.082112 kernel: raid6: avx2x1 gen() 43942 MB/s Apr 30 03:50:16.082131 kernel: raid6: using algorithm avx2x2 gen() 52352 MB/s Apr 30 03:50:16.129978 kernel: raid6: .... xor() 30554 MB/s, rmw enabled Apr 30 03:50:16.129996 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:50:16.171380 kernel: xor: automatically using best checksumming function avx
Apr 30 03:50:16.284351 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:50:16.290117 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:50:16.311486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:50:16.318629 systemd-udevd[497]: Using default interface naming scheme 'v255'. Apr 30 03:50:16.322475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:50:16.353600 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:50:16.401304 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Apr 30 03:50:16.418957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:50:16.446611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:50:16.505943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:50:16.550961 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 03:50:16.550981 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 03:50:16.520472 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:50:16.567324 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:50:16.569705 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:50:16.630591 kernel: ACPI: bus type USB registered Apr 30 03:50:16.630603 kernel: usbcore: registered new interface driver usbfs Apr 30 03:50:16.630611 kernel: usbcore: registered new interface driver hub Apr 30 03:50:16.630618 kernel: usbcore: registered new device driver usb Apr 30 03:50:16.569742 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:50:16.683424 kernel: PTP clock support registered Apr 30 03:50:16.683461 kernel: libata version 3.00 loaded. Apr 30 03:50:16.683474 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 03:50:16.865707 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 30 03:50:16.865812 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:50:16.865822 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 30 03:50:16.865893 kernel: AES CTR mode by8 optimization enabled Apr 30 03:50:16.865902 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 03:50:16.865963 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 30 03:50:16.866024 kernel: ahci 0000:00:17.0: version 3.0 Apr 30 03:50:16.866088 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 30 03:50:16.866147 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Apr 30 03:50:16.866207 kernel: hub 1-0:1.0: USB hub found Apr 30 03:50:16.866275 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 30 03:50:16.866346 kernel: hub 1-0:1.0: 16 ports detected Apr 30 03:50:16.866407 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 30 03:50:16.866416 kernel: hub 2-0:1.0: USB hub found Apr 30 03:50:16.866481 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Apr 30 03:50:16.866489 kernel: scsi host0: ahci Apr 30 03:50:16.866548 kernel: scsi host1: ahci Apr 30 03:50:16.866606 kernel: scsi host2: ahci Apr 30 03:50:16.866667 kernel: scsi host3: ahci Apr 30 03:50:16.866725 kernel: scsi host4: ahci Apr 30 03:50:16.866782 kernel: scsi host5: ahci Apr 30 03:50:16.866839 kernel: scsi host6: ahci
Apr 30 03:50:16.866894 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Apr 30 03:50:16.866902 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Apr 30 03:50:16.866912 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Apr 30 03:50:16.866919 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Apr 30 03:50:16.866926 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Apr 30 03:50:16.866933 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Apr 30 03:50:16.866940 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128
Apr 30 03:50:16.866947 kernel: hub 2-0:1.0: 10 ports detected Apr 30 03:50:16.867005 kernel: igb 0000:03:00.0: added PHC on eth0 Apr 30 03:50:17.167336 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 30 03:50:17.167360 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 03:50:17.167437 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.167446 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d4 Apr 30 03:50:17.167511 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.167520 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Apr 30 03:50:17.167583 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.167591 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Apr 30 03:50:17.167652 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:16.683430 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:50:17.480102 kernel: igb 0000:04:00.0: added PHC on eth1 Apr 30 03:50:17.480191 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 03:50:17.480200 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 03:50:17.480267 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 03:50:17.480276 kernel: hub 1-14:1.0: USB hub found Apr 30 03:50:17.480354 kernel: hub 1-14:1.0: 4 ports detected
Apr 30 03:50:17.480423 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:e6:d5 Apr 30 03:50:17.480490 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 03:50:17.480499 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Apr 30 03:50:17.480562 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 03:50:17.480572 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Apr 30 03:50:17.480633 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 03:50:17.480645 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Apr 30 03:50:18.001158 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 03:50:18.001182 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 03:50:18.001379 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 03:50:18.001393 kernel: ata2.00: Features: NCQ-prio Apr 30 03:50:18.001407 kernel: ata1.00: Features: NCQ-prio Apr 30 03:50:18.001418 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 30 03:50:18.001627 kernel: ata2.00: configured for UDMA/133 Apr 30 03:50:18.001642 kernel: ata1.00: configured for UDMA/133
Apr 30 03:50:18.001654 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 03:50:18.001850 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 03:50:18.002044 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Apr 30 03:50:18.002476 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Apr 30 03:50:18.002628 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 03:50:18.002642 kernel: ata2.00: Enabling discard_zeroes_data
Apr 30 03:50:18.002653 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 03:50:18.002820 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 03:50:18.002969 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 03:50:18.003110 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Apr 30 03:50:18.003260 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 03:50:18.003447 kernel: sd 1:0:0:0: [sdb] Write Protect is off Apr 30 03:50:18.003597 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Apr 30 03:50:18.003753 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 03:50:18.003906 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 30 03:50:18.004054 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 03:50:18.004196 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 03:50:18.004210 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Apr 30 03:50:18.004392 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Apr 30 03:50:18.004562 kernel: ata1.00: Enabling discard_zeroes_data
Apr 30 03:50:18.004580 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 03:50:18.004772 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.004809 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Apr 30 03:50:18.004968 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:50:18.005135 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:50:18.005161 kernel: GPT:9289727 != 937703087 Apr 30 03:50:18.005178 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:50:18.005191 kernel: GPT:9289727 != 937703087 Apr 30 03:50:18.005205 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:50:18.005217 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.005235 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Apr 30 03:50:18.005493 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 03:50:18.005647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (545) Apr 30 03:50:18.005713 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Apr 30 03:50:18.692929 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (555) Apr 30 03:50:18.692941 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 03:50:18.693031 kernel: usbcore: registered new interface driver usbhid Apr 30 03:50:18.693040 kernel: usbhid: USB HID core driver Apr 30 03:50:18.693048 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 30 03:50:18.693055 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 30 03:50:18.693146 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 30 03:50:18.693155 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 30 03:50:18.693232 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.693241 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.693248 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.693255 kernel: GPT:disk_guids don't match. Apr 30 03:50:18.693263 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 03:50:18.693342 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 03:50:18.693350 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Apr 30 03:50:18.693417 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.693425 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:18.693432 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:18.693439 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 03:50:16.683479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:50:16.683546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:50:18.723425 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Apr 30 03:50:16.740385 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:50:16.953009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:50:18.753565 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Apr 30 03:50:17.429725 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:50:17.508428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:50:17.552264 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:50:18.785540 disk-uuid[703]: Primary Header is updated. Apr 30 03:50:18.785540 disk-uuid[703]: Secondary Entries is updated. Apr 30 03:50:18.785540 disk-uuid[703]: Secondary Header is updated. Apr 30 03:50:17.552291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:50:17.609408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:50:17.624519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:50:17.646671 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:50:18.170163 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. 
Apr 30 03:50:18.184977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Apr 30 03:50:18.200002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Apr 30 03:50:18.214492 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 03:50:18.269395 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 03:50:18.324464 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:50:18.340789 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:50:18.372329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:50:19.493719 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 03:50:19.513996 disk-uuid[704]: The operation has completed successfully. Apr 30 03:50:19.522440 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 03:50:19.553172 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:50:19.553220 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:50:19.603624 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:50:19.641416 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:50:19.641484 sh[747]: Success Apr 30 03:50:19.676394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:50:19.695234 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:50:19.706649 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 03:50:19.750000 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:50:19.750021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:50:19.772333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:50:19.792591 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:50:19.811637 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:50:19.851322 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:50:19.852782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:50:19.861618 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:50:19.877596 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:50:19.895928 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:50:19.999370 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:50:19.999383 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:50:19.999391 kernel: BTRFS info (device sdb6): using free space tree Apr 30 03:50:19.999398 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 03:50:19.999404 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 03:50:20.023407 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:50:20.038645 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:50:20.039252 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:50:20.091250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 03:50:20.123579 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:50:20.134421 systemd-networkd[930]: lo: Link UP Apr 30 03:50:20.134423 systemd-networkd[930]: lo: Gained carrier Apr 30 03:50:20.150041 ignition[807]: Ignition 2.19.0 Apr 30 03:50:20.136795 systemd-networkd[930]: Enumeration completed Apr 30 03:50:20.150046 ignition[807]: Stage: fetch-offline Apr 30 03:50:20.136858 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:50:20.150063 ignition[807]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:50:20.137557 systemd-networkd[930]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:50:20.150068 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 03:50:20.152222 unknown[807]: fetched base config from "system" Apr 30 03:50:20.150119 ignition[807]: parsed url from cmdline: "" Apr 30 03:50:20.152227 unknown[807]: fetched user config from "system" Apr 30 03:50:20.150120 ignition[807]: no config URL provided Apr 30 03:50:20.153764 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:50:20.150123 ignition[807]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:50:20.160892 systemd[1]: Reached target network.target - Network. Apr 30 03:50:20.150145 ignition[807]: parsing config with SHA512: c142aa0eee95d3ad0ff765be0a335edd3d757a40309beeef5b262e395b1060dead7c3d91cbd6b17bb0476d3439dcc6ee2feaafb016152f480c158d6b7c885bf4 Apr 30 03:50:20.165198 systemd-networkd[930]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:50:20.152462 ignition[807]: fetch-offline: fetch-offline passed Apr 30 03:50:20.186490 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 30 03:50:20.152465 ignition[807]: POST message to Packet Timeline
Apr 30 03:50:20.193875 systemd-networkd[930]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:50:20.152467 ignition[807]: POST Status error: resource requires networking
Apr 30 03:50:20.194533 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:50:20.152502 ignition[807]: Ignition finished successfully
Apr 30 03:50:20.209976 ignition[944]: Ignition 2.19.0
Apr 30 03:50:20.209981 ignition[944]: Stage: kargs
Apr 30 03:50:20.415484 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Apr 30 03:50:20.411330 systemd-networkd[930]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:50:20.210093 ignition[944]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:50:20.210099 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Apr 30 03:50:20.210675 ignition[944]: kargs: kargs passed
Apr 30 03:50:20.210678 ignition[944]: POST message to Packet Timeline
Apr 30 03:50:20.210687 ignition[944]: GET https://metadata.packet.net/metadata: attempt #1
Apr 30 03:50:20.211166 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49688->[::1]:53: read: connection refused
Apr 30 03:50:20.412226 ignition[944]: GET https://metadata.packet.net/metadata: attempt #2
Apr 30 03:50:20.412664 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55350->[::1]:53: read: connection refused
Apr 30 03:50:20.708357 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Apr 30 03:50:20.710085 systemd-networkd[930]: eno1: Link UP
Apr 30 03:50:20.710244 systemd-networkd[930]: eno2: Link UP
Apr 30 03:50:20.710400 systemd-networkd[930]: enp1s0f0np0: Link UP
Apr 30 03:50:20.710576 systemd-networkd[930]: enp1s0f0np0: Gained carrier
Apr 30 03:50:20.723595 systemd-networkd[930]: enp1s0f1np1: Link UP
Apr 30 03:50:20.762676 systemd-networkd[930]: enp1s0f0np0: DHCPv4 address 147.75.90.203/31, gateway 147.75.90.202 acquired from 145.40.83.140
Apr 30 03:50:20.813135 ignition[944]: GET https://metadata.packet.net/metadata: attempt #3
Apr 30 03:50:20.814184 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33275->[::1]:53: read: connection refused
Apr 30 03:50:21.431815 systemd-networkd[930]: enp1s0f1np1: Gained carrier
Apr 30 03:50:21.614716 ignition[944]: GET https://metadata.packet.net/metadata: attempt #4
Apr 30 03:50:21.615916 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48034->[::1]:53: read: connection refused
Apr 30 03:50:22.327925 systemd-networkd[930]: enp1s0f0np0: Gained IPv6LL
Apr 30 03:50:23.217429 ignition[944]: GET https://metadata.packet.net/metadata: attempt #5
Apr 30 03:50:23.218676 ignition[944]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49032->[::1]:53: read: connection refused
Apr 30 03:50:23.479924 systemd-networkd[930]: enp1s0f1np1: Gained IPv6LL
Apr 30 03:50:26.421051 ignition[944]: GET https://metadata.packet.net/metadata: attempt #6
Apr 30 03:50:27.475442 ignition[944]: GET result: OK
Apr 30 03:50:27.867219 ignition[944]: Ignition finished successfully
Apr 30 03:50:27.872428 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:50:27.899592 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:50:27.906035 ignition[961]: Ignition 2.19.0
Apr 30 03:50:27.906039 ignition[961]: Stage: disks
Apr 30 03:50:27.906146 ignition[961]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:50:27.906152 ignition[961]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Apr 30 03:50:27.906671 ignition[961]: disks: disks passed
Apr 30 03:50:27.906674 ignition[961]: POST message to Packet Timeline
Apr 30 03:50:27.906682 ignition[961]: GET https://metadata.packet.net/metadata: attempt #1
Apr 30 03:50:29.290411 ignition[961]: GET result: OK
Apr 30 03:50:29.656918 ignition[961]: Ignition finished successfully
Apr 30 03:50:29.659832 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:50:29.675554 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:50:29.694596 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:50:29.715595 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:50:29.736633 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:50:29.754630 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:50:29.783593 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:50:29.817740 systemd-fsck[980]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:50:29.827994 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:50:29.853566 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:50:29.951329 kernel: EXT4-fs (sdb9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:50:29.951758 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:50:29.961748 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:50:29.997488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:50:30.028506 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (990)
Apr 30 03:50:30.005919 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:50:30.129528 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:50:30.129542 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:50:30.129550 kernel: BTRFS info (device sdb6): using free space tree
Apr 30 03:50:30.129557 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Apr 30 03:50:30.129564 kernel: BTRFS info (device sdb6): auto enabling async discard
Apr 30 03:50:30.029021 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:50:30.146775 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Apr 30 03:50:30.168400 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:50:30.205460 coreos-metadata[992]: Apr 30 03:50:30.189 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Apr 30 03:50:30.168417 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:50:30.202627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:50:30.262417 coreos-metadata[1008]: Apr 30 03:50:30.215 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Apr 30 03:50:30.213474 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:50:30.237595 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:50:30.293418 initrd-setup-root[1022]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:50:30.304428 initrd-setup-root[1029]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:50:30.314450 initrd-setup-root[1036]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:50:30.325572 initrd-setup-root[1043]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:50:30.324638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:50:30.331538 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:50:30.370778 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:50:30.413529 kernel: BTRFS info (device sdb6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:50:30.405090 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:50:30.421509 ignition[1114]: INFO : Ignition 2.19.0
Apr 30 03:50:30.421509 ignition[1114]: INFO : Stage: mount
Apr 30 03:50:30.421509 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:50:30.421509 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Apr 30 03:50:30.421509 ignition[1114]: INFO : mount: mount passed
Apr 30 03:50:30.421509 ignition[1114]: INFO : POST message to Packet Timeline
Apr 30 03:50:30.421509 ignition[1114]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Apr 30 03:50:30.430972 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:50:31.044468 coreos-metadata[992]: Apr 30 03:50:31.044 INFO Fetch successful
Apr 30 03:50:31.123395 coreos-metadata[992]: Apr 30 03:50:31.123 INFO wrote hostname ci-4081.3.3-a-1bdc449bef to /sysroot/etc/hostname
Apr 30 03:50:31.124737 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:50:31.198800 coreos-metadata[1008]: Apr 30 03:50:31.198 INFO Fetch successful
Apr 30 03:50:31.272556 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Apr 30 03:50:31.272638 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Apr 30 03:50:31.535075 ignition[1114]: INFO : GET result: OK
Apr 30 03:50:31.874948 ignition[1114]: INFO : Ignition finished successfully
Apr 30 03:50:31.877643 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:50:31.908566 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:50:31.912077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:50:31.987241 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1140)
Apr 30 03:50:31.987273 kernel: BTRFS info (device sdb6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:50:32.007475 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:50:32.025285 kernel: BTRFS info (device sdb6): using free space tree
Apr 30 03:50:32.062744 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Apr 30 03:50:32.062765 kernel: BTRFS info (device sdb6): auto enabling async discard
Apr 30 03:50:32.075517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:50:32.108176 ignition[1157]: INFO : Ignition 2.19.0
Apr 30 03:50:32.108176 ignition[1157]: INFO : Stage: files
Apr 30 03:50:32.123515 ignition[1157]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:50:32.123515 ignition[1157]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Apr 30 03:50:32.123515 ignition[1157]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:50:32.123515 ignition[1157]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:50:32.123515 ignition[1157]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:50:32.123515 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:50:32.123515 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Apr 30 03:50:32.112208 unknown[1157]: wrote ssh authorized keys file for user: core
Apr 30 03:50:32.254388 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:50:32.279843 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:50:32.279843 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:50:32.312552 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 03:50:32.881108 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 30 03:50:33.075578 ignition[1157]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:50:33.075578 ignition[1157]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:50:33.105625 ignition[1157]: INFO : files: files passed
Apr 30 03:50:33.105625 ignition[1157]: INFO : POST message to Packet Timeline
Apr 30 03:50:33.105625 ignition[1157]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Apr 30 03:50:34.024762 ignition[1157]: INFO : GET result: OK
Apr 30 03:50:34.384690 ignition[1157]: INFO : Ignition finished successfully
Apr 30 03:50:34.387634 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:50:34.418566 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:50:34.418985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:50:34.447693 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:50:34.447755 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:50:34.499609 initrd-setup-root-after-ignition[1194]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:50:34.499609 initrd-setup-root-after-ignition[1194]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:50:34.470867 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:50:34.537752 initrd-setup-root-after-ignition[1198]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:50:34.491750 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:50:34.523857 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:50:34.592841 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:50:34.592921 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:50:34.623056 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:50:34.632548 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:50:34.652781 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:50:34.660725 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:50:34.741425 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:50:34.774736 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:50:34.804751 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:50:34.816924 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:50:34.838015 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:50:34.855968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:50:34.856394 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:50:34.886032 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:50:34.907946 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:50:34.925942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:50:34.944939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:50:34.965950 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:50:34.986941 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:50:35.006934 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:50:35.027975 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:50:35.049959 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:50:35.069930 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:50:35.087822 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:50:35.088226 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:50:35.114037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:50:35.133956 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:50:35.154806 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:50:35.155235 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:50:35.176819 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:50:35.177219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:50:35.208023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:50:35.208510 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:50:35.228131 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:50:35.246801 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:50:35.247201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:50:35.267941 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:50:35.285894 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:50:35.304845 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:50:35.305151 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:50:35.325916 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:50:35.326216 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:50:35.348691 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:50:35.458468 ignition[1219]: INFO : Ignition 2.19.0
Apr 30 03:50:35.458468 ignition[1219]: INFO : Stage: umount
Apr 30 03:50:35.458468 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:50:35.458468 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Apr 30 03:50:35.458468 ignition[1219]: INFO : umount: umount passed
Apr 30 03:50:35.458468 ignition[1219]: INFO : POST message to Packet Timeline
Apr 30 03:50:35.458468 ignition[1219]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Apr 30 03:50:35.348858 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:50:35.368688 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:50:35.368846 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:50:35.386690 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:50:35.386853 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:50:35.419576 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:50:35.430995 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:50:35.448512 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:50:35.448615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:50:35.469592 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:50:35.469655 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:50:35.524935 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:50:35.525286 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:50:35.525339 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:50:35.530439 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:50:35.530485 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:50:36.401717 ignition[1219]: INFO : GET result: OK
Apr 30 03:50:36.715949 ignition[1219]: INFO : Ignition finished successfully
Apr 30 03:50:36.716959 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:50:36.717097 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:50:36.736139 systemd[1]: Stopped target network.target - Network.
Apr 30 03:50:36.751563 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:50:36.751827 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:50:36.769769 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:50:36.769909 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:50:36.787817 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:50:36.787976 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:50:36.805804 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:50:36.805968 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:50:36.823793 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:50:36.823964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:50:36.842195 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:50:36.858481 systemd-networkd[930]: enp1s0f0np0: DHCPv6 lease lost
Apr 30 03:50:36.859888 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:50:36.863511 systemd-networkd[930]: enp1s0f1np1: DHCPv6 lease lost
Apr 30 03:50:36.879580 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:50:36.879943 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:50:36.898512 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:50:36.898860 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:50:36.919085 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:50:36.919201 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:50:36.952541 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:50:36.975479 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:50:36.975525 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:50:36.994642 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:50:36.994738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:50:37.014708 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:50:37.014861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:50:37.033741 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:50:37.033920 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:50:37.053988 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:50:37.075540 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:50:37.075909 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:50:37.105555 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:50:37.105701 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:50:37.112862 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:50:37.112970 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:50:37.140645 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:50:37.140807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:50:37.170982 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:50:37.171175 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:50:37.200814 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:50:37.201002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:50:37.242504 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:50:37.261384 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:50:37.261421 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:50:37.283437 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:50:37.283564 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:50:37.556432 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:50:37.305681 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:50:37.305801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:50:37.324605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:50:37.324781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:50:37.347624 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:50:37.347892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:50:37.408599 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:50:37.408879 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:50:37.427540 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:50:37.460714 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:50:37.486668 systemd[1]: Switching root.
Apr 30 03:50:37.661417 systemd-journald[268]: Journal stopped
Apr 30 03:50:40.233249 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:50:40.233263 kernel: SELinux: policy capability open_perms=1
Apr 30 03:50:40.233270 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:50:40.233276 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:50:40.233281 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:50:40.233286 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:50:40.233292 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:50:40.233297 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:50:40.233302 kernel: audit: type=1403 audit(1745985037.860:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:50:40.233309 systemd[1]: Successfully loaded SELinux policy in 161.171ms.
Apr 30 03:50:40.233321 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.859ms.
Apr 30 03:50:40.233328 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:50:40.233334 systemd[1]: Detected architecture x86-64.
Apr 30 03:50:40.233339 systemd[1]: Detected first boot.
Apr 30 03:50:40.233346 systemd[1]: Hostname set to .
Apr 30 03:50:40.233353 systemd[1]: Initializing machine ID from random generator.
Apr 30 03:50:40.233360 zram_generator::config[1266]: No configuration found.
Apr 30 03:50:40.233367 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:50:40.233373 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:50:40.233379 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:50:40.233385 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:50:40.233392 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:50:40.233399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:50:40.233405 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:50:40.233412 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:50:40.233418 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:50:40.233424 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:50:40.233431 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:50:40.233437 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:50:40.233444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:50:40.233451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:50:40.233457 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:50:40.233463 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:50:40.233470 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:50:40.233476 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:50:40.233482 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Apr 30 03:50:40.233489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:50:40.233496 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:50:40.233502 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:50:40.233508 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:50:40.233516 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:50:40.233523 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:50:40.233529 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:50:40.233536 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:50:40.233543 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:50:40.233550 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:50:40.233556 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:50:40.233563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:50:40.233569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:50:40.233576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:50:40.233583 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:50:40.233590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:50:40.233596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:50:40.233603 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:50:40.233610 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:40.233616 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:50:40.233623 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:50:40.233630 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:50:40.233637 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:50:40.233644 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:50:40.233650 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:50:40.233657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:50:40.233663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:50:40.233670 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:50:40.233676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:50:40.233683 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:50:40.233690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:50:40.233697 kernel: ACPI: bus type drm_connector registered
Apr 30 03:50:40.233703 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:50:40.233709 kernel: fuse: init (API version 7.39)
Apr 30 03:50:40.233715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:50:40.233722 kernel: loop: module loaded
Apr 30 03:50:40.233730 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:50:40.233737 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:50:40.233744 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:50:40.233751 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:50:40.233757 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:50:40.233764 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:50:40.233778 systemd-journald[1369]: Collecting audit messages is disabled.
Apr 30 03:50:40.233793 systemd-journald[1369]: Journal started
Apr 30 03:50:40.233807 systemd-journald[1369]: Runtime Journal (/run/log/journal/76313fe35b1d4040af1cc3ffb885a120) is 8.0M, max 639.9M, 631.9M free.
Apr 30 03:50:38.360026 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:50:38.376125 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
Apr 30 03:50:38.376385 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:50:40.261367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:50:40.297364 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:50:40.331370 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:50:40.366591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:50:40.400794 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:50:40.400823 systemd[1]: Stopped verity-setup.service.
Apr 30 03:50:40.463363 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:40.484526 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:50:40.494893 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:50:40.505599 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:50:40.515593 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:50:40.525571 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:50:40.535597 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:50:40.546566 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:50:40.557686 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:50:40.568760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:50:40.579968 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:50:40.580191 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:50:40.592188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:50:40.592567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:50:40.604279 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:50:40.604727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:50:40.615245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:50:40.615656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:50:40.627247 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:50:40.627758 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:50:40.638249 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:50:40.638768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:50:40.649287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:50:40.660231 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:50:40.672314 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:50:40.684347 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:50:40.719745 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:50:40.754707 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:50:40.768597 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:50:40.778588 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:50:40.778682 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:50:40.791528 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:50:40.812731 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:50:40.834199 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:50:40.843640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:50:40.845546 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:50:40.856391 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:50:40.867438 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:50:40.868059 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:50:40.871328 systemd-journald[1369]: Time spent on flushing to /var/log/journal/76313fe35b1d4040af1cc3ffb885a120 is 16.598ms for 1371 entries.
Apr 30 03:50:40.871328 systemd-journald[1369]: System Journal (/var/log/journal/76313fe35b1d4040af1cc3ffb885a120) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:50:40.912639 systemd-journald[1369]: Received client request to flush runtime journal.
Apr 30 03:50:40.885447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:50:40.886051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:50:40.894673 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:50:40.904172 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:50:40.915273 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:50:40.941321 kernel: loop0: detected capacity change from 0 to 218376
Apr 30 03:50:40.942171 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:50:40.954019 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Apr 30 03:50:40.954028 systemd-tmpfiles[1401]: ACLs are not supported, ignoring.
Apr 30 03:50:40.970612 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:50:40.979321 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:50:40.989547 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:50:41.000569 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:50:41.011540 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:50:41.026210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:50:41.039370 kernel: loop1: detected capacity change from 0 to 8
Apr 30 03:50:41.048514 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:50:41.062296 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:50:41.097325 kernel: loop2: detected capacity change from 0 to 140768
Apr 30 03:50:41.098576 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:50:41.110071 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:50:41.119903 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:50:41.120321 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:50:41.133063 udevadm[1404]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 03:50:41.138622 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:50:41.163454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:50:41.180324 kernel: loop3: detected capacity change from 0 to 142488
Apr 30 03:50:41.185447 systemd-tmpfiles[1423]: ACLs are not supported, ignoring.
Apr 30 03:50:41.185457 systemd-tmpfiles[1423]: ACLs are not supported, ignoring.
Apr 30 03:50:41.190602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:50:41.198023 ldconfig[1395]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:50:41.202532 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:50:41.256503 kernel: loop4: detected capacity change from 0 to 218376
Apr 30 03:50:41.293942 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:50:41.303394 kernel: loop5: detected capacity change from 0 to 8
Apr 30 03:50:41.323324 kernel: loop6: detected capacity change from 0 to 140768
Apr 30 03:50:41.336473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:50:41.354320 kernel: loop7: detected capacity change from 0 to 142488
Apr 30 03:50:41.363081 systemd-udevd[1431]: Using default interface naming scheme 'v255'.
Apr 30 03:50:41.366896 (sd-merge)[1429]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Apr 30 03:50:41.367128 (sd-merge)[1429]: Merged extensions into '/usr'.
Apr 30 03:50:41.369251 systemd[1]: Reloading requested from client PID 1400 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:50:41.369257 systemd[1]: Reloading...
Apr 30 03:50:41.411327 zram_generator::config[1487]: No configuration found.
Apr 30 03:50:41.411401 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1520)
Apr 30 03:50:41.447662 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Apr 30 03:50:41.447703 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:50:41.447726 kernel: ACPI: button: Sleep Button [SLPB]
Apr 30 03:50:41.496009 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 30 03:50:41.511222 kernel: IPMI message handler: version 39.2
Apr 30 03:50:41.514324 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Apr 30 03:50:41.597403 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:50:41.597421 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Apr 30 03:50:41.597524 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Apr 30 03:50:41.556882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:50:41.621450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Apr 30 03:50:41.635358 kernel: iTCO_vendor_support: vendor-support=0
Apr 30 03:50:41.635406 kernel: ipmi device interface
Apr 30 03:50:41.640510 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Apr 30 03:50:41.640769 systemd[1]: Reloading finished in 271 ms.
Apr 30 03:50:41.669362 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Apr 30 03:50:41.669482 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Apr 30 03:50:41.723729 kernel: ipmi_si: IPMI System Interface driver
Apr 30 03:50:41.723810 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Apr 30 03:50:41.769583 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Apr 30 03:50:41.769596 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Apr 30 03:50:41.769606 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Apr 30 03:50:41.841124 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Apr 30 03:50:41.841202 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Apr 30 03:50:41.841270 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Apr 30 03:50:41.841281 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Apr 30 03:50:41.890947 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Apr 30 03:50:41.901462 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Apr 30 03:50:41.901549 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Apr 30 03:50:41.955551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:50:41.966307 kernel: intel_rapl_common: Found RAPL domain package
Apr 30 03:50:41.966342 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Apr 30 03:50:41.966436 kernel: intel_rapl_common: Found RAPL domain core
Apr 30 03:50:42.009691 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Apr 30 03:50:42.009785 kernel: intel_rapl_common: Found RAPL domain dram
Apr 30 03:50:42.053322 kernel: ipmi_ssif: IPMI SSIF Interface driver
Apr 30 03:50:42.053529 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:50:42.074229 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:50:42.105429 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:50:42.112856 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:50:42.136414 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:50:42.142523 lvm[1609]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:50:42.148239 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:50:42.148837 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:50:42.149361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:50:42.167921 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 03:50:42.174781 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:50:42.174984 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:50:42.175515 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:50:42.175678 systemd-tmpfiles[1613]: ACLs are not supported, ignoring.
Apr 30 03:50:42.175714 systemd-tmpfiles[1613]: ACLs are not supported, ignoring.
Apr 30 03:50:42.177331 systemd-tmpfiles[1613]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:50:42.177334 systemd-tmpfiles[1613]: Skipping /boot
Apr 30 03:50:42.181628 systemd-tmpfiles[1613]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:50:42.181632 systemd-tmpfiles[1613]: Skipping /boot
Apr 30 03:50:42.187662 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:50:42.188805 systemd[1]: Reloading requested from client PID 1608 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:50:42.188811 systemd[1]: Reloading...
Apr 30 03:50:42.226327 zram_generator::config[1646]: No configuration found.
Apr 30 03:50:42.280245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:50:42.334336 systemd[1]: Reloading finished in 145 ms.
Apr 30 03:50:42.361576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:50:42.372581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:50:42.386170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:50:42.402520 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:50:42.413333 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:50:42.419993 augenrules[1722]: No rules
Apr 30 03:50:42.426093 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:50:42.449836 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:50:42.451781 lvm[1727]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:50:42.462440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:50:42.473941 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:50:42.485212 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:50:42.494988 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:50:42.504577 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 03:50:42.515648 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:50:42.525668 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:50:42.536625 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 03:50:42.541196 systemd-networkd[1612]: lo: Link UP
Apr 30 03:50:42.541199 systemd-networkd[1612]: lo: Gained carrier
Apr 30 03:50:42.543705 systemd-networkd[1612]: bond0: netdev ready
Apr 30 03:50:42.544622 systemd-networkd[1612]: Enumeration completed
Apr 30 03:50:42.547576 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:50:42.556641 systemd-networkd[1612]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:7b:88.network.
Apr 30 03:50:42.561035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:42.561186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:50:42.570914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:50:42.581021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:50:42.594046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:50:42.598177 systemd-resolved[1729]: Positive Trust Anchors:
Apr 30 03:50:42.598183 systemd-resolved[1729]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:50:42.598206 systemd-resolved[1729]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:50:42.600763 systemd-resolved[1729]: Using system hostname 'ci-4081.3.3-a-1bdc449bef'.
Apr 30 03:50:42.603462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:50:42.616052 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:50:42.628055 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 03:50:42.637407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:50:42.637506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:42.638637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:50:42.638712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:50:42.649724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:50:42.649797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:50:42.660673 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:50:42.660743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:50:42.670700 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 03:50:42.682368 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 03:50:42.696588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:42.696705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:50:42.709477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:50:42.720942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:50:42.731953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:50:42.742431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:50:42.742501 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:50:42.742551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:42.743075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:50:42.743148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:50:42.754647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:50:42.754717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:50:42.766601 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:50:42.766670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:50:42.778632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:42.778757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:50:42.789507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:50:42.799967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:50:42.809916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:50:42.820964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:50:42.830447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:50:42.830525 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:50:42.830578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:50:42.831161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:50:42.831231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:50:42.843604 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:50:42.843675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:50:42.853623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:50:42.853693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:50:42.864590 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:50:42.864657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:50:42.875271 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:50:42.884750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:50:42.884780 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:50:42.895477 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 03:50:42.930245 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 03:50:42.941425 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 03:50:43.203397 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Apr 30 03:50:43.226356 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Apr 30 03:50:43.226636 systemd-networkd[1612]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:7b:89.network.
Apr 30 03:50:43.455368 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Apr 30 03:50:43.477232 systemd-networkd[1612]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Apr 30 03:50:43.477395 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Apr 30 03:50:43.478694 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:50:43.479189 systemd-networkd[1612]: enp1s0f0np0: Link UP
Apr 30 03:50:43.479524 systemd-networkd[1612]: enp1s0f0np0: Gained carrier
Apr 30 03:50:43.498525 systemd[1]: Reached target network.target - Network.
Apr 30 03:50:43.500423 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Apr 30 03:50:43.507749 systemd-networkd[1612]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:7b:88.network.
Apr 30 03:50:43.508024 systemd-networkd[1612]: enp1s0f1np1: Link UP
Apr 30 03:50:43.508324 systemd-networkd[1612]: enp1s0f1np1: Gained carrier
Apr 30 03:50:43.508435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:50:43.520438 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:50:43.525702 systemd-networkd[1612]: bond0: Link UP
Apr 30 03:50:43.526110 systemd-networkd[1612]: bond0: Gained carrier
Apr 30 03:50:43.526404 systemd-timesyncd[1773]: Network configuration changed, trying to establish connection.
Apr 30 03:50:43.530543 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 03:50:43.541585 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 03:50:43.552932 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 03:50:43.562851 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 03:50:43.574387 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 03:50:43.593147 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 03:50:43.593163 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:50:43.604365 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Apr 30 03:50:43.604384 kernel: bond0: active interface up!
Apr 30 03:50:43.626399 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:50:43.634899 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 03:50:43.645141 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 03:50:43.655267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 03:50:43.664905 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 03:50:43.674445 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:50:43.684392 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:50:43.692457 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:50:43.692472 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:50:43.706430 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 03:50:43.725108 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 03:50:43.733374 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Apr 30 03:50:43.743065 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 03:50:43.751952 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 03:50:43.755787 coreos-metadata[1778]: Apr 30 03:50:43.755 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Apr 30 03:50:43.762064 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 03:50:43.763462 dbus-daemon[1779]: [system] SELinux support is enabled
Apr 30 03:50:43.763952 jq[1782]: false
Apr 30 03:50:43.771422 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 03:50:43.772081 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 03:50:43.780321 extend-filesystems[1784]: Found loop4
Apr 30 03:50:43.780321 extend-filesystems[1784]: Found loop5
Apr 30 03:50:43.846416 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Apr 30 03:50:43.846432 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1504)
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found loop6
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found loop7
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sda
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb1
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb2
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb3
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found usr
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb4
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb6
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb7
Apr 30 03:50:43.846450 extend-filesystems[1784]: Found sdb9
Apr 30 03:50:43.846450 extend-filesystems[1784]: Checking size of /dev/sdb9
Apr 30 03:50:43.846450 extend-filesystems[1784]: Resized partition /dev/sdb9
Apr 30 03:50:43.783202 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 03:50:44.002582 extend-filesystems[1794]: resize2fs 1.47.1 (20-May-2024)
Apr 30 03:50:43.829599 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 03:50:43.847115 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 03:50:43.867812 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 03:50:43.886205 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Apr 30 03:50:43.908129 systemd-logind[1804]: Watching system buttons on /dev/input/event3 (Power Button)
Apr 30 03:50:44.010974 update_engine[1809]: I20250430 03:50:43.933389 1809 main.cc:92] Flatcar Update Engine starting
Apr 30 03:50:44.010974 update_engine[1809]: I20250430 03:50:43.934031 1809 update_check_scheduler.cc:74] Next update check in 5m29s
Apr 30 03:50:43.908139 systemd-logind[1804]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 30 03:50:44.011137 jq[1810]: true
Apr 30 03:50:43.908149 systemd-logind[1804]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Apr 30 03:50:43.908393 systemd-logind[1804]: New seat seat0.
Apr 30 03:50:43.911648 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 03:50:43.924385 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 03:50:43.938929 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 03:50:43.960579 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 03:50:43.978547 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 03:50:44.009498 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 03:50:44.009596 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 03:50:44.009745 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 03:50:44.009837 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 03:50:44.028586 sshd_keygen[1807]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 03:50:44.041624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 03:50:44.041718 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 03:50:44.053541 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 03:50:44.066251 (ntainerd)[1822]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 03:50:44.067896 jq[1821]: true
Apr 30 03:50:44.069419 dbus-daemon[1779]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 03:50:44.070818 tar[1812]: linux-amd64/LICENSE
Apr 30 03:50:44.070935 tar[1812]: linux-amd64/helm
Apr 30 03:50:44.076620 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Apr 30 03:50:44.076720 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Apr 30 03:50:44.078671 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 03:50:44.090599 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 03:50:44.098387 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 03:50:44.098485 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 03:50:44.109472 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 03:50:44.109553 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 03:50:44.117304 bash[1849]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:50:44.127496 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 03:50:44.139296 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 03:50:44.150650 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 03:50:44.150747 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 03:50:44.151017 locksmithd[1857]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 03:50:44.171531 systemd[1]: Starting sshkeys.service...
Apr 30 03:50:44.179172 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 03:50:44.191201 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 03:50:44.203205 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 03:50:44.214669 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 03:50:44.226062 coreos-metadata[1871]: Apr 30 03:50:44.226 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Apr 30 03:50:44.227921 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 03:50:44.237231 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Apr 30 03:50:44.238614 containerd[1822]: time="2025-04-30T03:50:44.238553190Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 03:50:44.246534 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 03:50:44.251607 containerd[1822]: time="2025-04-30T03:50:44.251585333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252369 containerd[1822]: time="2025-04-30T03:50:44.252351570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252369 containerd[1822]: time="2025-04-30T03:50:44.252366994Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 03:50:44.252415 containerd[1822]: time="2025-04-30T03:50:44.252376753Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 03:50:44.252470 containerd[1822]: time="2025-04-30T03:50:44.252462188Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 03:50:44.252487 containerd[1822]: time="2025-04-30T03:50:44.252472464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252514 containerd[1822]: time="2025-04-30T03:50:44.252505945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252529 containerd[1822]: time="2025-04-30T03:50:44.252515094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252706 containerd[1822]: time="2025-04-30T03:50:44.252616863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252706 containerd[1822]: time="2025-04-30T03:50:44.252626408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252706 containerd[1822]: time="2025-04-30T03:50:44.252633799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252706 containerd[1822]: time="2025-04-30T03:50:44.252639288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252706 containerd[1822]: time="2025-04-30T03:50:44.252678913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252803 containerd[1822]: time="2025-04-30T03:50:44.252792333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252858 containerd[1822]: time="2025-04-30T03:50:44.252849462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:50:44.252874 containerd[1822]: time="2025-04-30T03:50:44.252858347Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 03:50:44.252910 containerd[1822]: time="2025-04-30T03:50:44.252903467Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 03:50:44.252937 containerd[1822]: time="2025-04-30T03:50:44.252930749Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 03:50:44.264432 containerd[1822]: time="2025-04-30T03:50:44.264421911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 03:50:44.264461 containerd[1822]: time="2025-04-30T03:50:44.264443022Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 03:50:44.264461 containerd[1822]: time="2025-04-30T03:50:44.264452836Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 03:50:44.264491 containerd[1822]: time="2025-04-30T03:50:44.264461409Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 03:50:44.264491 containerd[1822]: time="2025-04-30T03:50:44.264469840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 03:50:44.264576 containerd[1822]: time="2025-04-30T03:50:44.264543151Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 03:50:44.264705 containerd[1822]: time="2025-04-30T03:50:44.264666116Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 03:50:44.264738 containerd[1822]: time="2025-04-30T03:50:44.264720538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 03:50:44.264738 containerd[1822]: time="2025-04-30T03:50:44.264731782Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 03:50:44.264767 containerd[1822]: time="2025-04-30T03:50:44.264739488Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 03:50:44.264767 containerd[1822]: time="2025-04-30T03:50:44.264747614Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264767 containerd[1822]: time="2025-04-30T03:50:44.264754567Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264767 containerd[1822]: time="2025-04-30T03:50:44.264761335Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264819 containerd[1822]: time="2025-04-30T03:50:44.264770989Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264819 containerd[1822]: time="2025-04-30T03:50:44.264778921Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264819 containerd[1822]: time="2025-04-30T03:50:44.264785891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264819 containerd[1822]: time="2025-04-30T03:50:44.264792362Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264819 containerd[1822]: time="2025-04-30T03:50:44.264798782Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 03:50:44.264819 containerd[1822]: time="2025-04-30T03:50:44.264810048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264819 containerd[1822]: time="2025-04-30T03:50:44.264817558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264827616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264835314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264842012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264849497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264856132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264862917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264869997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264878046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264884305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264890566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264897642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.264908 containerd[1822]: time="2025-04-30T03:50:44.264905805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264916703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264923169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264931323Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264955953Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264967598Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264973729Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264980432Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.264997586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.265005194Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.265010897Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 03:50:44.265065 containerd[1822]: time="2025-04-30T03:50:44.265016411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 03:50:44.265213 containerd[1822]: time="2025-04-30T03:50:44.265183448Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 03:50:44.265286 containerd[1822]: time="2025-04-30T03:50:44.265222506Z" level=info msg="Connect containerd service"
Apr 30 03:50:44.265286 containerd[1822]: time="2025-04-30T03:50:44.265240899Z" level=info msg="using legacy CRI server"
Apr 30 03:50:44.265286 containerd[1822]: time="2025-04-30T03:50:44.265245300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 03:50:44.265341 containerd[1822]: time="2025-04-30T03:50:44.265292842Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 03:50:44.265916 containerd[1822]: time="2025-04-30T03:50:44.265870010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 03:50:44.266015 containerd[1822]: time="2025-04-30T03:50:44.265953778Z" level=info msg="Start subscribing containerd event"
Apr 30 03:50:44.266015 containerd[1822]: time="2025-04-30T03:50:44.265981749Z" level=info msg="Start recovering state"
Apr 30 03:50:44.266056 containerd[1822]: time="2025-04-30T03:50:44.266035928Z" level=info msg="Start event monitor"
Apr 30 03:50:44.266056 containerd[1822]: time="2025-04-30T03:50:44.266039111Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 03:50:44.266089 containerd[1822]: time="2025-04-30T03:50:44.266044519Z" level=info msg="Start snapshots syncer"
Apr 30 03:50:44.266089 containerd[1822]: time="2025-04-30T03:50:44.266070139Z" level=info msg="Start cni network conf syncer for default"
Apr 30 03:50:44.266089 containerd[1822]: time="2025-04-30T03:50:44.266077238Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 03:50:44.266141 containerd[1822]: time="2025-04-30T03:50:44.266081904Z" level=info msg="Start streaming server"
Apr 30 03:50:44.266160 containerd[1822]: time="2025-04-30T03:50:44.266142618Z" level=info msg="containerd successfully booted in 0.028030s"
Apr 30 03:50:44.266179 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 03:50:44.354355 kernel: EXT4-fs (sdb9): resized filesystem to 116605649
Apr 30 03:50:44.379816 extend-filesystems[1794]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required
Apr 30 03:50:44.379816 extend-filesystems[1794]: old_desc_blocks = 1, new_desc_blocks = 56
Apr 30 03:50:44.379816 extend-filesystems[1794]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long.
Apr 30 03:50:44.419416 extend-filesystems[1784]: Resized filesystem in /dev/sdb9
Apr 30 03:50:44.427395 tar[1812]: linux-amd64/README.md
Apr 30 03:50:44.380736 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 03:50:44.380835 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 03:50:44.431615 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 03:50:45.239438 systemd-networkd[1612]: bond0: Gained IPv6LL
Apr 30 03:50:45.240714 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 03:50:45.252139 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 03:50:45.270614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:50:45.281423 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 03:50:45.301837 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 03:50:45.988875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:50:45.999901 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:50:46.413252 kubelet[1914]: E0430 03:50:46.413126 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:50:46.414130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:50:46.414209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:50:46.687394 systemd-timesyncd[1773]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org).
Apr 30 03:50:46.687415 systemd-timesyncd[1773]: Initial clock synchronization to Wed 2025-04-30 03:50:47.034356 UTC.
Apr 30 03:50:47.185576 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Apr 30 03:50:47.186050 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Apr 30 03:50:47.295933 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:50:47.313611 systemd[1]: Started sshd@0-147.75.90.203:22-139.178.68.195:59838.service - OpenSSH per-connection server daemon (139.178.68.195:59838).
Apr 30 03:50:47.353598 sshd[1935]: Accepted publickey for core from 139.178.68.195 port 59838 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:47.354799 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:47.360868 systemd-logind[1804]: New session 1 of user core.
Apr 30 03:50:47.361889 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 03:50:47.385818 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 03:50:47.399230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 03:50:47.422577 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 03:50:47.432959 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 03:50:47.522591 systemd[1939]: Queued start job for default target default.target.
Apr 30 03:50:47.531914 systemd[1939]: Created slice app.slice - User Application Slice.
Apr 30 03:50:47.531929 systemd[1939]: Reached target paths.target - Paths.
Apr 30 03:50:47.531937 systemd[1939]: Reached target timers.target - Timers.
Apr 30 03:50:47.532625 systemd[1939]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 03:50:47.538436 systemd[1939]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 03:50:47.538464 systemd[1939]: Reached target sockets.target - Sockets.
Apr 30 03:50:47.538474 systemd[1939]: Reached target basic.target - Basic System.
Apr 30 03:50:47.538495 systemd[1939]: Reached target default.target - Main User Target.
Apr 30 03:50:47.538511 systemd[1939]: Startup finished in 102ms.
Apr 30 03:50:47.538637 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 03:50:47.543484 coreos-metadata[1778]: Apr 30 03:50:47.543 INFO Fetch successful
Apr 30 03:50:47.563527 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 03:50:47.593930 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 03:50:47.605732 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Apr 30 03:50:47.644411 systemd[1]: Started sshd@1-147.75.90.203:22-139.178.68.195:59840.service - OpenSSH per-connection server daemon (139.178.68.195:59840).
Apr 30 03:50:47.680039 sshd[1957]: Accepted publickey for core from 139.178.68.195 port 59840 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:47.680781 sshd[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:47.683021 systemd-logind[1804]: New session 2 of user core.
Apr 30 03:50:47.698483 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 03:50:47.757692 sshd[1957]: pam_unix(sshd:session): session closed for user core
Apr 30 03:50:47.766530 systemd[1]: sshd@1-147.75.90.203:22-139.178.68.195:59840.service: Deactivated successfully.
Apr 30 03:50:47.767306 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 03:50:47.767952 systemd-logind[1804]: Session 2 logged out. Waiting for processes to exit.
Apr 30 03:50:47.768709 systemd[1]: Started sshd@2-147.75.90.203:22-139.178.68.195:59850.service - OpenSSH per-connection server daemon (139.178.68.195:59850).
Apr 30 03:50:47.788806 systemd-logind[1804]: Removed session 2.
Apr 30 03:50:47.812889 sshd[1964]: Accepted publickey for core from 139.178.68.195 port 59850 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:47.813803 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:47.816926 systemd-logind[1804]: New session 3 of user core.
Apr 30 03:50:47.834539 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:50:47.892342 sshd[1964]: pam_unix(sshd:session): session closed for user core
Apr 30 03:50:47.893669 systemd[1]: sshd@2-147.75.90.203:22-139.178.68.195:59850.service: Deactivated successfully.
Apr 30 03:50:47.894527 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 03:50:47.895204 systemd-logind[1804]: Session 3 logged out. Waiting for processes to exit.
Apr 30 03:50:47.895787 systemd-logind[1804]: Removed session 3.
Apr 30 03:50:48.013977 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Apr 30 03:50:48.034529 coreos-metadata[1871]: Apr 30 03:50:48.034 INFO Fetch successful
Apr 30 03:50:48.066407 unknown[1871]: wrote ssh authorized keys file for user: core
Apr 30 03:50:48.092880 update-ssh-keys[1972]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:50:48.093371 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 03:50:48.105340 systemd[1]: Finished sshkeys.service.
Apr 30 03:50:48.113878 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 03:50:48.123549 systemd[1]: Startup finished in 2.666s (kernel) + 23.855s (initrd) + 10.422s (userspace) = 36.944s.
Apr 30 03:50:48.166588 login[1876]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 30 03:50:48.166877 login[1883]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 30 03:50:48.169076 systemd-logind[1804]: New session 5 of user core.
Apr 30 03:50:48.169692 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:50:48.170974 systemd-logind[1804]: New session 4 of user core.
Apr 30 03:50:48.171515 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:50:56.673675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:50:56.691604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:50:56.916898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:50:56.919114 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:50:56.941681 kubelet[2010]: E0430 03:50:56.941576 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:50:56.943814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:50:56.943908 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:50:58.147608 systemd[1]: Started sshd@3-147.75.90.203:22-139.178.68.195:47276.service - OpenSSH per-connection server daemon (139.178.68.195:47276).
Apr 30 03:50:58.172781 sshd[2027]: Accepted publickey for core from 139.178.68.195 port 47276 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:58.173417 sshd[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:58.175911 systemd-logind[1804]: New session 6 of user core.
Apr 30 03:50:58.189615 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:50:58.243493 sshd[2027]: pam_unix(sshd:session): session closed for user core
Apr 30 03:50:58.252937 systemd[1]: sshd@3-147.75.90.203:22-139.178.68.195:47276.service: Deactivated successfully.
Apr 30 03:50:58.253669 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:50:58.254346 systemd-logind[1804]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:50:58.255045 systemd[1]: Started sshd@4-147.75.90.203:22-139.178.68.195:47286.service - OpenSSH per-connection server daemon (139.178.68.195:47286).
Apr 30 03:50:58.255612 systemd-logind[1804]: Removed session 6.
Apr 30 03:50:58.289470 sshd[2034]: Accepted publickey for core from 139.178.68.195 port 47286 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:58.290159 sshd[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:58.292874 systemd-logind[1804]: New session 7 of user core.
Apr 30 03:50:58.302587 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:50:58.351061 sshd[2034]: pam_unix(sshd:session): session closed for user core
Apr 30 03:50:58.376012 systemd[1]: sshd@4-147.75.90.203:22-139.178.68.195:47286.service: Deactivated successfully.
Apr 30 03:50:58.376724 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 03:50:58.377400 systemd-logind[1804]: Session 7 logged out. Waiting for processes to exit.
Apr 30 03:50:58.378036 systemd[1]: Started sshd@5-147.75.90.203:22-139.178.68.195:47292.service - OpenSSH per-connection server daemon (139.178.68.195:47292).
Apr 30 03:50:58.378542 systemd-logind[1804]: Removed session 7.
Apr 30 03:50:58.419415 sshd[2041]: Accepted publickey for core from 139.178.68.195 port 47292 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:58.420180 sshd[2041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:58.423196 systemd-logind[1804]: New session 8 of user core.
Apr 30 03:50:58.437577 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 03:50:58.502775 sshd[2041]: pam_unix(sshd:session): session closed for user core
Apr 30 03:50:58.520133 systemd[1]: sshd@5-147.75.90.203:22-139.178.68.195:47292.service: Deactivated successfully.
Apr 30 03:50:58.523759 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 03:50:58.527225 systemd-logind[1804]: Session 8 logged out. Waiting for processes to exit.
Apr 30 03:50:58.539197 systemd[1]: Started sshd@6-147.75.90.203:22-139.178.68.195:47304.service - OpenSSH per-connection server daemon (139.178.68.195:47304).
Apr 30 03:50:58.542282 systemd-logind[1804]: Removed session 8.
Apr 30 03:50:58.619939 sshd[2048]: Accepted publickey for core from 139.178.68.195 port 47304 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:58.621109 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:58.625193 systemd-logind[1804]: New session 9 of user core.
Apr 30 03:50:58.634579 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:50:58.700046 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:50:58.700195 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:50:58.711981 sudo[2052]: pam_unix(sudo:session): session closed for user root
Apr 30 03:50:58.712979 sshd[2048]: pam_unix(sshd:session): session closed for user core
Apr 30 03:50:58.725243 systemd[1]: sshd@6-147.75.90.203:22-139.178.68.195:47304.service: Deactivated successfully.
Apr 30 03:50:58.726186 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 03:50:58.727085 systemd-logind[1804]: Session 9 logged out. Waiting for processes to exit.
Apr 30 03:50:58.727907 systemd[1]: Started sshd@7-147.75.90.203:22-139.178.68.195:47306.service - OpenSSH per-connection server daemon (139.178.68.195:47306).
Apr 30 03:50:58.728536 systemd-logind[1804]: Removed session 9.
Apr 30 03:50:58.775188 sshd[2057]: Accepted publickey for core from 139.178.68.195 port 47306 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:58.776253 sshd[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:58.779963 systemd-logind[1804]: New session 10 of user core.
Apr 30 03:50:58.789583 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 03:50:58.848782 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:50:58.848930 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:50:58.850986 sudo[2061]: pam_unix(sudo:session): session closed for user root
Apr 30 03:50:58.853579 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:50:58.853730 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:50:58.873649 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:50:58.874788 auditctl[2064]: No rules
Apr 30 03:50:58.875004 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:50:58.875121 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:50:58.876799 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:50:58.893051 augenrules[2082]: No rules
Apr 30 03:50:58.893714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:50:58.894320 sudo[2060]: pam_unix(sudo:session): session closed for user root
Apr 30 03:50:58.895201 sshd[2057]: pam_unix(sshd:session): session closed for user core
Apr 30 03:50:58.897241 systemd[1]: sshd@7-147.75.90.203:22-139.178.68.195:47306.service: Deactivated successfully.
Apr 30 03:50:58.898009 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 03:50:58.898411 systemd-logind[1804]: Session 10 logged out. Waiting for processes to exit.
Apr 30 03:50:58.899320 systemd[1]: Started sshd@8-147.75.90.203:22-139.178.68.195:47312.service - OpenSSH per-connection server daemon (139.178.68.195:47312).
Apr 30 03:50:58.899915 systemd-logind[1804]: Removed session 10.
Apr 30 03:50:58.935801 sshd[2090]: Accepted publickey for core from 139.178.68.195 port 47312 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:50:58.936562 sshd[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:50:58.939635 systemd-logind[1804]: New session 11 of user core.
Apr 30 03:50:58.950562 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 03:50:59.006297 sudo[2093]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:50:59.007071 sudo[2093]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:50:59.406181 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 03:50:59.406546 (dockerd)[2120]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 03:50:59.759245 dockerd[2120]: time="2025-04-30T03:50:59.759139579Z" level=info msg="Starting up"
Apr 30 03:50:59.828289 dockerd[2120]: time="2025-04-30T03:50:59.828233279Z" level=info msg="Loading containers: start."
Apr 30 03:50:59.917361 kernel: Initializing XFRM netlink socket
Apr 30 03:50:59.979217 systemd-networkd[1612]: docker0: Link UP
Apr 30 03:51:00.000255 dockerd[2120]: time="2025-04-30T03:51:00.000234275Z" level=info msg="Loading containers: done."
Apr 30 03:51:00.008165 dockerd[2120]: time="2025-04-30T03:51:00.008116937Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 03:51:00.008236 dockerd[2120]: time="2025-04-30T03:51:00.008167684Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 03:51:00.008236 dockerd[2120]: time="2025-04-30T03:51:00.008219489Z" level=info msg="Daemon has completed initialization"
Apr 30 03:51:00.008634 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1350399620-merged.mount: Deactivated successfully.
Apr 30 03:51:00.022866 dockerd[2120]: time="2025-04-30T03:51:00.022784011Z" level=info msg="API listen on /run/docker.sock"
Apr 30 03:51:00.022885 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 03:51:00.824735 containerd[1822]: time="2025-04-30T03:51:00.824598708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
Apr 30 03:51:01.427753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732587935.mount: Deactivated successfully.
Apr 30 03:51:02.686900 containerd[1822]: time="2025-04-30T03:51:02.686848419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:02.687106 containerd[1822]: time="2025-04-30T03:51:02.687018074Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
Apr 30 03:51:02.687564 containerd[1822]: time="2025-04-30T03:51:02.687523682Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:02.689150 containerd[1822]: time="2025-04-30T03:51:02.689105394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:02.689782 containerd[1822]: time="2025-04-30T03:51:02.689740574Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.865057108s"
Apr 30 03:51:02.689782 containerd[1822]: time="2025-04-30T03:51:02.689757918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
Apr 30 03:51:02.690072 containerd[1822]: time="2025-04-30T03:51:02.690060727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Apr 30 03:51:04.204475 containerd[1822]: time="2025-04-30T03:51:04.204421742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:04.204687 containerd[1822]: time="2025-04-30T03:51:04.204614283Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
Apr 30 03:51:04.205052 containerd[1822]: time="2025-04-30T03:51:04.205010463Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:04.206580 containerd[1822]: time="2025-04-30T03:51:04.206537797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:04.207163 containerd[1822]: time="2025-04-30T03:51:04.207123351Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.517047614s"
Apr 30 03:51:04.207163 containerd[1822]: time="2025-04-30T03:51:04.207137176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
Apr 30 03:51:04.207409 containerd[1822]: time="2025-04-30T03:51:04.207366708Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Apr 30 03:51:05.403514 containerd[1822]: time="2025-04-30T03:51:05.403488217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:05.403716 containerd[1822]: time="2025-04-30T03:51:05.403680754Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
Apr 30 03:51:05.404075 containerd[1822]: time="2025-04-30T03:51:05.404063812Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:05.405880 containerd[1822]: time="2025-04-30T03:51:05.405839121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:05.406373 containerd[1822]: time="2025-04-30T03:51:05.406327538Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.198935214s"
Apr 30 03:51:05.406373 containerd[1822]: time="2025-04-30T03:51:05.406347960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
Apr 30 03:51:05.406608 containerd[1822]: time="2025-04-30T03:51:05.406595257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Apr 30 03:51:06.250885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947155435.mount: Deactivated successfully.
Apr 30 03:51:06.451885 containerd[1822]: time="2025-04-30T03:51:06.451827904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:06.452111 containerd[1822]: time="2025-04-30T03:51:06.451951175Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
Apr 30 03:51:06.452395 containerd[1822]: time="2025-04-30T03:51:06.452345284Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:06.453254 containerd[1822]: time="2025-04-30T03:51:06.453242609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:06.453710 containerd[1822]: time="2025-04-30T03:51:06.453667662Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.04705494s"
Apr 30 03:51:06.453710 containerd[1822]: time="2025-04-30T03:51:06.453685406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
Apr 30 03:51:06.453957 containerd[1822]: time="2025-04-30T03:51:06.453912579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Apr 30 03:51:06.950510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987261548.mount: Deactivated successfully.
Apr 30 03:51:06.951060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 03:51:06.967542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:51:07.199389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:51:07.202377 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:51:07.223555 kubelet[2377]: E0430 03:51:07.223514 2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:51:07.224983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:51:07.225078 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:51:07.690223 containerd[1822]: time="2025-04-30T03:51:07.690174566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:07.690549 containerd[1822]: time="2025-04-30T03:51:07.690497810Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Apr 30 03:51:07.690878 containerd[1822]: time="2025-04-30T03:51:07.690831912Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:07.692997 containerd[1822]: time="2025-04-30T03:51:07.692955386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:07.693602 containerd[1822]: time="2025-04-30T03:51:07.693560285Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.239633443s"
Apr 30 03:51:07.693602 containerd[1822]: time="2025-04-30T03:51:07.693577073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Apr 30 03:51:07.693879 containerd[1822]: time="2025-04-30T03:51:07.693828227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 03:51:08.165179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654502698.mount: Deactivated successfully.
Apr 30 03:51:08.166747 containerd[1822]: time="2025-04-30T03:51:08.166689162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:08.166935 containerd[1822]: time="2025-04-30T03:51:08.166885562Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 30 03:51:08.167344 containerd[1822]: time="2025-04-30T03:51:08.167332946Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:08.168545 containerd[1822]: time="2025-04-30T03:51:08.168501800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:08.169000 containerd[1822]: time="2025-04-30T03:51:08.168959605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 475.099735ms"
Apr 30 03:51:08.169000 containerd[1822]: time="2025-04-30T03:51:08.168974012Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 30 03:51:08.169281 containerd[1822]: time="2025-04-30T03:51:08.169272112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Apr 30 03:51:08.692395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832990701.mount: Deactivated successfully.
Apr 30 03:51:09.869946 containerd[1822]: time="2025-04-30T03:51:09.869881772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:09.870151 containerd[1822]: time="2025-04-30T03:51:09.870089586Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Apr 30 03:51:09.870626 containerd[1822]: time="2025-04-30T03:51:09.870585486Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:09.872419 containerd[1822]: time="2025-04-30T03:51:09.872377462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:51:09.873180 containerd[1822]: time="2025-04-30T03:51:09.873139955Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.703853439s"
Apr 30 03:51:09.873180 containerd[1822]: time="2025-04-30T03:51:09.873155236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Apr 30 03:51:11.931198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:51:11.953687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:51:11.965937 systemd[1]: Reloading requested from client PID 2546 ('systemctl') (unit session-11.scope)...
Apr 30 03:51:11.965944 systemd[1]: Reloading...
Apr 30 03:51:12.023329 zram_generator::config[2585]: No configuration found.
Apr 30 03:51:12.090280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:51:12.151137 systemd[1]: Reloading finished in 184 ms.
Apr 30 03:51:12.182664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:51:12.183873 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:51:12.184927 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:51:12.185026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:51:12.185869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:51:12.418411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:51:12.421790 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:51:12.444700 kubelet[2654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:51:12.444700 kubelet[2654]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:51:12.444700 kubelet[2654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:51:12.444700 kubelet[2654]: I0430 03:51:12.444669 2654 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:51:12.829532 kubelet[2654]: I0430 03:51:12.829456 2654 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 03:51:12.829532 kubelet[2654]: I0430 03:51:12.829470 2654 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:51:12.829672 kubelet[2654]: I0430 03:51:12.829614 2654 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 03:51:12.849901 kubelet[2654]: E0430 03:51:12.849858 2654 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.90.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.90.203:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:51:12.850611 kubelet[2654]: I0430 03:51:12.850580 2654 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:51:12.856856 kubelet[2654]: E0430 03:51:12.856816 2654 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 03:51:12.856856 kubelet[2654]: I0430 03:51:12.856830 2654 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 03:51:12.865917 kubelet[2654]: I0430 03:51:12.865907 2654 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:51:12.867092 kubelet[2654]: I0430 03:51:12.867050 2654 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:51:12.867205 kubelet[2654]: I0430 03:51:12.867067 2654 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.3-a-1bdc449bef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 03:51:12.867205 kubelet[2654]: I0430 03:51:12.867202 2654 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:51:12.867311 kubelet[2654]: I0430 03:51:12.867208 2654 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 03:51:12.867311 kubelet[2654]: I0430 03:51:12.867270 2654 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:51:12.870739 kubelet[2654]: I0430 03:51:12.870688 2654 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 03:51:12.870739 kubelet[2654]: I0430 03:51:12.870712 2654 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:51:12.870739 kubelet[2654]: I0430 03:51:12.870721 2654 kubelet.go:352] "Adding apiserver pod source"
Apr 30 03:51:12.870739 kubelet[2654]: I0430 03:51:12.870726 2654 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:51:12.873311 kubelet[2654]: I0430 03:51:12.873283 2654 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:51:12.873573 kubelet[2654]: I0430 03:51:12.873565 2654 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:51:12.874299 kubelet[2654]: W0430 03:51:12.874291 2654 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 03:51:12.875949 kubelet[2654]: W0430 03:51:12.875921 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.90.203:6443: connect: connection refused Apr 30 03:51:12.875980 kubelet[2654]: E0430 03:51:12.875965 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.90.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.90.203:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:51:12.876469 kubelet[2654]: I0430 03:51:12.876462 2654 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:51:12.876496 kubelet[2654]: I0430 03:51:12.876481 2654 server.go:1287] "Started kubelet" Apr 30 03:51:12.876576 kubelet[2654]: I0430 03:51:12.876560 2654 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:51:12.876616 kubelet[2654]: I0430 03:51:12.876557 2654 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:51:12.877028 kubelet[2654]: I0430 03:51:12.877014 2654 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:51:12.877126 kubelet[2654]: W0430 03:51:12.877069 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-1bdc449bef&limit=500&resourceVersion=0": dial tcp 147.75.90.203:6443: connect: connection refused Apr 30 03:51:12.877126 kubelet[2654]: E0430 03:51:12.877107 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://147.75.90.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.3-a-1bdc449bef&limit=500&resourceVersion=0\": dial tcp 147.75.90.203:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:51:12.879696 kubelet[2654]: I0430 03:51:12.879686 2654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:51:12.879740 kubelet[2654]: I0430 03:51:12.879729 2654 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:51:12.879779 kubelet[2654]: I0430 03:51:12.879740 2654 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:51:12.879779 kubelet[2654]: E0430 03:51:12.879759 2654 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-1bdc449bef\" not found" Apr 30 03:51:12.879833 kubelet[2654]: I0430 03:51:12.879782 2654 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:51:12.879864 kubelet[2654]: I0430 03:51:12.879848 2654 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:51:12.879893 kubelet[2654]: I0430 03:51:12.879885 2654 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:51:12.880102 kubelet[2654]: E0430 03:51:12.880067 2654 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:51:12.880377 kubelet[2654]: E0430 03:51:12.880266 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-1bdc449bef?timeout=10s\": dial tcp 147.75.90.203:6443: connect: connection refused" interval="200ms" Apr 30 03:51:12.880528 kubelet[2654]: W0430 03:51:12.880480 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.203:6443: connect: connection refused Apr 30 03:51:12.880564 kubelet[2654]: I0430 03:51:12.880543 2654 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:51:12.880564 kubelet[2654]: E0430 03:51:12.880543 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.90.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.90.203:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:51:12.880620 kubelet[2654]: I0430 03:51:12.880608 2654 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:51:12.882794 kubelet[2654]: E0430 03:51:12.880594 2654 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.203:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.3-a-1bdc449bef.183afc2c53efb610 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.3-a-1bdc449bef,UID:ci-4081.3.3-a-1bdc449bef,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.3-a-1bdc449bef,},FirstTimestamp:2025-04-30 03:51:12.876467728 +0000 UTC m=+0.452401771,LastTimestamp:2025-04-30 03:51:12.876467728 +0000 UTC m=+0.452401771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.3-a-1bdc449bef,}" Apr 30 03:51:12.883303 kubelet[2654]: I0430 03:51:12.883271 2654 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:51:12.890178 kubelet[2654]: I0430 03:51:12.890157 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:51:12.890706 kubelet[2654]: I0430 03:51:12.890695 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:51:12.890733 kubelet[2654]: I0430 03:51:12.890710 2654 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 03:51:12.890733 kubelet[2654]: I0430 03:51:12.890725 2654 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 03:51:12.890774 kubelet[2654]: I0430 03:51:12.890733 2654 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 03:51:12.890774 kubelet[2654]: E0430 03:51:12.890765 2654 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:51:12.891001 kubelet[2654]: W0430 03:51:12.890980 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.203:6443: connect: connection refused Apr 30 03:51:12.891026 kubelet[2654]: E0430 03:51:12.891011 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.90.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.90.203:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:51:12.981107 kubelet[2654]: E0430 03:51:12.981025 2654 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.3-a-1bdc449bef\" not found" Apr 30 03:51:12.991870 kubelet[2654]: E0430 03:51:12.991790 2654 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:51:13.039038 kubelet[2654]: I0430 03:51:13.038942 2654 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:51:13.039038 kubelet[2654]: I0430 03:51:13.038993 2654 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:51:13.039038 kubelet[2654]: I0430 03:51:13.039049 2654 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:51:13.041234 kubelet[2654]: I0430 03:51:13.041189 2654 policy_none.go:49] "None policy: Start" Apr 30 03:51:13.041234 kubelet[2654]: I0430 03:51:13.041208 2654 memory_manager.go:186] "Starting 
memorymanager" policy="None" Apr 30 03:51:13.041234 kubelet[2654]: I0430 03:51:13.041221 2654 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:51:13.045074 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:51:13.055847 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:51:13.057689 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:51:13.072988 kubelet[2654]: I0430 03:51:13.072946 2654 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:51:13.073081 kubelet[2654]: I0430 03:51:13.073064 2654 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:51:13.073128 kubelet[2654]: I0430 03:51:13.073073 2654 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:51:13.073192 kubelet[2654]: I0430 03:51:13.073182 2654 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:51:13.073561 kubelet[2654]: E0430 03:51:13.073549 2654 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 03:51:13.073613 kubelet[2654]: E0430 03:51:13.073578 2654 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.3-a-1bdc449bef\" not found" Apr 30 03:51:13.081296 kubelet[2654]: E0430 03:51:13.081248 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-1bdc449bef?timeout=10s\": dial tcp 147.75.90.203:6443: connect: connection refused" interval="400ms" Apr 30 03:51:13.178026 kubelet[2654]: I0430 03:51:13.177963 2654 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.178865 kubelet[2654]: E0430 03:51:13.178742 2654 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.90.203:6443/api/v1/nodes\": dial tcp 147.75.90.203:6443: connect: connection refused" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.217671 systemd[1]: Created slice kubepods-burstable-podd77eb7015db4d67e3612e5ee0ba5d8bf.slice - libcontainer container kubepods-burstable-podd77eb7015db4d67e3612e5ee0ba5d8bf.slice. Apr 30 03:51:13.236243 kubelet[2654]: E0430 03:51:13.236182 2654 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-1bdc449bef\" not found" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.244814 systemd[1]: Created slice kubepods-burstable-pod11c68f4ac14fbb295d51bd5fa0b693ea.slice - libcontainer container kubepods-burstable-pod11c68f4ac14fbb295d51bd5fa0b693ea.slice. 
Apr 30 03:51:13.249346 kubelet[2654]: E0430 03:51:13.249282 2654 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-1bdc449bef\" not found" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.253885 systemd[1]: Created slice kubepods-burstable-pod7e0f20eeadac0571e238f941cccba07f.slice - libcontainer container kubepods-burstable-pod7e0f20eeadac0571e238f941cccba07f.slice. Apr 30 03:51:13.257753 kubelet[2654]: E0430 03:51:13.257671 2654 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.3-a-1bdc449bef\" not found" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281497 kubelet[2654]: I0430 03:51:13.281368 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e0f20eeadac0571e238f941cccba07f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-1bdc449bef\" (UID: \"7e0f20eeadac0571e238f941cccba07f\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281497 kubelet[2654]: I0430 03:51:13.281465 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d77eb7015db4d67e3612e5ee0ba5d8bf-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" (UID: \"d77eb7015db4d67e3612e5ee0ba5d8bf\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281497 kubelet[2654]: I0430 03:51:13.281515 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d77eb7015db4d67e3612e5ee0ba5d8bf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" (UID: \"d77eb7015db4d67e3612e5ee0ba5d8bf\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281934 kubelet[2654]: I0430 03:51:13.281562 2654 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d77eb7015db4d67e3612e5ee0ba5d8bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" (UID: \"d77eb7015db4d67e3612e5ee0ba5d8bf\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281934 kubelet[2654]: I0430 03:51:13.281661 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281934 kubelet[2654]: I0430 03:51:13.281742 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281934 kubelet[2654]: I0430 03:51:13.281799 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.281934 kubelet[2654]: I0430 03:51:13.281863 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.282359 kubelet[2654]: I0430 03:51:13.281905 2654 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.384126 kubelet[2654]: I0430 03:51:13.383917 2654 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.384796 kubelet[2654]: E0430 03:51:13.384683 2654 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.90.203:6443/api/v1/nodes\": dial tcp 147.75.90.203:6443: connect: connection refused" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.483118 kubelet[2654]: E0430 03:51:13.482985 2654 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.3-a-1bdc449bef?timeout=10s\": dial tcp 147.75.90.203:6443: connect: connection refused" interval="800ms" Apr 30 03:51:13.539288 containerd[1822]: time="2025-04-30T03:51:13.539139281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-1bdc449bef,Uid:d77eb7015db4d67e3612e5ee0ba5d8bf,Namespace:kube-system,Attempt:0,}" Apr 30 03:51:13.550643 containerd[1822]: time="2025-04-30T03:51:13.550630620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-1bdc449bef,Uid:11c68f4ac14fbb295d51bd5fa0b693ea,Namespace:kube-system,Attempt:0,}" Apr 30 03:51:13.559074 containerd[1822]: time="2025-04-30T03:51:13.559061633Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-1bdc449bef,Uid:7e0f20eeadac0571e238f941cccba07f,Namespace:kube-system,Attempt:0,}" Apr 30 03:51:13.786759 kubelet[2654]: I0430 03:51:13.786743 2654 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.786997 kubelet[2654]: E0430 03:51:13.786983 2654 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.90.203:6443/api/v1/nodes\": dial tcp 147.75.90.203:6443: connect: connection refused" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:13.975396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920678599.mount: Deactivated successfully. Apr 30 03:51:13.976413 containerd[1822]: time="2025-04-30T03:51:13.976313527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:51:13.976580 containerd[1822]: time="2025-04-30T03:51:13.976529774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:51:13.977230 containerd[1822]: time="2025-04-30T03:51:13.977215420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:51:13.977952 containerd[1822]: time="2025-04-30T03:51:13.977902275Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:51:13.978151 containerd[1822]: time="2025-04-30T03:51:13.978108240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:51:13.978615 containerd[1822]: time="2025-04-30T03:51:13.978574749Z" level=info 
msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:51:13.978648 containerd[1822]: time="2025-04-30T03:51:13.978618350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:51:13.980106 containerd[1822]: time="2025-04-30T03:51:13.980063478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:51:13.981112 containerd[1822]: time="2025-04-30T03:51:13.981071320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 430.413437ms" Apr 30 03:51:13.981957 containerd[1822]: time="2025-04-30T03:51:13.981912469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.594613ms" Apr 30 03:51:13.983105 containerd[1822]: time="2025-04-30T03:51:13.983056014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 423.968651ms" Apr 30 03:51:14.074495 containerd[1822]: 
time="2025-04-30T03:51:14.074404727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:14.074495 containerd[1822]: time="2025-04-30T03:51:14.074438307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:14.074495 containerd[1822]: time="2025-04-30T03:51:14.074448273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:14.074633 containerd[1822]: time="2025-04-30T03:51:14.074385823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:14.074633 containerd[1822]: time="2025-04-30T03:51:14.074562215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:14.074633 containerd[1822]: time="2025-04-30T03:51:14.074590129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:14.074633 containerd[1822]: time="2025-04-30T03:51:14.074595887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:14.074633 containerd[1822]: time="2025-04-30T03:51:14.074597975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:14.074633 containerd[1822]: time="2025-04-30T03:51:14.074607000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:14.074740 containerd[1822]: time="2025-04-30T03:51:14.074639775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:14.074740 containerd[1822]: time="2025-04-30T03:51:14.074660129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:14.074740 containerd[1822]: time="2025-04-30T03:51:14.074673454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:14.092662 systemd[1]: Started cri-containerd-2c6b7400dddb008787a05815a726c970f485afb1965b5f6839db5f93203202f4.scope - libcontainer container 2c6b7400dddb008787a05815a726c970f485afb1965b5f6839db5f93203202f4. Apr 30 03:51:14.093464 systemd[1]: Started cri-containerd-3a643ee5089d518a0f4b4195a4745a1fb935bac87bcaa82f6379f6fb84c27d9b.scope - libcontainer container 3a643ee5089d518a0f4b4195a4745a1fb935bac87bcaa82f6379f6fb84c27d9b. Apr 30 03:51:14.094138 systemd[1]: Started cri-containerd-3e4b3d952cb6099a43352489750bb90f120a54228e16310e1fcc1b7db76e0cbf.scope - libcontainer container 3e4b3d952cb6099a43352489750bb90f120a54228e16310e1fcc1b7db76e0cbf. 
Apr 30 03:51:14.119577 containerd[1822]: time="2025-04-30T03:51:14.119548443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.3-a-1bdc449bef,Uid:7e0f20eeadac0571e238f941cccba07f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a643ee5089d518a0f4b4195a4745a1fb935bac87bcaa82f6379f6fb84c27d9b\"" Apr 30 03:51:14.121380 containerd[1822]: time="2025-04-30T03:51:14.121352797Z" level=info msg="CreateContainer within sandbox \"3a643ee5089d518a0f4b4195a4745a1fb935bac87bcaa82f6379f6fb84c27d9b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:51:14.124205 containerd[1822]: time="2025-04-30T03:51:14.124183891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.3-a-1bdc449bef,Uid:11c68f4ac14fbb295d51bd5fa0b693ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c6b7400dddb008787a05815a726c970f485afb1965b5f6839db5f93203202f4\"" Apr 30 03:51:14.125396 containerd[1822]: time="2025-04-30T03:51:14.125380668Z" level=info msg="CreateContainer within sandbox \"2c6b7400dddb008787a05815a726c970f485afb1965b5f6839db5f93203202f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:51:14.125744 containerd[1822]: time="2025-04-30T03:51:14.125724330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.3-a-1bdc449bef,Uid:d77eb7015db4d67e3612e5ee0ba5d8bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e4b3d952cb6099a43352489750bb90f120a54228e16310e1fcc1b7db76e0cbf\"" Apr 30 03:51:14.126789 containerd[1822]: time="2025-04-30T03:51:14.126775717Z" level=info msg="CreateContainer within sandbox \"3e4b3d952cb6099a43352489750bb90f120a54228e16310e1fcc1b7db76e0cbf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:51:14.145695 containerd[1822]: time="2025-04-30T03:51:14.145649852Z" level=info msg="CreateContainer within sandbox 
\"3a643ee5089d518a0f4b4195a4745a1fb935bac87bcaa82f6379f6fb84c27d9b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cbb0b0d793311a35dd307221e8a01e55839cba736c676f4de8457ac26be8a77b\"" Apr 30 03:51:14.146026 containerd[1822]: time="2025-04-30T03:51:14.145983919Z" level=info msg="StartContainer for \"cbb0b0d793311a35dd307221e8a01e55839cba736c676f4de8457ac26be8a77b\"" Apr 30 03:51:14.147146 containerd[1822]: time="2025-04-30T03:51:14.147132276Z" level=info msg="CreateContainer within sandbox \"3e4b3d952cb6099a43352489750bb90f120a54228e16310e1fcc1b7db76e0cbf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9395349f12ba1eeb2c23de050b004ae6458c6293374675318737c5684723e4dd\"" Apr 30 03:51:14.147289 containerd[1822]: time="2025-04-30T03:51:14.147279168Z" level=info msg="StartContainer for \"9395349f12ba1eeb2c23de050b004ae6458c6293374675318737c5684723e4dd\"" Apr 30 03:51:14.147479 containerd[1822]: time="2025-04-30T03:51:14.147464453Z" level=info msg="CreateContainer within sandbox \"2c6b7400dddb008787a05815a726c970f485afb1965b5f6839db5f93203202f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7598fba9aedb8f7abfc0250cc010df3750a1b9659206f82d3f02197dad4477e7\"" Apr 30 03:51:14.147693 containerd[1822]: time="2025-04-30T03:51:14.147674954Z" level=info msg="StartContainer for \"7598fba9aedb8f7abfc0250cc010df3750a1b9659206f82d3f02197dad4477e7\"" Apr 30 03:51:14.163671 kubelet[2654]: W0430 03:51:14.163506 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.90.203:6443: connect: connection refused Apr 30 03:51:14.163671 kubelet[2654]: E0430 03:51:14.163662 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://147.75.90.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.90.203:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:51:14.168571 systemd[1]: Started cri-containerd-cbb0b0d793311a35dd307221e8a01e55839cba736c676f4de8457ac26be8a77b.scope - libcontainer container cbb0b0d793311a35dd307221e8a01e55839cba736c676f4de8457ac26be8a77b. Apr 30 03:51:14.171220 systemd[1]: Started cri-containerd-7598fba9aedb8f7abfc0250cc010df3750a1b9659206f82d3f02197dad4477e7.scope - libcontainer container 7598fba9aedb8f7abfc0250cc010df3750a1b9659206f82d3f02197dad4477e7. Apr 30 03:51:14.171971 systemd[1]: Started cri-containerd-9395349f12ba1eeb2c23de050b004ae6458c6293374675318737c5684723e4dd.scope - libcontainer container 9395349f12ba1eeb2c23de050b004ae6458c6293374675318737c5684723e4dd. Apr 30 03:51:14.199029 containerd[1822]: time="2025-04-30T03:51:14.199001562Z" level=info msg="StartContainer for \"cbb0b0d793311a35dd307221e8a01e55839cba736c676f4de8457ac26be8a77b\" returns successfully" Apr 30 03:51:14.205066 containerd[1822]: time="2025-04-30T03:51:14.205039781Z" level=info msg="StartContainer for \"9395349f12ba1eeb2c23de050b004ae6458c6293374675318737c5684723e4dd\" returns successfully" Apr 30 03:51:14.205066 containerd[1822]: time="2025-04-30T03:51:14.205064161Z" level=info msg="StartContainer for \"7598fba9aedb8f7abfc0250cc010df3750a1b9659206f82d3f02197dad4477e7\" returns successfully" Apr 30 03:51:14.215630 kubelet[2654]: W0430 03:51:14.215598 2654 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.203:6443: connect: connection refused Apr 30 03:51:14.215703 kubelet[2654]: E0430 03:51:14.215636 2654 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://147.75.90.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.90.203:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:51:14.590560 kubelet[2654]: I0430 03:51:14.590538 2654 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.770183 kubelet[2654]: E0430 03:51:14.770161 2654 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.3-a-1bdc449bef\" not found" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.873272 kubelet[2654]: I0430 03:51:14.873094 2654 apiserver.go:52] "Watching apiserver" Apr 30 03:51:14.890304 kubelet[2654]: I0430 03:51:14.890266 2654 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.895112 kubelet[2654]: I0430 03:51:14.895081 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.896370 kubelet[2654]: I0430 03:51:14.896354 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.897580 kubelet[2654]: I0430 03:51:14.897565 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.899829 kubelet[2654]: E0430 03:51:14.899802 2654 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.899906 kubelet[2654]: E0430 03:51:14.899831 2654 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 
03:51:14.899906 kubelet[2654]: E0430 03:51:14.899887 2654 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-a-1bdc449bef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.980531 kubelet[2654]: I0430 03:51:14.980468 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.980626 kubelet[2654]: I0430 03:51:14.980574 2654 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:51:14.982173 kubelet[2654]: E0430 03:51:14.982122 2654 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.982173 kubelet[2654]: I0430 03:51:14.982144 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.983791 kubelet[2654]: E0430 03:51:14.983765 2654 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.983791 kubelet[2654]: I0430 03:51:14.983790 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:14.985394 kubelet[2654]: E0430 03:51:14.985370 2654 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-a-1bdc449bef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:15.900893 kubelet[2654]: I0430 03:51:15.900788 2654 kubelet.go:3200] "Creating 
a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:15.902190 kubelet[2654]: I0430 03:51:15.900976 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:15.902190 kubelet[2654]: I0430 03:51:15.901407 2654 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:15.907849 kubelet[2654]: W0430 03:51:15.907790 2654 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:15.908092 kubelet[2654]: W0430 03:51:15.907916 2654 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:15.908092 kubelet[2654]: W0430 03:51:15.907951 2654 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:17.427475 systemd[1]: Reloading requested from client PID 2977 ('systemctl') (unit session-11.scope)... Apr 30 03:51:17.427509 systemd[1]: Reloading... Apr 30 03:51:17.506400 zram_generator::config[3016]: No configuration found. Apr 30 03:51:17.571126 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:51:17.639154 systemd[1]: Reloading finished in 210 ms. Apr 30 03:51:17.665222 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:51:17.671826 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:51:17.671934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:51:17.671956 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 135.5M memory peak, 0B memory swap peak. Apr 30 03:51:17.689467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:51:17.963621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:51:17.976635 (kubelet)[3080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:51:18.021686 kubelet[3080]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:51:18.021686 kubelet[3080]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 03:51:18.021686 kubelet[3080]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:51:18.021990 kubelet[3080]: I0430 03:51:18.021733 3080 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:51:18.027783 kubelet[3080]: I0430 03:51:18.027740 3080 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:51:18.027783 kubelet[3080]: I0430 03:51:18.027756 3080 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:51:18.027989 kubelet[3080]: I0430 03:51:18.027953 3080 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:51:18.029013 kubelet[3080]: I0430 03:51:18.028976 3080 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 03:51:18.030826 kubelet[3080]: I0430 03:51:18.030788 3080 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:51:18.033203 kubelet[3080]: E0430 03:51:18.033155 3080 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:51:18.033203 kubelet[3080]: I0430 03:51:18.033179 3080 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:51:18.043081 kubelet[3080]: I0430 03:51:18.043042 3080 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:51:18.043213 kubelet[3080]: I0430 03:51:18.043196 3080 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:51:18.043394 kubelet[3080]: I0430 03:51:18.043216 3080 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.3-a-1bdc449bef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:51:18.043394 kubelet[3080]: I0430 03:51:18.043373 3080 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:51:18.043394 kubelet[3080]: I0430 03:51:18.043383 3080 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:51:18.043550 kubelet[3080]: I0430 03:51:18.043420 3080 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:51:18.043581 kubelet[3080]: I0430 03:51:18.043571 3080 
kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:51:18.043608 kubelet[3080]: I0430 03:51:18.043586 3080 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:51:18.043608 kubelet[3080]: I0430 03:51:18.043599 3080 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:51:18.043663 kubelet[3080]: I0430 03:51:18.043610 3080 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:51:18.044154 kubelet[3080]: I0430 03:51:18.044139 3080 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:51:18.044513 kubelet[3080]: I0430 03:51:18.044503 3080 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:51:18.044900 kubelet[3080]: I0430 03:51:18.044862 3080 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:51:18.044900 kubelet[3080]: I0430 03:51:18.044886 3080 server.go:1287] "Started kubelet" Apr 30 03:51:18.045013 kubelet[3080]: I0430 03:51:18.044943 3080 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:51:18.045013 kubelet[3080]: I0430 03:51:18.044952 3080 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:51:18.045187 kubelet[3080]: I0430 03:51:18.045170 3080 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:51:18.045918 kubelet[3080]: I0430 03:51:18.045901 3080 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:51:18.045995 kubelet[3080]: I0430 03:51:18.045917 3080 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:51:18.045995 kubelet[3080]: E0430 03:51:18.045959 3080 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"ci-4081.3.3-a-1bdc449bef\" not found" Apr 30 03:51:18.045995 kubelet[3080]: I0430 03:51:18.045976 3080 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:51:18.045995 kubelet[3080]: I0430 03:51:18.045966 3080 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:51:18.046511 kubelet[3080]: I0430 03:51:18.046490 3080 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:51:18.047245 kubelet[3080]: E0430 03:51:18.047203 3080 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:51:18.047344 kubelet[3080]: I0430 03:51:18.047246 3080 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:51:18.047344 kubelet[3080]: I0430 03:51:18.047291 3080 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:51:18.047441 kubelet[3080]: I0430 03:51:18.047381 3080 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:51:18.048945 kubelet[3080]: I0430 03:51:18.048928 3080 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:51:18.053894 kubelet[3080]: I0430 03:51:18.053868 3080 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:51:18.054765 kubelet[3080]: I0430 03:51:18.054750 3080 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:51:18.054817 kubelet[3080]: I0430 03:51:18.054771 3080 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 03:51:18.054817 kubelet[3080]: I0430 03:51:18.054790 3080 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 03:51:18.054817 kubelet[3080]: I0430 03:51:18.054798 3080 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 03:51:18.054903 kubelet[3080]: E0430 03:51:18.054845 3080 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:51:18.071655 kubelet[3080]: I0430 03:51:18.071608 3080 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:51:18.071655 kubelet[3080]: I0430 03:51:18.071623 3080 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:51:18.071655 kubelet[3080]: I0430 03:51:18.071638 3080 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:51:18.071811 kubelet[3080]: I0430 03:51:18.071770 3080 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:51:18.071811 kubelet[3080]: I0430 03:51:18.071780 3080 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:51:18.071811 kubelet[3080]: I0430 03:51:18.071799 3080 policy_none.go:49] "None policy: Start" Apr 30 03:51:18.071811 kubelet[3080]: I0430 03:51:18.071806 3080 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 03:51:18.071913 kubelet[3080]: I0430 03:51:18.071815 3080 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:51:18.071913 kubelet[3080]: I0430 03:51:18.071905 3080 state_mem.go:75] "Updated machine memory state" Apr 30 03:51:18.074666 kubelet[3080]: I0430 03:51:18.074623 3080 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:51:18.074784 kubelet[3080]: I0430 03:51:18.074746 3080 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:51:18.074784 kubelet[3080]: I0430 03:51:18.074757 3080 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:51:18.074924 kubelet[3080]: I0430 03:51:18.074886 3080 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:51:18.075401 kubelet[3080]: E0430 03:51:18.075381 3080 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 03:51:18.156595 kubelet[3080]: I0430 03:51:18.156490 3080 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.156595 kubelet[3080]: I0430 03:51:18.156580 3080 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.157034 kubelet[3080]: I0430 03:51:18.156813 3080 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.164284 kubelet[3080]: W0430 03:51:18.164237 3080 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:18.164626 kubelet[3080]: W0430 03:51:18.164304 3080 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:18.164626 kubelet[3080]: W0430 03:51:18.164364 3080 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:18.164626 kubelet[3080]: E0430 03:51:18.164415 3080 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.164626 kubelet[3080]: E0430 03:51:18.164493 3080 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" already exists" 
pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.164626 kubelet[3080]: E0430 03:51:18.164435 3080 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.3-a-1bdc449bef\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.182709 kubelet[3080]: I0430 03:51:18.182626 3080 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.192864 kubelet[3080]: I0430 03:51:18.192806 3080 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.193110 kubelet[3080]: I0430 03:51:18.192974 3080 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.247620 kubelet[3080]: I0430 03:51:18.247373 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-ca-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.247620 kubelet[3080]: I0430 03:51:18.247468 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.247620 kubelet[3080]: I0430 03:51:18.247586 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: 
\"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.248063 kubelet[3080]: I0430 03:51:18.247653 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e0f20eeadac0571e238f941cccba07f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.3-a-1bdc449bef\" (UID: \"7e0f20eeadac0571e238f941cccba07f\") " pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.248063 kubelet[3080]: I0430 03:51:18.247721 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.248063 kubelet[3080]: I0430 03:51:18.247773 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11c68f4ac14fbb295d51bd5fa0b693ea-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" (UID: \"11c68f4ac14fbb295d51bd5fa0b693ea\") " pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.248063 kubelet[3080]: I0430 03:51:18.247819 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d77eb7015db4d67e3612e5ee0ba5d8bf-ca-certs\") pod \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" (UID: \"d77eb7015db4d67e3612e5ee0ba5d8bf\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.248063 kubelet[3080]: I0430 03:51:18.247864 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d77eb7015db4d67e3612e5ee0ba5d8bf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" (UID: \"d77eb7015db4d67e3612e5ee0ba5d8bf\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:18.248572 kubelet[3080]: I0430 03:51:18.247914 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d77eb7015db4d67e3612e5ee0ba5d8bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" (UID: \"d77eb7015db4d67e3612e5ee0ba5d8bf\") " pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:19.043893 kubelet[3080]: I0430 03:51:19.043819 3080 apiserver.go:52] "Watching apiserver" Apr 30 03:51:19.062952 kubelet[3080]: I0430 03:51:19.062890 3080 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:19.063225 kubelet[3080]: I0430 03:51:19.062900 3080 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:19.070766 kubelet[3080]: W0430 03:51:19.070714 3080 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:19.070956 kubelet[3080]: E0430 03:51:19.070866 3080 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.3-a-1bdc449bef\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:19.071107 kubelet[3080]: W0430 03:51:19.070977 3080 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 03:51:19.071216 kubelet[3080]: E0430 03:51:19.071103 3080 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081.3.3-a-1bdc449bef\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:19.111109 kubelet[3080]: I0430 03:51:19.111033 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.3-a-1bdc449bef" podStartSLOduration=4.111013846 podStartE2EDuration="4.111013846s" podCreationTimestamp="2025-04-30 03:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:51:19.104617069 +0000 UTC m=+1.118464644" watchObservedRunningTime="2025-04-30 03:51:19.111013846 +0000 UTC m=+1.124861426" Apr 30 03:51:19.111238 kubelet[3080]: I0430 03:51:19.111132 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.3-a-1bdc449bef" podStartSLOduration=4.1111255270000004 podStartE2EDuration="4.111125527s" podCreationTimestamp="2025-04-30 03:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:51:19.111086256 +0000 UTC m=+1.124933824" watchObservedRunningTime="2025-04-30 03:51:19.111125527 +0000 UTC m=+1.124973091" Apr 30 03:51:19.123425 kubelet[3080]: I0430 03:51:19.123382 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.3-a-1bdc449bef" podStartSLOduration=4.123362883 podStartE2EDuration="4.123362883s" podCreationTimestamp="2025-04-30 03:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:51:19.117541582 +0000 UTC m=+1.131389151" watchObservedRunningTime="2025-04-30 03:51:19.123362883 +0000 UTC m=+1.137210447" Apr 30 03:51:19.146864 kubelet[3080]: I0430 03:51:19.146826 3080 desired_state_of_world_populator.go:157] "Finished populating initial desired 
state of world" Apr 30 03:51:21.854618 kubelet[3080]: I0430 03:51:21.854562 3080 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:51:21.855620 containerd[1822]: time="2025-04-30T03:51:21.855195469Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:51:21.856211 kubelet[3080]: I0430 03:51:21.855695 3080 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:51:22.596527 systemd[1]: Created slice kubepods-besteffort-podb5666939_e8d9_4e0a_8e0b_d2b65b675c7d.slice - libcontainer container kubepods-besteffort-podb5666939_e8d9_4e0a_8e0b_d2b65b675c7d.slice. Apr 30 03:51:22.656973 sudo[2093]: pam_unix(sudo:session): session closed for user root Apr 30 03:51:22.657850 sshd[2090]: pam_unix(sshd:session): session closed for user core Apr 30 03:51:22.659462 systemd[1]: sshd@8-147.75.90.203:22-139.178.68.195:47312.service: Deactivated successfully. Apr 30 03:51:22.660273 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:51:22.660381 systemd[1]: session-11.scope: Consumed 3.776s CPU time, 171.8M memory peak, 0B memory swap peak. Apr 30 03:51:22.660978 systemd-logind[1804]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:51:22.661537 systemd-logind[1804]: Removed session 11. 
Apr 30 03:51:22.680676 kubelet[3080]: I0430 03:51:22.680628 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5666939-e8d9-4e0a-8e0b-d2b65b675c7d-kube-proxy\") pod \"kube-proxy-rzv8w\" (UID: \"b5666939-e8d9-4e0a-8e0b-d2b65b675c7d\") " pod="kube-system/kube-proxy-rzv8w" Apr 30 03:51:22.680676 kubelet[3080]: I0430 03:51:22.680661 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5666939-e8d9-4e0a-8e0b-d2b65b675c7d-xtables-lock\") pod \"kube-proxy-rzv8w\" (UID: \"b5666939-e8d9-4e0a-8e0b-d2b65b675c7d\") " pod="kube-system/kube-proxy-rzv8w" Apr 30 03:51:22.680676 kubelet[3080]: I0430 03:51:22.680676 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5666939-e8d9-4e0a-8e0b-d2b65b675c7d-lib-modules\") pod \"kube-proxy-rzv8w\" (UID: \"b5666939-e8d9-4e0a-8e0b-d2b65b675c7d\") " pod="kube-system/kube-proxy-rzv8w" Apr 30 03:51:22.680778 kubelet[3080]: I0430 03:51:22.680687 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84vp\" (UniqueName: \"kubernetes.io/projected/b5666939-e8d9-4e0a-8e0b-d2b65b675c7d-kube-api-access-r84vp\") pod \"kube-proxy-rzv8w\" (UID: \"b5666939-e8d9-4e0a-8e0b-d2b65b675c7d\") " pod="kube-system/kube-proxy-rzv8w" Apr 30 03:51:22.910782 containerd[1822]: time="2025-04-30T03:51:22.910542818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rzv8w,Uid:b5666939-e8d9-4e0a-8e0b-d2b65b675c7d,Namespace:kube-system,Attempt:0,}" Apr 30 03:51:22.922476 containerd[1822]: time="2025-04-30T03:51:22.922389273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:22.922476 containerd[1822]: time="2025-04-30T03:51:22.922425943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:22.922476 containerd[1822]: time="2025-04-30T03:51:22.922433446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:22.922631 containerd[1822]: time="2025-04-30T03:51:22.922483387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:22.949638 systemd[1]: Started cri-containerd-04a50df1d4b997e882953d8a70d3bef5c81892b5a5b7f8db44350dbbd84b39ac.scope - libcontainer container 04a50df1d4b997e882953d8a70d3bef5c81892b5a5b7f8db44350dbbd84b39ac. Apr 30 03:51:22.953296 systemd[1]: Created slice kubepods-besteffort-podc3bff115_872a_4123_a57b_900a5c8e0038.slice - libcontainer container kubepods-besteffort-podc3bff115_872a_4123_a57b_900a5c8e0038.slice. 
Apr 30 03:51:22.963327 containerd[1822]: time="2025-04-30T03:51:22.962697338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rzv8w,Uid:b5666939-e8d9-4e0a-8e0b-d2b65b675c7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"04a50df1d4b997e882953d8a70d3bef5c81892b5a5b7f8db44350dbbd84b39ac\"" Apr 30 03:51:22.965024 containerd[1822]: time="2025-04-30T03:51:22.965004349Z" level=info msg="CreateContainer within sandbox \"04a50df1d4b997e882953d8a70d3bef5c81892b5a5b7f8db44350dbbd84b39ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:51:22.972180 containerd[1822]: time="2025-04-30T03:51:22.972164543Z" level=info msg="CreateContainer within sandbox \"04a50df1d4b997e882953d8a70d3bef5c81892b5a5b7f8db44350dbbd84b39ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"450a74a9dabc0a9dc70de219495b4fcb20e22302f22508162bc4aa876ce0031e\"" Apr 30 03:51:22.972438 containerd[1822]: time="2025-04-30T03:51:22.972421334Z" level=info msg="StartContainer for \"450a74a9dabc0a9dc70de219495b4fcb20e22302f22508162bc4aa876ce0031e\"" Apr 30 03:51:22.982713 kubelet[3080]: I0430 03:51:22.982695 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkg44\" (UniqueName: \"kubernetes.io/projected/c3bff115-872a-4123-a57b-900a5c8e0038-kube-api-access-kkg44\") pod \"tigera-operator-789496d6f5-fv2bz\" (UID: \"c3bff115-872a-4123-a57b-900a5c8e0038\") " pod="tigera-operator/tigera-operator-789496d6f5-fv2bz" Apr 30 03:51:23.002585 kubelet[3080]: I0430 03:51:22.982718 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3bff115-872a-4123-a57b-900a5c8e0038-var-lib-calico\") pod \"tigera-operator-789496d6f5-fv2bz\" (UID: \"c3bff115-872a-4123-a57b-900a5c8e0038\") " pod="tigera-operator/tigera-operator-789496d6f5-fv2bz" Apr 30 03:51:23.002562 systemd[1]: Started 
cri-containerd-450a74a9dabc0a9dc70de219495b4fcb20e22302f22508162bc4aa876ce0031e.scope - libcontainer container 450a74a9dabc0a9dc70de219495b4fcb20e22302f22508162bc4aa876ce0031e. Apr 30 03:51:23.019280 containerd[1822]: time="2025-04-30T03:51:23.019252895Z" level=info msg="StartContainer for \"450a74a9dabc0a9dc70de219495b4fcb20e22302f22508162bc4aa876ce0031e\" returns successfully" Apr 30 03:51:23.256666 containerd[1822]: time="2025-04-30T03:51:23.256530421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-fv2bz,Uid:c3bff115-872a-4123-a57b-900a5c8e0038,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:51:23.266672 containerd[1822]: time="2025-04-30T03:51:23.266631112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:23.266672 containerd[1822]: time="2025-04-30T03:51:23.266663387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:23.266672 containerd[1822]: time="2025-04-30T03:51:23.266673792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:23.266993 containerd[1822]: time="2025-04-30T03:51:23.266760351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:23.279785 systemd[1]: Started cri-containerd-f902f68365e306221bf2d20cfb248ca0ddbb02a31a801dac2b71d2bba93b4944.scope - libcontainer container f902f68365e306221bf2d20cfb248ca0ddbb02a31a801dac2b71d2bba93b4944. 
Apr 30 03:51:23.354075 containerd[1822]: time="2025-04-30T03:51:23.354049137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-fv2bz,Uid:c3bff115-872a-4123-a57b-900a5c8e0038,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f902f68365e306221bf2d20cfb248ca0ddbb02a31a801dac2b71d2bba93b4944\"" Apr 30 03:51:23.355077 containerd[1822]: time="2025-04-30T03:51:23.355059570Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:51:25.626544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713301724.mount: Deactivated successfully. Apr 30 03:51:26.152637 containerd[1822]: time="2025-04-30T03:51:26.152614086Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:26.152863 containerd[1822]: time="2025-04-30T03:51:26.152837502Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:51:26.153204 containerd[1822]: time="2025-04-30T03:51:26.153193552Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:26.154321 containerd[1822]: time="2025-04-30T03:51:26.154269249Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:26.154712 containerd[1822]: time="2025-04-30T03:51:26.154669577Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.799589602s" Apr 30 03:51:26.154712 
containerd[1822]: time="2025-04-30T03:51:26.154685483Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:51:26.155748 containerd[1822]: time="2025-04-30T03:51:26.155712019Z" level=info msg="CreateContainer within sandbox \"f902f68365e306221bf2d20cfb248ca0ddbb02a31a801dac2b71d2bba93b4944\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:51:26.159586 containerd[1822]: time="2025-04-30T03:51:26.159540639Z" level=info msg="CreateContainer within sandbox \"f902f68365e306221bf2d20cfb248ca0ddbb02a31a801dac2b71d2bba93b4944\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"790b515a25eb1274ac4a7f93413d5f01286f0e89ac37e1c39c2938e6d5b290a0\"" Apr 30 03:51:26.159768 containerd[1822]: time="2025-04-30T03:51:26.159718409Z" level=info msg="StartContainer for \"790b515a25eb1274ac4a7f93413d5f01286f0e89ac37e1c39c2938e6d5b290a0\"" Apr 30 03:51:26.179422 systemd[1]: Started cri-containerd-790b515a25eb1274ac4a7f93413d5f01286f0e89ac37e1c39c2938e6d5b290a0.scope - libcontainer container 790b515a25eb1274ac4a7f93413d5f01286f0e89ac37e1c39c2938e6d5b290a0. 
Apr 30 03:51:26.190346 containerd[1822]: time="2025-04-30T03:51:26.190294728Z" level=info msg="StartContainer for \"790b515a25eb1274ac4a7f93413d5f01286f0e89ac37e1c39c2938e6d5b290a0\" returns successfully" Apr 30 03:51:27.113271 kubelet[3080]: I0430 03:51:27.113154 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rzv8w" podStartSLOduration=5.113136434 podStartE2EDuration="5.113136434s" podCreationTimestamp="2025-04-30 03:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:51:23.079134012 +0000 UTC m=+5.092981591" watchObservedRunningTime="2025-04-30 03:51:27.113136434 +0000 UTC m=+9.126983991" Apr 30 03:51:27.113582 kubelet[3080]: I0430 03:51:27.113324 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-fv2bz" podStartSLOduration=2.3129201249999998 podStartE2EDuration="5.113296662s" podCreationTimestamp="2025-04-30 03:51:22 +0000 UTC" firstStartedPulling="2025-04-30 03:51:23.354759464 +0000 UTC m=+5.368607023" lastFinishedPulling="2025-04-30 03:51:26.155135999 +0000 UTC m=+8.168983560" observedRunningTime="2025-04-30 03:51:27.112884132 +0000 UTC m=+9.126731701" watchObservedRunningTime="2025-04-30 03:51:27.113296662 +0000 UTC m=+9.127144231" Apr 30 03:51:29.017949 systemd[1]: Created slice kubepods-besteffort-pod4456fe9f_e907_4b7f_bfe4_575a1d8f0d31.slice - libcontainer container kubepods-besteffort-pod4456fe9f_e907_4b7f_bfe4_575a1d8f0d31.slice. Apr 30 03:51:29.034475 systemd[1]: Created slice kubepods-besteffort-pod714bdd7b_5e5c_4205_ac4e_919b3d81d318.slice - libcontainer container kubepods-besteffort-pod714bdd7b_5e5c_4205_ac4e_919b3d81d318.slice. 
Apr 30 03:51:29.127675 kubelet[3080]: I0430 03:51:29.127545 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/714bdd7b-5e5c-4205-ac4e-919b3d81d318-node-certs\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.127675 kubelet[3080]: I0430 03:51:29.127665 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-cni-log-dir\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.128643 kubelet[3080]: I0430 03:51:29.127745 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4456fe9f-e907-4b7f-bfe4-575a1d8f0d31-typha-certs\") pod \"calico-typha-7c9b5cd8d6-nvqj9\" (UID: \"4456fe9f-e907-4b7f-bfe4-575a1d8f0d31\") " pod="calico-system/calico-typha-7c9b5cd8d6-nvqj9" Apr 30 03:51:29.128643 kubelet[3080]: I0430 03:51:29.127800 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-policysync\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.128643 kubelet[3080]: I0430 03:51:29.127850 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgcqs\" (UniqueName: \"kubernetes.io/projected/714bdd7b-5e5c-4205-ac4e-919b3d81d318-kube-api-access-qgcqs\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.128643 kubelet[3080]: I0430 03:51:29.127905 
3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/714bdd7b-5e5c-4205-ac4e-919b3d81d318-tigera-ca-bundle\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.128643 kubelet[3080]: I0430 03:51:29.128063 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-var-run-calico\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.129113 kubelet[3080]: I0430 03:51:29.128152 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-var-lib-calico\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.129113 kubelet[3080]: I0430 03:51:29.128262 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-cni-net-dir\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.129113 kubelet[3080]: I0430 03:51:29.128407 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4456fe9f-e907-4b7f-bfe4-575a1d8f0d31-tigera-ca-bundle\") pod \"calico-typha-7c9b5cd8d6-nvqj9\" (UID: \"4456fe9f-e907-4b7f-bfe4-575a1d8f0d31\") " pod="calico-system/calico-typha-7c9b5cd8d6-nvqj9" Apr 30 03:51:29.129113 kubelet[3080]: I0430 03:51:29.128478 3080 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-flexvol-driver-host\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.129113 kubelet[3080]: I0430 03:51:29.128628 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkrs6\" (UniqueName: \"kubernetes.io/projected/4456fe9f-e907-4b7f-bfe4-575a1d8f0d31-kube-api-access-jkrs6\") pod \"calico-typha-7c9b5cd8d6-nvqj9\" (UID: \"4456fe9f-e907-4b7f-bfe4-575a1d8f0d31\") " pod="calico-system/calico-typha-7c9b5cd8d6-nvqj9" Apr 30 03:51:29.129602 kubelet[3080]: I0430 03:51:29.128739 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-lib-modules\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.129602 kubelet[3080]: I0430 03:51:29.128796 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-xtables-lock\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.129602 kubelet[3080]: I0430 03:51:29.128865 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/714bdd7b-5e5c-4205-ac4e-919b3d81d318-cni-bin-dir\") pod \"calico-node-tsssm\" (UID: \"714bdd7b-5e5c-4205-ac4e-919b3d81d318\") " pod="calico-system/calico-node-tsssm" Apr 30 03:51:29.167988 kubelet[3080]: E0430 03:51:29.167872 3080 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9dtx" podUID="0dca7275-6863-404d-9bdb-986dfca9c849" Apr 30 03:51:29.182438 update_engine[1809]: I20250430 03:51:29.182349 1809 update_attempter.cc:509] Updating boot flags... Apr 30 03:51:29.220326 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3570) Apr 30 03:51:29.229030 kubelet[3080]: I0430 03:51:29.229005 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0dca7275-6863-404d-9bdb-986dfca9c849-varrun\") pod \"csi-node-driver-t9dtx\" (UID: \"0dca7275-6863-404d-9bdb-986dfca9c849\") " pod="calico-system/csi-node-driver-t9dtx" Apr 30 03:51:29.229137 kubelet[3080]: I0430 03:51:29.229035 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5s5\" (UniqueName: \"kubernetes.io/projected/0dca7275-6863-404d-9bdb-986dfca9c849-kube-api-access-bj5s5\") pod \"csi-node-driver-t9dtx\" (UID: \"0dca7275-6863-404d-9bdb-986dfca9c849\") " pod="calico-system/csi-node-driver-t9dtx" Apr 30 03:51:29.229173 kubelet[3080]: I0430 03:51:29.229143 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0dca7275-6863-404d-9bdb-986dfca9c849-registration-dir\") pod \"csi-node-driver-t9dtx\" (UID: \"0dca7275-6863-404d-9bdb-986dfca9c849\") " pod="calico-system/csi-node-driver-t9dtx" Apr 30 03:51:29.229240 kubelet[3080]: I0430 03:51:29.229228 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dca7275-6863-404d-9bdb-986dfca9c849-kubelet-dir\") pod \"csi-node-driver-t9dtx\" 
(UID: \"0dca7275-6863-404d-9bdb-986dfca9c849\") " pod="calico-system/csi-node-driver-t9dtx" Apr 30 03:51:29.229282 kubelet[3080]: I0430 03:51:29.229248 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0dca7275-6863-404d-9bdb-986dfca9c849-socket-dir\") pod \"csi-node-driver-t9dtx\" (UID: \"0dca7275-6863-404d-9bdb-986dfca9c849\") " pod="calico-system/csi-node-driver-t9dtx" Apr 30 03:51:29.229505 kubelet[3080]: E0430 03:51:29.229493 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.229505 kubelet[3080]: W0430 03:51:29.229503 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.229723 kubelet[3080]: E0430 03:51:29.229710 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.229846 kubelet[3080]: E0430 03:51:29.229838 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.229892 kubelet[3080]: W0430 03:51:29.229847 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.229892 kubelet[3080]: E0430 03:51:29.229858 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.230021 kubelet[3080]: E0430 03:51:29.230013 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230021 kubelet[3080]: W0430 03:51:29.230020 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230086 kubelet[3080]: E0430 03:51:29.230029 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.230124 kubelet[3080]: E0430 03:51:29.230118 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230124 kubelet[3080]: W0430 03:51:29.230123 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230191 kubelet[3080]: E0430 03:51:29.230129 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.230228 kubelet[3080]: E0430 03:51:29.230206 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230228 kubelet[3080]: W0430 03:51:29.230210 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230228 kubelet[3080]: E0430 03:51:29.230216 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.230346 kubelet[3080]: E0430 03:51:29.230327 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230346 kubelet[3080]: W0430 03:51:29.230332 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230346 kubelet[3080]: E0430 03:51:29.230339 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.230508 kubelet[3080]: E0430 03:51:29.230498 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230508 kubelet[3080]: W0430 03:51:29.230506 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230572 kubelet[3080]: E0430 03:51:29.230515 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.230684 kubelet[3080]: E0430 03:51:29.230675 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230684 kubelet[3080]: W0430 03:51:29.230683 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230752 kubelet[3080]: E0430 03:51:29.230702 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.230799 kubelet[3080]: E0430 03:51:29.230792 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230828 kubelet[3080]: W0430 03:51:29.230798 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230828 kubelet[3080]: E0430 03:51:29.230812 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.230910 kubelet[3080]: E0430 03:51:29.230901 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.230910 kubelet[3080]: W0430 03:51:29.230909 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.230993 kubelet[3080]: E0430 03:51:29.230929 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.231028 kubelet[3080]: E0430 03:51:29.231017 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.231028 kubelet[3080]: W0430 03:51:29.231021 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.231086 kubelet[3080]: E0430 03:51:29.231036 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.231115 kubelet[3080]: E0430 03:51:29.231111 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.231147 kubelet[3080]: W0430 03:51:29.231116 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.231147 kubelet[3080]: E0430 03:51:29.231121 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.231217 kubelet[3080]: E0430 03:51:29.231211 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.231217 kubelet[3080]: W0430 03:51:29.231216 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.231279 kubelet[3080]: E0430 03:51:29.231220 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.231343 kubelet[3080]: E0430 03:51:29.231336 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.231343 kubelet[3080]: W0430 03:51:29.231341 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.231413 kubelet[3080]: E0430 03:51:29.231347 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.231549 kubelet[3080]: E0430 03:51:29.231540 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.231581 kubelet[3080]: W0430 03:51:29.231549 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.231581 kubelet[3080]: E0430 03:51:29.231558 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.231855 kubelet[3080]: E0430 03:51:29.231846 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.231855 kubelet[3080]: W0430 03:51:29.231853 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.231919 kubelet[3080]: E0430 03:51:29.231861 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.233865 kubelet[3080]: E0430 03:51:29.233828 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.233865 kubelet[3080]: W0430 03:51:29.233838 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.233865 kubelet[3080]: E0430 03:51:29.233851 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.235748 kubelet[3080]: E0430 03:51:29.235704 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.235748 kubelet[3080]: W0430 03:51:29.235715 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.235748 kubelet[3080]: E0430 03:51:29.235727 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.254340 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3566) Apr 30 03:51:29.280359 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3566) Apr 30 03:51:29.322041 containerd[1822]: time="2025-04-30T03:51:29.322017965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9b5cd8d6-nvqj9,Uid:4456fe9f-e907-4b7f-bfe4-575a1d8f0d31,Namespace:calico-system,Attempt:0,}" Apr 30 03:51:29.331430 kubelet[3080]: E0430 03:51:29.331391 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.331430 kubelet[3080]: W0430 03:51:29.331402 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.331430 kubelet[3080]: E0430 03:51:29.331414 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.331587 kubelet[3080]: E0430 03:51:29.331554 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.331587 kubelet[3080]: W0430 03:51:29.331560 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.331587 kubelet[3080]: E0430 03:51:29.331568 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.331713 kubelet[3080]: E0430 03:51:29.331705 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.331713 kubelet[3080]: W0430 03:51:29.331712 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.331768 kubelet[3080]: E0430 03:51:29.331719 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.331859 kubelet[3080]: E0430 03:51:29.331851 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.331880 kubelet[3080]: W0430 03:51:29.331858 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.331880 kubelet[3080]: E0430 03:51:29.331866 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.331960 kubelet[3080]: E0430 03:51:29.331954 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.331960 kubelet[3080]: W0430 03:51:29.331959 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.331995 kubelet[3080]: E0430 03:51:29.331964 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.332072 kubelet[3080]: E0430 03:51:29.332064 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.332093 kubelet[3080]: W0430 03:51:29.332072 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.332093 kubelet[3080]: E0430 03:51:29.332080 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.332197 kubelet[3080]: E0430 03:51:29.332192 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.332214 kubelet[3080]: W0430 03:51:29.332198 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.332214 kubelet[3080]: E0430 03:51:29.332207 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.332294 kubelet[3080]: E0430 03:51:29.332289 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.332311 kubelet[3080]: W0430 03:51:29.332294 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.332311 kubelet[3080]: E0430 03:51:29.332302 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:29.336519 containerd[1822]: time="2025-04-30T03:51:29.336500357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tsssm,Uid:714bdd7b-5e5c-4205-ac4e-919b3d81d318,Namespace:calico-system,Attempt:0,}" Apr 30 03:51:29.338340 kubelet[3080]: E0430 03:51:29.338296 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:29.338340 kubelet[3080]: W0430 03:51:29.338304 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:29.338340 kubelet[3080]: E0430 03:51:29.338311 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:51:29.359060 containerd[1822]: time="2025-04-30T03:51:29.359008784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:29.359060 containerd[1822]: time="2025-04-30T03:51:29.359040759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:29.359060 containerd[1822]: time="2025-04-30T03:51:29.359050924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:29.359161 containerd[1822]: time="2025-04-30T03:51:29.359104558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:29.361850 containerd[1822]: time="2025-04-30T03:51:29.361778953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:29.361850 containerd[1822]: time="2025-04-30T03:51:29.361817256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:29.361850 containerd[1822]: time="2025-04-30T03:51:29.361825511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:29.362076 containerd[1822]: time="2025-04-30T03:51:29.362038126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:29.386837 systemd[1]: Started cri-containerd-99e747e0b2cfce40700f5914616923e53178b78fee330404f9d699d0d545a280.scope - libcontainer container 99e747e0b2cfce40700f5914616923e53178b78fee330404f9d699d0d545a280. Apr 30 03:51:29.394903 systemd[1]: Started cri-containerd-c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348.scope - libcontainer container c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348. 
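The repeated kubelet messages above come from the FlexVolume plugin probe: kubelet invokes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init` and parses its stdout as JSON. Because that executable is absent ("executable file not found in $PATH"), kubelet reads empty output and fails with "unexpected end of JSON input". A minimal sketch of the handshake a conforming driver would perform (illustrative only, assuming the standard FlexVolume JSON status format; this is not the actual nodeagent~uds driver):

```shell
#!/bin/sh
# Illustrative FlexVolume driver stub (hypothetical; not the real nodeagent~uds driver).
# kubelet runs `<driver> init` and unmarshals stdout as JSON, so an empty
# stdout produces the "unexpected end of JSON input" errors seen in the log.
flexvolume_driver() {
  case "$1" in
    init)
      # Report success and advertise that no separate attach step is needed.
      printf '%s\n' '{"status": "Success", "capabilities": {"attach": false}}'
      ;;
    *)
      # Unhandled calls must still emit JSON so kubelet can parse the reply.
      printf '%s\n' '{"status": "Not supported"}'
      return 1
      ;;
  esac
}

flexvolume_driver init
```

Dropping such a script (executable) at the probed path would stop the error loop; the errors are otherwise harmless when no FlexVolume workloads are in use.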
Apr 30 03:51:29.444863 containerd[1822]: time="2025-04-30T03:51:29.444763859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tsssm,Uid:714bdd7b-5e5c-4205-ac4e-919b3d81d318,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348\"" Apr 30 03:51:29.446733 containerd[1822]: time="2025-04-30T03:51:29.446686569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:51:29.458282 containerd[1822]: time="2025-04-30T03:51:29.458261141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c9b5cd8d6-nvqj9,Uid:4456fe9f-e907-4b7f-bfe4-575a1d8f0d31,Namespace:calico-system,Attempt:0,} returns sandbox id \"99e747e0b2cfce40700f5914616923e53178b78fee330404f9d699d0d545a280\"" Apr 30 03:51:30.417871 kubelet[3080]: E0430 03:51:30.417776 3080 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:51:30.417871 kubelet[3080]: W0430 03:51:30.417822 3080 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:51:30.417871 kubelet[3080]: E0430 03:51:30.417863 3080 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:51:31.055722 kubelet[3080]: E0430 03:51:31.055656 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9dtx" podUID="0dca7275-6863-404d-9bdb-986dfca9c849" Apr 30 03:51:31.095568 containerd[1822]: time="2025-04-30T03:51:31.095517583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:31.095763 containerd[1822]: time="2025-04-30T03:51:31.095742749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:51:31.096085 containerd[1822]: time="2025-04-30T03:51:31.096045462Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:31.097176 containerd[1822]: time="2025-04-30T03:51:31.097135831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:31.097645 containerd[1822]: time="2025-04-30T03:51:31.097601813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.650890467s" Apr 30 03:51:31.097645 containerd[1822]: time="2025-04-30T03:51:31.097618520Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:51:31.098113 containerd[1822]: time="2025-04-30T03:51:31.098072987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:51:31.098615 containerd[1822]: time="2025-04-30T03:51:31.098601484Z" level=info msg="CreateContainer within sandbox \"c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:51:31.103309 containerd[1822]: time="2025-04-30T03:51:31.103270480Z" level=info msg="CreateContainer within sandbox \"c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531\"" Apr 30 03:51:31.103563 containerd[1822]: time="2025-04-30T03:51:31.103506719Z" level=info msg="StartContainer for \"41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531\"" Apr 30 03:51:31.127495 systemd[1]: Started cri-containerd-41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531.scope - libcontainer container 41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531. Apr 30 03:51:31.141109 containerd[1822]: time="2025-04-30T03:51:31.141086732Z" level=info msg="StartContainer for \"41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531\" returns successfully" Apr 30 03:51:31.147063 systemd[1]: cri-containerd-41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531.scope: Deactivated successfully. Apr 30 03:51:31.233712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531-rootfs.mount: Deactivated successfully. 
Apr 30 03:51:31.369725 containerd[1822]: time="2025-04-30T03:51:31.369633145Z" level=info msg="shim disconnected" id=41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531 namespace=k8s.io Apr 30 03:51:31.369725 containerd[1822]: time="2025-04-30T03:51:31.369662148Z" level=warning msg="cleaning up after shim disconnected" id=41ba11323b946bd037f0506c39399153614f716890dadcf71c4934a19e08b531 namespace=k8s.io Apr 30 03:51:31.369725 containerd[1822]: time="2025-04-30T03:51:31.369668485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:51:33.055943 kubelet[3080]: E0430 03:51:33.055806 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9dtx" podUID="0dca7275-6863-404d-9bdb-986dfca9c849" Apr 30 03:51:33.768660 containerd[1822]: time="2025-04-30T03:51:33.768636810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:33.768971 containerd[1822]: time="2025-04-30T03:51:33.768949578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:51:33.769324 containerd[1822]: time="2025-04-30T03:51:33.769305607Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:33.770270 containerd[1822]: time="2025-04-30T03:51:33.770256470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:33.770732 containerd[1822]: time="2025-04-30T03:51:33.770719236Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.672633435s" Apr 30 03:51:33.770774 containerd[1822]: time="2025-04-30T03:51:33.770734151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:51:33.771238 containerd[1822]: time="2025-04-30T03:51:33.771226686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:51:33.774162 containerd[1822]: time="2025-04-30T03:51:33.774145738Z" level=info msg="CreateContainer within sandbox \"99e747e0b2cfce40700f5914616923e53178b78fee330404f9d699d0d545a280\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:51:33.778899 containerd[1822]: time="2025-04-30T03:51:33.778853309Z" level=info msg="CreateContainer within sandbox \"99e747e0b2cfce40700f5914616923e53178b78fee330404f9d699d0d545a280\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ec451121185072b7541751a211e6a966748471a5c792c1e726b9c295e3a02d52\"" Apr 30 03:51:33.779139 containerd[1822]: time="2025-04-30T03:51:33.779108230Z" level=info msg="StartContainer for \"ec451121185072b7541751a211e6a966748471a5c792c1e726b9c295e3a02d52\"" Apr 30 03:51:33.805494 systemd[1]: Started cri-containerd-ec451121185072b7541751a211e6a966748471a5c792c1e726b9c295e3a02d52.scope - libcontainer container ec451121185072b7541751a211e6a966748471a5c792c1e726b9c295e3a02d52. 
Apr 30 03:51:33.830818 containerd[1822]: time="2025-04-30T03:51:33.830794098Z" level=info msg="StartContainer for \"ec451121185072b7541751a211e6a966748471a5c792c1e726b9c295e3a02d52\" returns successfully" Apr 30 03:51:34.128550 kubelet[3080]: I0430 03:51:34.128286 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c9b5cd8d6-nvqj9" podStartSLOduration=1.815868864 podStartE2EDuration="6.128249475s" podCreationTimestamp="2025-04-30 03:51:28 +0000 UTC" firstStartedPulling="2025-04-30 03:51:29.458777304 +0000 UTC m=+11.472624857" lastFinishedPulling="2025-04-30 03:51:33.771157909 +0000 UTC m=+15.785005468" observedRunningTime="2025-04-30 03:51:34.127838874 +0000 UTC m=+16.141686520" watchObservedRunningTime="2025-04-30 03:51:34.128249475 +0000 UTC m=+16.142097101" Apr 30 03:51:35.055895 kubelet[3080]: E0430 03:51:35.055851 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9dtx" podUID="0dca7275-6863-404d-9bdb-986dfca9c849" Apr 30 03:51:35.108730 kubelet[3080]: I0430 03:51:35.108637 3080 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:51:37.056053 kubelet[3080]: E0430 03:51:37.056012 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9dtx" podUID="0dca7275-6863-404d-9bdb-986dfca9c849" Apr 30 03:51:37.655288 containerd[1822]: time="2025-04-30T03:51:37.655234203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:37.655487 containerd[1822]: 
time="2025-04-30T03:51:37.655430562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:51:37.655901 containerd[1822]: time="2025-04-30T03:51:37.655860247Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:37.656911 containerd[1822]: time="2025-04-30T03:51:37.656870530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:37.657332 containerd[1822]: time="2025-04-30T03:51:37.657288595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.886044969s" Apr 30 03:51:37.657332 containerd[1822]: time="2025-04-30T03:51:37.657304485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:51:37.658230 containerd[1822]: time="2025-04-30T03:51:37.658214977Z" level=info msg="CreateContainer within sandbox \"c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:51:37.662878 containerd[1822]: time="2025-04-30T03:51:37.662838221Z" level=info msg="CreateContainer within sandbox \"c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412\"" Apr 30 03:51:37.663097 
containerd[1822]: time="2025-04-30T03:51:37.663082612Z" level=info msg="StartContainer for \"c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412\"" Apr 30 03:51:37.689639 systemd[1]: Started cri-containerd-c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412.scope - libcontainer container c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412. Apr 30 03:51:37.703304 containerd[1822]: time="2025-04-30T03:51:37.703280842Z" level=info msg="StartContainer for \"c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412\" returns successfully" Apr 30 03:51:38.210550 systemd[1]: cri-containerd-c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412.scope: Deactivated successfully. Apr 30 03:51:38.220777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412-rootfs.mount: Deactivated successfully. Apr 30 03:51:38.303042 kubelet[3080]: I0430 03:51:38.302989 3080 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 03:51:38.366093 systemd[1]: Created slice kubepods-burstable-podbe9545ce_5a42_48c9_a431_25d956e9ac4c.slice - libcontainer container kubepods-burstable-podbe9545ce_5a42_48c9_a431_25d956e9ac4c.slice. Apr 30 03:51:38.381047 systemd[1]: Created slice kubepods-burstable-pod68419564_d459_4b14_8200_20e2e4f891a1.slice - libcontainer container kubepods-burstable-pod68419564_d459_4b14_8200_20e2e4f891a1.slice. Apr 30 03:51:38.386895 systemd[1]: Created slice kubepods-besteffort-podde4043d1_b1f6_436a_bcda_d9e4b8fb70cc.slice - libcontainer container kubepods-besteffort-podde4043d1_b1f6_436a_bcda_d9e4b8fb70cc.slice. Apr 30 03:51:38.390727 systemd[1]: Created slice kubepods-besteffort-pod3653e43a_3062_4f92_85f9_7277b6be6efd.slice - libcontainer container kubepods-besteffort-pod3653e43a_3062_4f92_85f9_7277b6be6efd.slice. 
Apr 30 03:51:38.394193 kubelet[3080]: I0430 03:51:38.394172 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68419564-d459-4b14-8200-20e2e4f891a1-config-volume\") pod \"coredns-668d6bf9bc-j6zgx\" (UID: \"68419564-d459-4b14-8200-20e2e4f891a1\") " pod="kube-system/coredns-668d6bf9bc-j6zgx" Apr 30 03:51:38.394340 kubelet[3080]: I0430 03:51:38.394200 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kjf\" (UniqueName: \"kubernetes.io/projected/68419564-d459-4b14-8200-20e2e4f891a1-kube-api-access-p4kjf\") pod \"coredns-668d6bf9bc-j6zgx\" (UID: \"68419564-d459-4b14-8200-20e2e4f891a1\") " pod="kube-system/coredns-668d6bf9bc-j6zgx" Apr 30 03:51:38.394340 kubelet[3080]: I0430 03:51:38.394219 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkc5v\" (UniqueName: \"kubernetes.io/projected/3653e43a-3062-4f92-85f9-7277b6be6efd-kube-api-access-lkc5v\") pod \"calico-apiserver-75f647cfb9-cr4br\" (UID: \"3653e43a-3062-4f92-85f9-7277b6be6efd\") " pod="calico-apiserver/calico-apiserver-75f647cfb9-cr4br" Apr 30 03:51:38.394340 kubelet[3080]: I0430 03:51:38.394235 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3653e43a-3062-4f92-85f9-7277b6be6efd-calico-apiserver-certs\") pod \"calico-apiserver-75f647cfb9-cr4br\" (UID: \"3653e43a-3062-4f92-85f9-7277b6be6efd\") " pod="calico-apiserver/calico-apiserver-75f647cfb9-cr4br" Apr 30 03:51:38.394471 systemd[1]: Created slice kubepods-besteffort-pod2548eb5a_d1a4_481d_a809_0da6b01dba3d.slice - libcontainer container kubepods-besteffort-pod2548eb5a_d1a4_481d_a809_0da6b01dba3d.slice. 
Apr 30 03:51:38.494604 kubelet[3080]: I0430 03:51:38.494495 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2548eb5a-d1a4-481d-a809-0da6b01dba3d-calico-apiserver-certs\") pod \"calico-apiserver-75f647cfb9-n8bhw\" (UID: \"2548eb5a-d1a4-481d-a809-0da6b01dba3d\") " pod="calico-apiserver/calico-apiserver-75f647cfb9-n8bhw" Apr 30 03:51:38.494604 kubelet[3080]: I0430 03:51:38.494562 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9545ce-5a42-48c9-a431-25d956e9ac4c-config-volume\") pod \"coredns-668d6bf9bc-58hpw\" (UID: \"be9545ce-5a42-48c9-a431-25d956e9ac4c\") " pod="kube-system/coredns-668d6bf9bc-58hpw" Apr 30 03:51:38.494852 kubelet[3080]: I0430 03:51:38.494609 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdcz\" (UniqueName: \"kubernetes.io/projected/be9545ce-5a42-48c9-a431-25d956e9ac4c-kube-api-access-8qdcz\") pod \"coredns-668d6bf9bc-58hpw\" (UID: \"be9545ce-5a42-48c9-a431-25d956e9ac4c\") " pod="kube-system/coredns-668d6bf9bc-58hpw" Apr 30 03:51:38.494852 kubelet[3080]: I0430 03:51:38.494634 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzb6x\" (UniqueName: \"kubernetes.io/projected/de4043d1-b1f6-436a-bcda-d9e4b8fb70cc-kube-api-access-rzb6x\") pod \"calico-kube-controllers-64f5c844c6-4n2h7\" (UID: \"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc\") " pod="calico-system/calico-kube-controllers-64f5c844c6-4n2h7" Apr 30 03:51:38.494852 kubelet[3080]: I0430 03:51:38.494670 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72xmw\" (UniqueName: \"kubernetes.io/projected/2548eb5a-d1a4-481d-a809-0da6b01dba3d-kube-api-access-72xmw\") pod 
\"calico-apiserver-75f647cfb9-n8bhw\" (UID: \"2548eb5a-d1a4-481d-a809-0da6b01dba3d\") " pod="calico-apiserver/calico-apiserver-75f647cfb9-n8bhw" Apr 30 03:51:38.494852 kubelet[3080]: I0430 03:51:38.494713 3080 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4043d1-b1f6-436a-bcda-d9e4b8fb70cc-tigera-ca-bundle\") pod \"calico-kube-controllers-64f5c844c6-4n2h7\" (UID: \"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc\") " pod="calico-system/calico-kube-controllers-64f5c844c6-4n2h7" Apr 30 03:51:38.674837 containerd[1822]: time="2025-04-30T03:51:38.674694084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58hpw,Uid:be9545ce-5a42-48c9-a431-25d956e9ac4c,Namespace:kube-system,Attempt:0,}" Apr 30 03:51:38.685197 containerd[1822]: time="2025-04-30T03:51:38.685112827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6zgx,Uid:68419564-d459-4b14-8200-20e2e4f891a1,Namespace:kube-system,Attempt:0,}" Apr 30 03:51:38.690367 containerd[1822]: time="2025-04-30T03:51:38.690251932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f5c844c6-4n2h7,Uid:de4043d1-b1f6-436a-bcda-d9e4b8fb70cc,Namespace:calico-system,Attempt:0,}" Apr 30 03:51:38.693650 containerd[1822]: time="2025-04-30T03:51:38.693618928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-cr4br,Uid:3653e43a-3062-4f92-85f9-7277b6be6efd,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:51:38.696102 containerd[1822]: time="2025-04-30T03:51:38.696067601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-n8bhw,Uid:2548eb5a-d1a4-481d-a809-0da6b01dba3d,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:51:38.861864 containerd[1822]: time="2025-04-30T03:51:38.861771323Z" level=info msg="shim disconnected" 
id=c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412 namespace=k8s.io Apr 30 03:51:38.861864 containerd[1822]: time="2025-04-30T03:51:38.861802698Z" level=warning msg="cleaning up after shim disconnected" id=c27bc185bdb2916566426ad80d32bc5c05ca26ac8ffd36ea4058c166824a5412 namespace=k8s.io Apr 30 03:51:38.861864 containerd[1822]: time="2025-04-30T03:51:38.861808292Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:51:38.893702 containerd[1822]: time="2025-04-30T03:51:38.893654597Z" level=error msg="Failed to destroy network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.893919 containerd[1822]: time="2025-04-30T03:51:38.893757278Z" level=error msg="Failed to destroy network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.893965 containerd[1822]: time="2025-04-30T03:51:38.893931143Z" level=error msg="encountered an error cleaning up failed sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894000 containerd[1822]: time="2025-04-30T03:51:38.893972462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58hpw,Uid:be9545ce-5a42-48c9-a431-25d956e9ac4c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894045 containerd[1822]: time="2025-04-30T03:51:38.893977891Z" level=error msg="encountered an error cleaning up failed sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894081 containerd[1822]: time="2025-04-30T03:51:38.894058682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f5c844c6-4n2h7,Uid:de4043d1-b1f6-436a-bcda-d9e4b8fb70cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894136 containerd[1822]: time="2025-04-30T03:51:38.894109041Z" level=error msg="Failed to destroy network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894162 kubelet[3080]: E0430 03:51:38.894140 3080 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894205 kubelet[3080]: E0430 03:51:38.894193 3080 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-58hpw" Apr 30 03:51:38.894226 kubelet[3080]: E0430 03:51:38.894214 3080 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-58hpw" Apr 30 03:51:38.894276 kubelet[3080]: E0430 03:51:38.894258 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-58hpw_kube-system(be9545ce-5a42-48c9-a431-25d956e9ac4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-58hpw_kube-system(be9545ce-5a42-48c9-a431-25d956e9ac4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-58hpw" podUID="be9545ce-5a42-48c9-a431-25d956e9ac4c" Apr 30 03:51:38.894313 kubelet[3080]: E0430 03:51:38.894139 3080 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894313 kubelet[3080]: E0430 03:51:38.894299 3080 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f5c844c6-4n2h7" Apr 30 03:51:38.894385 containerd[1822]: time="2025-04-30T03:51:38.894290106Z" level=error msg="encountered an error cleaning up failed sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894385 containerd[1822]: time="2025-04-30T03:51:38.894325182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6zgx,Uid:68419564-d459-4b14-8200-20e2e4f891a1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894433 kubelet[3080]: E0430 03:51:38.894313 3080 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f5c844c6-4n2h7" Apr 30 03:51:38.894433 kubelet[3080]: E0430 03:51:38.894350 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64f5c844c6-4n2h7_calico-system(de4043d1-b1f6-436a-bcda-d9e4b8fb70cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64f5c844c6-4n2h7_calico-system(de4043d1-b1f6-436a-bcda-d9e4b8fb70cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64f5c844c6-4n2h7" podUID="de4043d1-b1f6-436a-bcda-d9e4b8fb70cc" Apr 30 03:51:38.894433 kubelet[3080]: E0430 03:51:38.894413 3080 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894500 kubelet[3080]: E0430 03:51:38.894437 3080 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j6zgx" Apr 30 03:51:38.894500 kubelet[3080]: E0430 03:51:38.894453 3080 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j6zgx" Apr 30 03:51:38.894500 kubelet[3080]: E0430 03:51:38.894478 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j6zgx_kube-system(68419564-d459-4b14-8200-20e2e4f891a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j6zgx_kube-system(68419564-d459-4b14-8200-20e2e4f891a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j6zgx" podUID="68419564-d459-4b14-8200-20e2e4f891a1" Apr 30 03:51:38.894804 containerd[1822]: time="2025-04-30T03:51:38.894788728Z" level=error msg="Failed to destroy network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894939 containerd[1822]: time="2025-04-30T03:51:38.894926338Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.894963 containerd[1822]: time="2025-04-30T03:51:38.894949486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-cr4br,Uid:3653e43a-3062-4f92-85f9-7277b6be6efd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.895022 kubelet[3080]: E0430 03:51:38.895012 3080 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.895043 kubelet[3080]: E0430 03:51:38.895030 3080 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f647cfb9-cr4br" Apr 30 03:51:38.895061 kubelet[3080]: E0430 03:51:38.895041 3080 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f647cfb9-cr4br" Apr 30 03:51:38.895080 kubelet[3080]: E0430 03:51:38.895057 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75f647cfb9-cr4br_calico-apiserver(3653e43a-3062-4f92-85f9-7277b6be6efd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75f647cfb9-cr4br_calico-apiserver(3653e43a-3062-4f92-85f9-7277b6be6efd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f647cfb9-cr4br" podUID="3653e43a-3062-4f92-85f9-7277b6be6efd" Apr 30 03:51:38.895332 containerd[1822]: time="2025-04-30T03:51:38.895311424Z" level=error msg="Failed to destroy network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.895475 containerd[1822]: time="2025-04-30T03:51:38.895462983Z" level=error msg="encountered an error cleaning up failed sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
03:51:38.895498 containerd[1822]: time="2025-04-30T03:51:38.895482673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-n8bhw,Uid:2548eb5a-d1a4-481d-a809-0da6b01dba3d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.895600 kubelet[3080]: E0430 03:51:38.895563 3080 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:38.895638 kubelet[3080]: E0430 03:51:38.895607 3080 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f647cfb9-n8bhw" Apr 30 03:51:38.895660 kubelet[3080]: E0430 03:51:38.895619 3080 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-75f647cfb9-n8bhw" Apr 30 03:51:38.895660 kubelet[3080]: E0430 03:51:38.895652 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75f647cfb9-n8bhw_calico-apiserver(2548eb5a-d1a4-481d-a809-0da6b01dba3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75f647cfb9-n8bhw_calico-apiserver(2548eb5a-d1a4-481d-a809-0da6b01dba3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f647cfb9-n8bhw" podUID="2548eb5a-d1a4-481d-a809-0da6b01dba3d" Apr 30 03:51:39.071007 systemd[1]: Created slice kubepods-besteffort-pod0dca7275_6863_404d_9bdb_986dfca9c849.slice - libcontainer container kubepods-besteffort-pod0dca7275_6863_404d_9bdb_986dfca9c849.slice. 
Apr 30 03:51:39.076697 containerd[1822]: time="2025-04-30T03:51:39.076585109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9dtx,Uid:0dca7275-6863-404d-9bdb-986dfca9c849,Namespace:calico-system,Attempt:0,}" Apr 30 03:51:39.107230 containerd[1822]: time="2025-04-30T03:51:39.107172402Z" level=error msg="Failed to destroy network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.107394 containerd[1822]: time="2025-04-30T03:51:39.107349588Z" level=error msg="encountered an error cleaning up failed sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.107394 containerd[1822]: time="2025-04-30T03:51:39.107381984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9dtx,Uid:0dca7275-6863-404d-9bdb-986dfca9c849,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.107569 kubelet[3080]: E0430 03:51:39.107522 3080 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.107569 kubelet[3080]: E0430 03:51:39.107556 3080 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t9dtx" Apr 30 03:51:39.107633 kubelet[3080]: E0430 03:51:39.107571 3080 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t9dtx" Apr 30 03:51:39.107633 kubelet[3080]: E0430 03:51:39.107596 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t9dtx_calico-system(0dca7275-6863-404d-9bdb-986dfca9c849)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t9dtx_calico-system(0dca7275-6863-404d-9bdb-986dfca9c849)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t9dtx" podUID="0dca7275-6863-404d-9bdb-986dfca9c849" Apr 30 03:51:39.115160 kubelet[3080]: I0430 03:51:39.115094 3080 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:51:39.115457 containerd[1822]: time="2025-04-30T03:51:39.115431021Z" level=info msg="StopPodSandbox for \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\"" Apr 30 03:51:39.115543 containerd[1822]: time="2025-04-30T03:51:39.115525464Z" level=info msg="Ensure that sandbox ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da in task-service has been cleanup successfully" Apr 30 03:51:39.115575 kubelet[3080]: I0430 03:51:39.115553 3080 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:51:39.115781 containerd[1822]: time="2025-04-30T03:51:39.115768459Z" level=info msg="StopPodSandbox for \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\"" Apr 30 03:51:39.115908 containerd[1822]: time="2025-04-30T03:51:39.115893767Z" level=info msg="Ensure that sandbox fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15 in task-service has been cleanup successfully" Apr 30 03:51:39.116045 kubelet[3080]: I0430 03:51:39.116037 3080 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:51:39.116254 containerd[1822]: time="2025-04-30T03:51:39.116235435Z" level=info msg="StopPodSandbox for \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\"" Apr 30 03:51:39.116349 containerd[1822]: time="2025-04-30T03:51:39.116339060Z" level=info msg="Ensure that sandbox a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0 in task-service has been cleanup successfully" Apr 30 03:51:39.116579 kubelet[3080]: I0430 03:51:39.116567 3080 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:51:39.116809 containerd[1822]: 
time="2025-04-30T03:51:39.116792475Z" level=info msg="StopPodSandbox for \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\"" Apr 30 03:51:39.116896 containerd[1822]: time="2025-04-30T03:51:39.116884048Z" level=info msg="Ensure that sandbox b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901 in task-service has been cleanup successfully" Apr 30 03:51:39.117082 kubelet[3080]: I0430 03:51:39.117072 3080 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:51:39.117370 containerd[1822]: time="2025-04-30T03:51:39.117350549Z" level=info msg="StopPodSandbox for \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\"" Apr 30 03:51:39.117500 containerd[1822]: time="2025-04-30T03:51:39.117487134Z" level=info msg="Ensure that sandbox 1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89 in task-service has been cleanup successfully" Apr 30 03:51:39.118860 kubelet[3080]: I0430 03:51:39.118839 3080 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:51:39.119048 containerd[1822]: time="2025-04-30T03:51:39.119021485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:51:39.119182 containerd[1822]: time="2025-04-30T03:51:39.119168727Z" level=info msg="StopPodSandbox for \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\"" Apr 30 03:51:39.119294 containerd[1822]: time="2025-04-30T03:51:39.119283323Z" level=info msg="Ensure that sandbox 98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79 in task-service has been cleanup successfully" Apr 30 03:51:39.133570 containerd[1822]: time="2025-04-30T03:51:39.133533928Z" level=error msg="StopPodSandbox for \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\" failed" error="failed to destroy network for 
sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.133684 kubelet[3080]: E0430 03:51:39.133660 3080 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:51:39.133733 kubelet[3080]: E0430 03:51:39.133702 3080 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da"} Apr 30 03:51:39.133759 containerd[1822]: time="2025-04-30T03:51:39.133698191Z" level=error msg="StopPodSandbox for \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\" failed" error="failed to destroy network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.133781 kubelet[3080]: E0430 03:51:39.133742 3080 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0dca7275-6863-404d-9bdb-986dfca9c849\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:51:39.133781 kubelet[3080]: E0430 03:51:39.133757 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0dca7275-6863-404d-9bdb-986dfca9c849\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t9dtx" podUID="0dca7275-6863-404d-9bdb-986dfca9c849" Apr 30 03:51:39.133781 kubelet[3080]: E0430 03:51:39.133766 3080 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:51:39.133868 kubelet[3080]: E0430 03:51:39.133783 3080 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15"} Apr 30 03:51:39.133868 kubelet[3080]: E0430 03:51:39.133799 3080 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2548eb5a-d1a4-481d-a809-0da6b01dba3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Apr 30 03:51:39.133868 kubelet[3080]: E0430 03:51:39.133810 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2548eb5a-d1a4-481d-a809-0da6b01dba3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f647cfb9-n8bhw" podUID="2548eb5a-d1a4-481d-a809-0da6b01dba3d" Apr 30 03:51:39.135193 containerd[1822]: time="2025-04-30T03:51:39.135174093Z" level=error msg="StopPodSandbox for \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\" failed" error="failed to destroy network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.135277 kubelet[3080]: E0430 03:51:39.135263 3080 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:51:39.135304 kubelet[3080]: E0430 03:51:39.135283 3080 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89"} Apr 30 03:51:39.135304 kubelet[3080]: E0430 03:51:39.135300 3080 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68419564-d459-4b14-8200-20e2e4f891a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:51:39.135367 kubelet[3080]: E0430 03:51:39.135313 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68419564-d459-4b14-8200-20e2e4f891a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j6zgx" podUID="68419564-d459-4b14-8200-20e2e4f891a1" Apr 30 03:51:39.135445 containerd[1822]: time="2025-04-30T03:51:39.135432564Z" level=error msg="StopPodSandbox for \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\" failed" error="failed to destroy network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.135507 kubelet[3080]: E0430 03:51:39.135496 3080 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:51:39.135527 kubelet[3080]: E0430 03:51:39.135511 3080 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0"} Apr 30 03:51:39.135545 kubelet[3080]: E0430 03:51:39.135525 3080 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3653e43a-3062-4f92-85f9-7277b6be6efd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:51:39.135545 kubelet[3080]: E0430 03:51:39.135535 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3653e43a-3062-4f92-85f9-7277b6be6efd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f647cfb9-cr4br" podUID="3653e43a-3062-4f92-85f9-7277b6be6efd" Apr 30 03:51:39.136134 containerd[1822]: time="2025-04-30T03:51:39.136120614Z" level=error msg="StopPodSandbox for \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\" failed" error="failed to destroy network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.136194 kubelet[3080]: E0430 03:51:39.136181 3080 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:51:39.136214 kubelet[3080]: E0430 03:51:39.136199 3080 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901"} Apr 30 03:51:39.136230 kubelet[3080]: E0430 03:51:39.136213 3080 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:51:39.136230 kubelet[3080]: E0430 03:51:39.136223 3080 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-64f5c844c6-4n2h7" podUID="de4043d1-b1f6-436a-bcda-d9e4b8fb70cc" Apr 30 03:51:39.138155 containerd[1822]: time="2025-04-30T03:51:39.138140139Z" level=error msg="StopPodSandbox for \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\" failed" error="failed to destroy network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:51:39.138219 kubelet[3080]: E0430 03:51:39.138204 3080 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:51:39.138244 kubelet[3080]: E0430 03:51:39.138223 3080 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79"} Apr 30 03:51:39.138244 kubelet[3080]: E0430 03:51:39.138239 3080 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be9545ce-5a42-48c9-a431-25d956e9ac4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:51:39.138299 kubelet[3080]: E0430 03:51:39.138250 3080 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be9545ce-5a42-48c9-a431-25d956e9ac4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-58hpw" podUID="be9545ce-5a42-48c9-a431-25d956e9ac4c" Apr 30 03:51:39.667691 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15-shm.mount: Deactivated successfully. Apr 30 03:51:39.667746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0-shm.mount: Deactivated successfully. Apr 30 03:51:39.667778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901-shm.mount: Deactivated successfully. Apr 30 03:51:39.667808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89-shm.mount: Deactivated successfully. Apr 30 03:51:39.667838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79-shm.mount: Deactivated successfully. Apr 30 03:51:44.470593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34168829.mount: Deactivated successfully. 
Apr 30 03:51:44.491287 containerd[1822]: time="2025-04-30T03:51:44.491259383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:44.491498 containerd[1822]: time="2025-04-30T03:51:44.491473442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:51:44.491798 containerd[1822]: time="2025-04-30T03:51:44.491785519Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:44.492703 containerd[1822]: time="2025-04-30T03:51:44.492691665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:44.493321 containerd[1822]: time="2025-04-30T03:51:44.493306436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 5.374251148s" Apr 30 03:51:44.493366 containerd[1822]: time="2025-04-30T03:51:44.493324649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:51:44.496932 containerd[1822]: time="2025-04-30T03:51:44.496915052Z" level=info msg="CreateContainer within sandbox \"c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:51:44.503673 containerd[1822]: time="2025-04-30T03:51:44.503653877Z" level=info 
msg="CreateContainer within sandbox \"c0f56f920e4d1cb99f2d7518e9da15cbeb2ff74ca7a4038a20ea62c1f1142348\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1e6b91921e755588a4c8b47635bb6d64ab4827e349b0a686ac9b5ac010331475\"" Apr 30 03:51:44.503985 containerd[1822]: time="2025-04-30T03:51:44.503972308Z" level=info msg="StartContainer for \"1e6b91921e755588a4c8b47635bb6d64ab4827e349b0a686ac9b5ac010331475\"" Apr 30 03:51:44.521433 systemd[1]: Started cri-containerd-1e6b91921e755588a4c8b47635bb6d64ab4827e349b0a686ac9b5ac010331475.scope - libcontainer container 1e6b91921e755588a4c8b47635bb6d64ab4827e349b0a686ac9b5ac010331475. Apr 30 03:51:44.536551 containerd[1822]: time="2025-04-30T03:51:44.536492262Z" level=info msg="StartContainer for \"1e6b91921e755588a4c8b47635bb6d64ab4827e349b0a686ac9b5ac010331475\" returns successfully" Apr 30 03:51:44.611842 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:51:44.611901 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Apr 30 03:51:45.175743 kubelet[3080]: I0430 03:51:45.175568 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tsssm" podStartSLOduration=1.128240736 podStartE2EDuration="16.175521994s" podCreationTimestamp="2025-04-30 03:51:29 +0000 UTC" firstStartedPulling="2025-04-30 03:51:29.446482314 +0000 UTC m=+11.460329871" lastFinishedPulling="2025-04-30 03:51:44.493763573 +0000 UTC m=+26.507611129" observedRunningTime="2025-04-30 03:51:45.175521538 +0000 UTC m=+27.189369164" watchObservedRunningTime="2025-04-30 03:51:45.175521994 +0000 UTC m=+27.189369596" Apr 30 03:51:46.145351 kubelet[3080]: I0430 03:51:46.145235 3080 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:51:50.057547 containerd[1822]: time="2025-04-30T03:51:50.057391993Z" level=info msg="StopPodSandbox for \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\"" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.096 [INFO][4820] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.096 [INFO][4820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" iface="eth0" netns="/var/run/netns/cni-fdc15e16-0907-2f73-ae30-f1d03cfb4dd5" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.096 [INFO][4820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" iface="eth0" netns="/var/run/netns/cni-fdc15e16-0907-2f73-ae30-f1d03cfb4dd5" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.096 [INFO][4820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" iface="eth0" netns="/var/run/netns/cni-fdc15e16-0907-2f73-ae30-f1d03cfb4dd5" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.097 [INFO][4820] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.097 [INFO][4820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.106 [INFO][4835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.106 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.106 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.110 [WARNING][4835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.110 [INFO][4835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.111 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:50.113690 containerd[1822]: 2025-04-30 03:51:50.112 [INFO][4820] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:51:50.114002 containerd[1822]: time="2025-04-30T03:51:50.113766508Z" level=info msg="TearDown network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\" successfully" Apr 30 03:51:50.114002 containerd[1822]: time="2025-04-30T03:51:50.113786439Z" level=info msg="StopPodSandbox for \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\" returns successfully" Apr 30 03:51:50.114219 containerd[1822]: time="2025-04-30T03:51:50.114204784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9dtx,Uid:0dca7275-6863-404d-9bdb-986dfca9c849,Namespace:calico-system,Attempt:1,}" Apr 30 03:51:50.115189 systemd[1]: run-netns-cni\x2dfdc15e16\x2d0907\x2d2f73\x2dae30\x2df1d03cfb4dd5.mount: Deactivated successfully. 
Apr 30 03:51:50.194312 systemd-networkd[1612]: cali137f37f926b: Link UP Apr 30 03:51:50.194479 systemd-networkd[1612]: cali137f37f926b: Gained carrier Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.128 [INFO][4847] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.135 [INFO][4847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0 csi-node-driver- calico-system 0dca7275-6863-404d-9bdb-986dfca9c849 751 0 2025-04-30 03:51:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.3-a-1bdc449bef csi-node-driver-t9dtx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali137f37f926b [] []}} ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.135 [INFO][4847] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.150 [INFO][4869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" HandleID="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" 
Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.158 [INFO][4869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" HandleID="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-1bdc449bef", "pod":"csi-node-driver-t9dtx", "timestamp":"2025-04-30 03:51:50.150966558 +0000 UTC"}, Hostname:"ci-4081.3.3-a-1bdc449bef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.158 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.158 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.158 [INFO][4869] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-1bdc449bef' Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.160 [INFO][4869] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.162 [INFO][4869] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.165 [INFO][4869] ipam/ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.166 [INFO][4869] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.168 [INFO][4869] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.168 [INFO][4869] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.169 [INFO][4869] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95 Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.171 [INFO][4869] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.182 [INFO][4869] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.44.129/26] block=192.168.44.128/26 handle="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.182 [INFO][4869] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.129/26] handle="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.182 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:50.201608 containerd[1822]: 2025-04-30 03:51:50.182 [INFO][4869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.129/26] IPv6=[] ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" HandleID="k8s-pod-network.0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.202117 containerd[1822]: 2025-04-30 03:51:50.187 [INFO][4847] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dca7275-6863-404d-9bdb-986dfca9c849", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"", Pod:"csi-node-driver-t9dtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali137f37f926b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:50.202117 containerd[1822]: 2025-04-30 03:51:50.187 [INFO][4847] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.129/32] ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.202117 containerd[1822]: 2025-04-30 03:51:50.187 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali137f37f926b ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.202117 containerd[1822]: 2025-04-30 03:51:50.194 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.202117 containerd[1822]: 2025-04-30 03:51:50.194 
[INFO][4847] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dca7275-6863-404d-9bdb-986dfca9c849", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95", Pod:"csi-node-driver-t9dtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali137f37f926b", MAC:"d2:5f:15:7c:f1:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:50.202117 containerd[1822]: 2025-04-30 03:51:50.200 [INFO][4847] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95" Namespace="calico-system" Pod="csi-node-driver-t9dtx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:51:50.211330 containerd[1822]: time="2025-04-30T03:51:50.211254146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:50.211330 containerd[1822]: time="2025-04-30T03:51:50.211301498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:50.211419 containerd[1822]: time="2025-04-30T03:51:50.211331265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:50.211419 containerd[1822]: time="2025-04-30T03:51:50.211371959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:50.226629 systemd[1]: Started cri-containerd-0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95.scope - libcontainer container 0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95. 
Apr 30 03:51:50.236843 containerd[1822]: time="2025-04-30T03:51:50.236820217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9dtx,Uid:0dca7275-6863-404d-9bdb-986dfca9c849,Namespace:calico-system,Attempt:1,} returns sandbox id \"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95\"" Apr 30 03:51:50.237551 containerd[1822]: time="2025-04-30T03:51:50.237536264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:51:51.058061 containerd[1822]: time="2025-04-30T03:51:51.058028977Z" level=info msg="StopPodSandbox for \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\"" Apr 30 03:51:51.058473 containerd[1822]: time="2025-04-30T03:51:51.058028951Z" level=info msg="StopPodSandbox for \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\"" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.110 [INFO][5005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.110 [INFO][5005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" iface="eth0" netns="/var/run/netns/cni-0ec260f9-a8a2-8882-f9dc-b8be615531c7" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.110 [INFO][5005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" iface="eth0" netns="/var/run/netns/cni-0ec260f9-a8a2-8882-f9dc-b8be615531c7" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" iface="eth0" netns="/var/run/netns/cni-0ec260f9-a8a2-8882-f9dc-b8be615531c7" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.135 [INFO][5039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.135 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.135 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.141 [WARNING][5039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.141 [INFO][5039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.142 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:51.144277 containerd[1822]: 2025-04-30 03:51:51.143 [INFO][5005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:51:51.144773 containerd[1822]: time="2025-04-30T03:51:51.144406918Z" level=info msg="TearDown network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\" successfully" Apr 30 03:51:51.144773 containerd[1822]: time="2025-04-30T03:51:51.144435795Z" level=info msg="StopPodSandbox for \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\" returns successfully" Apr 30 03:51:51.144989 containerd[1822]: time="2025-04-30T03:51:51.144964464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-n8bhw,Uid:2548eb5a-d1a4-481d-a809-0da6b01dba3d,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:51:51.146594 systemd[1]: run-netns-cni\x2d0ec260f9\x2da8a2\x2d8882\x2df9dc\x2db8be615531c7.mount: Deactivated successfully. 
Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" iface="eth0" netns="/var/run/netns/cni-195613c9-03ac-7938-b0a7-d56aa1b1ea4f" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" iface="eth0" netns="/var/run/netns/cni-195613c9-03ac-7938-b0a7-d56aa1b1ea4f" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" iface="eth0" netns="/var/run/netns/cni-195613c9-03ac-7938-b0a7-d56aa1b1ea4f" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.111 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.135 [INFO][5041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.135 [INFO][5041] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.142 [INFO][5041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.147 [WARNING][5041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.147 [INFO][5041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.148 [INFO][5041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:51.150392 containerd[1822]: 2025-04-30 03:51:51.149 [INFO][5006] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:51:51.150802 containerd[1822]: time="2025-04-30T03:51:51.150415469Z" level=info msg="TearDown network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\" successfully" Apr 30 03:51:51.150802 containerd[1822]: time="2025-04-30T03:51:51.150429451Z" level=info msg="StopPodSandbox for \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\" returns successfully" Apr 30 03:51:51.150906 containerd[1822]: time="2025-04-30T03:51:51.150802128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58hpw,Uid:be9545ce-5a42-48c9-a431-25d956e9ac4c,Namespace:kube-system,Attempt:1,}" Apr 30 03:51:51.152267 systemd[1]: run-netns-cni\x2d195613c9\x2d03ac\x2d7938\x2db0a7\x2dd56aa1b1ea4f.mount: Deactivated successfully. Apr 30 03:51:51.211090 systemd-networkd[1612]: calibb391bf8a30: Link UP Apr 30 03:51:51.211185 systemd-networkd[1612]: calibb391bf8a30: Gained carrier Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.161 [INFO][5079] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.169 [INFO][5079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0 calico-apiserver-75f647cfb9- calico-apiserver 2548eb5a-d1a4-481d-a809-0da6b01dba3d 759 0 2025-04-30 03:51:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75f647cfb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-1bdc449bef calico-apiserver-75f647cfb9-n8bhw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibb391bf8a30 [] []}} 
ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.169 [INFO][5079] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.185 [INFO][5123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" HandleID="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.192 [INFO][5123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" HandleID="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307d20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-1bdc449bef", "pod":"calico-apiserver-75f647cfb9-n8bhw", "timestamp":"2025-04-30 03:51:51.185790851 +0000 UTC"}, Hostname:"ci-4081.3.3-a-1bdc449bef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.192 [INFO][5123] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.192 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.192 [INFO][5123] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-1bdc449bef' Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.194 [INFO][5123] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.197 [INFO][5123] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.200 [INFO][5123] ipam/ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.201 [INFO][5123] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.203 [INFO][5123] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.203 [INFO][5123] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.204 [INFO][5123] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935 Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.206 [INFO][5123] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 
handle="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.209 [INFO][5123] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.130/26] block=192.168.44.128/26 handle="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.209 [INFO][5123] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.130/26] handle="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.209 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:51.234253 containerd[1822]: 2025-04-30 03:51:51.209 [INFO][5123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.130/26] IPv6=[] ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" HandleID="k8s-pod-network.799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.234750 containerd[1822]: 2025-04-30 03:51:51.210 [INFO][5079] cni-plugin/k8s.go 386: Populated endpoint ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2548eb5a-d1a4-481d-a809-0da6b01dba3d", ResourceVersion:"759", Generation:0, 
CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"", Pod:"calico-apiserver-75f647cfb9-n8bhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb391bf8a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:51.234750 containerd[1822]: 2025-04-30 03:51:51.210 [INFO][5079] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.130/32] ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.234750 containerd[1822]: 2025-04-30 03:51:51.210 [INFO][5079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb391bf8a30 ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.234750 containerd[1822]: 2025-04-30 03:51:51.211 [INFO][5079] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.234750 containerd[1822]: 2025-04-30 03:51:51.211 [INFO][5079] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2548eb5a-d1a4-481d-a809-0da6b01dba3d", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935", Pod:"calico-apiserver-75f647cfb9-n8bhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb391bf8a30", MAC:"fe:95:32:11:f7:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:51.234750 containerd[1822]: 2025-04-30 03:51:51.233 [INFO][5079] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-n8bhw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:51:51.243718 containerd[1822]: time="2025-04-30T03:51:51.243460351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:51.243718 containerd[1822]: time="2025-04-30T03:51:51.243680078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:51.243718 containerd[1822]: time="2025-04-30T03:51:51.243687626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:51.243826 containerd[1822]: time="2025-04-30T03:51:51.243728483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:51.269592 systemd[1]: Started cri-containerd-799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935.scope - libcontainer container 799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935. 
Apr 30 03:51:51.302693 containerd[1822]: time="2025-04-30T03:51:51.302659970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-n8bhw,Uid:2548eb5a-d1a4-481d-a809-0da6b01dba3d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935\"" Apr 30 03:51:51.318530 systemd-networkd[1612]: cali19897227976: Link UP Apr 30 03:51:51.318725 systemd-networkd[1612]: cali19897227976: Gained carrier Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.168 [INFO][5096] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.174 [INFO][5096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0 coredns-668d6bf9bc- kube-system be9545ce-5a42-48c9-a431-25d956e9ac4c 760 0 2025-04-30 03:51:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-1bdc449bef coredns-668d6bf9bc-58hpw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali19897227976 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.174 [INFO][5096] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.189 
[INFO][5131] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" HandleID="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.194 [INFO][5131] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" HandleID="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003640d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-1bdc449bef", "pod":"coredns-668d6bf9bc-58hpw", "timestamp":"2025-04-30 03:51:51.189187333 +0000 UTC"}, Hostname:"ci-4081.3.3-a-1bdc449bef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.194 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.209 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.209 [INFO][5131] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-1bdc449bef' Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.295 [INFO][5131] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.300 [INFO][5131] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.304 [INFO][5131] ipam/ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.306 [INFO][5131] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.307 [INFO][5131] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.307 [INFO][5131] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.309 [INFO][5131] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111 Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.311 [INFO][5131] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.316 [INFO][5131] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.44.131/26] block=192.168.44.128/26 handle="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.316 [INFO][5131] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.131/26] handle="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.316 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:51.326849 containerd[1822]: 2025-04-30 03:51:51.316 [INFO][5131] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.131/26] IPv6=[] ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" HandleID="k8s-pod-network.dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.327478 containerd[1822]: 2025-04-30 03:51:51.317 [INFO][5096] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be9545ce-5a42-48c9-a431-25d956e9ac4c", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"", Pod:"coredns-668d6bf9bc-58hpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19897227976", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:51.327478 containerd[1822]: 2025-04-30 03:51:51.317 [INFO][5096] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.131/32] ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.327478 containerd[1822]: 2025-04-30 03:51:51.317 [INFO][5096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19897227976 ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.327478 containerd[1822]: 2025-04-30 03:51:51.318 [INFO][5096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.327478 containerd[1822]: 2025-04-30 03:51:51.318 [INFO][5096] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be9545ce-5a42-48c9-a431-25d956e9ac4c", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111", Pod:"coredns-668d6bf9bc-58hpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19897227976", MAC:"aa:f3:81:84:32:40", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:51.327478 containerd[1822]: 2025-04-30 03:51:51.325 [INFO][5096] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111" Namespace="kube-system" Pod="coredns-668d6bf9bc-58hpw" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:51:51.338442 containerd[1822]: time="2025-04-30T03:51:51.338350299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:51.338442 containerd[1822]: time="2025-04-30T03:51:51.338378054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:51.338442 containerd[1822]: time="2025-04-30T03:51:51.338385177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:51.338558 containerd[1822]: time="2025-04-30T03:51:51.338444036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:51.358517 systemd[1]: Started cri-containerd-dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111.scope - libcontainer container dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111. 
Apr 30 03:51:51.384206 containerd[1822]: time="2025-04-30T03:51:51.384177791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58hpw,Uid:be9545ce-5a42-48c9-a431-25d956e9ac4c,Namespace:kube-system,Attempt:1,} returns sandbox id \"dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111\"" Apr 30 03:51:51.385465 containerd[1822]: time="2025-04-30T03:51:51.385448655Z" level=info msg="CreateContainer within sandbox \"dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:51:51.404640 containerd[1822]: time="2025-04-30T03:51:51.404622104Z" level=info msg="CreateContainer within sandbox \"dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5d2cd364c44e3b006c27c488d83a6bd046c788afe1f74459f581d0f55dc3cfc\"" Apr 30 03:51:51.404867 containerd[1822]: time="2025-04-30T03:51:51.404825525Z" level=info msg="StartContainer for \"b5d2cd364c44e3b006c27c488d83a6bd046c788afe1f74459f581d0f55dc3cfc\"" Apr 30 03:51:51.418493 systemd[1]: Started cri-containerd-b5d2cd364c44e3b006c27c488d83a6bd046c788afe1f74459f581d0f55dc3cfc.scope - libcontainer container b5d2cd364c44e3b006c27c488d83a6bd046c788afe1f74459f581d0f55dc3cfc. 
Apr 30 03:51:51.430929 containerd[1822]: time="2025-04-30T03:51:51.430907006Z" level=info msg="StartContainer for \"b5d2cd364c44e3b006c27c488d83a6bd046c788afe1f74459f581d0f55dc3cfc\" returns successfully" Apr 30 03:51:51.671731 systemd-networkd[1612]: cali137f37f926b: Gained IPv6LL Apr 30 03:51:51.964225 containerd[1822]: time="2025-04-30T03:51:51.964171470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:51.964380 containerd[1822]: time="2025-04-30T03:51:51.964323165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:51:51.964671 containerd[1822]: time="2025-04-30T03:51:51.964623159Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:51.965679 containerd[1822]: time="2025-04-30T03:51:51.965638890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:51.966342 containerd[1822]: time="2025-04-30T03:51:51.966300163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.728743885s" Apr 30 03:51:51.966342 containerd[1822]: time="2025-04-30T03:51:51.966314153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:51:51.966792 containerd[1822]: time="2025-04-30T03:51:51.966780610Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:51:51.967334 containerd[1822]: time="2025-04-30T03:51:51.967312161Z" level=info msg="CreateContainer within sandbox \"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:51:51.972342 containerd[1822]: time="2025-04-30T03:51:51.972297235Z" level=info msg="CreateContainer within sandbox \"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"605ecfe3901113a3a7fdd2700472504c391febc4a2f3af5e0600fbbcc20ba590\"" Apr 30 03:51:51.972573 containerd[1822]: time="2025-04-30T03:51:51.972527967Z" level=info msg="StartContainer for \"605ecfe3901113a3a7fdd2700472504c391febc4a2f3af5e0600fbbcc20ba590\"" Apr 30 03:51:51.989451 systemd[1]: Started cri-containerd-605ecfe3901113a3a7fdd2700472504c391febc4a2f3af5e0600fbbcc20ba590.scope - libcontainer container 605ecfe3901113a3a7fdd2700472504c391febc4a2f3af5e0600fbbcc20ba590. 
Apr 30 03:51:52.002586 containerd[1822]: time="2025-04-30T03:51:52.002564915Z" level=info msg="StartContainer for \"605ecfe3901113a3a7fdd2700472504c391febc4a2f3af5e0600fbbcc20ba590\" returns successfully" Apr 30 03:51:52.177291 kubelet[3080]: I0430 03:51:52.177203 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-58hpw" podStartSLOduration=30.177171047 podStartE2EDuration="30.177171047s" podCreationTimestamp="2025-04-30 03:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:51:52.176796316 +0000 UTC m=+34.190643914" watchObservedRunningTime="2025-04-30 03:51:52.177171047 +0000 UTC m=+34.191018632" Apr 30 03:51:52.406125 kubelet[3080]: I0430 03:51:52.406020 3080 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:51:52.503942 systemd-networkd[1612]: cali19897227976: Gained IPv6LL Apr 30 03:51:53.015478 systemd-networkd[1612]: calibb391bf8a30: Gained IPv6LL Apr 30 03:51:53.056326 kernel: bpftool[5456]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:51:53.056405 containerd[1822]: time="2025-04-30T03:51:53.056300806Z" level=info msg="StopPodSandbox for \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\"" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.095 [INFO][5473] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.096 [INFO][5473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" iface="eth0" netns="/var/run/netns/cni-9bb1205a-6b83-2aa6-301e-ff1ef3c56e7c" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.096 [INFO][5473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" iface="eth0" netns="/var/run/netns/cni-9bb1205a-6b83-2aa6-301e-ff1ef3c56e7c" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.096 [INFO][5473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" iface="eth0" netns="/var/run/netns/cni-9bb1205a-6b83-2aa6-301e-ff1ef3c56e7c" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.096 [INFO][5473] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.096 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.109 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.109 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.109 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.112 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.112 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.113 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:53.114962 containerd[1822]: 2025-04-30 03:51:53.114 [INFO][5473] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:51:53.115467 containerd[1822]: time="2025-04-30T03:51:53.115066814Z" level=info msg="TearDown network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\" successfully" Apr 30 03:51:53.115467 containerd[1822]: time="2025-04-30T03:51:53.115092462Z" level=info msg="StopPodSandbox for \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\" returns successfully" Apr 30 03:51:53.115553 containerd[1822]: time="2025-04-30T03:51:53.115539486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6zgx,Uid:68419564-d459-4b14-8200-20e2e4f891a1,Namespace:kube-system,Attempt:1,}" Apr 30 03:51:53.116621 systemd[1]: run-netns-cni\x2d9bb1205a\x2d6b83\x2d2aa6\x2d301e\x2dff1ef3c56e7c.mount: Deactivated successfully. 
Apr 30 03:51:53.168548 systemd-networkd[1612]: calie7a39854bef: Link UP Apr 30 03:51:53.168648 systemd-networkd[1612]: calie7a39854bef: Gained carrier Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.135 [INFO][5531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0 coredns-668d6bf9bc- kube-system 68419564-d459-4b14-8200-20e2e4f891a1 793 0 2025-04-30 03:51:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.3-a-1bdc449bef coredns-668d6bf9bc-j6zgx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie7a39854bef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.135 [INFO][5531] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.149 [INFO][5550] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" HandleID="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.153 [INFO][5550] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" HandleID="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f92e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.3-a-1bdc449bef", "pod":"coredns-668d6bf9bc-j6zgx", "timestamp":"2025-04-30 03:51:53.149052922 +0000 UTC"}, Hostname:"ci-4081.3.3-a-1bdc449bef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.153 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.153 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.153 [INFO][5550] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-1bdc449bef' Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.155 [INFO][5550] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.156 [INFO][5550] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.159 [INFO][5550] ipam/ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.160 [INFO][5550] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.161 [INFO][5550] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.161 [INFO][5550] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.162 [INFO][5550] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8 Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.164 [INFO][5550] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.166 [INFO][5550] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.44.132/26] block=192.168.44.128/26 handle="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.166 [INFO][5550] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.132/26] handle="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.166 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:53.174461 containerd[1822]: 2025-04-30 03:51:53.166 [INFO][5550] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.132/26] IPv6=[] ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" HandleID="k8s-pod-network.67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.175121 containerd[1822]: 2025-04-30 03:51:53.167 [INFO][5531] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"68419564-d459-4b14-8200-20e2e4f891a1", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"", Pod:"coredns-668d6bf9bc-j6zgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7a39854bef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:53.175121 containerd[1822]: 2025-04-30 03:51:53.167 [INFO][5531] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.132/32] ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.175121 containerd[1822]: 2025-04-30 03:51:53.167 [INFO][5531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7a39854bef ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.175121 containerd[1822]: 2025-04-30 03:51:53.168 [INFO][5531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.175121 containerd[1822]: 2025-04-30 03:51:53.168 [INFO][5531] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"68419564-d459-4b14-8200-20e2e4f891a1", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8", Pod:"coredns-668d6bf9bc-j6zgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7a39854bef", MAC:"4e:b3:2a:49:d2:26", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:53.175121 containerd[1822]: 2025-04-30 03:51:53.173 [INFO][5531] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-j6zgx" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:51:53.183589 containerd[1822]: time="2025-04-30T03:51:53.183519246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:53.183589 containerd[1822]: time="2025-04-30T03:51:53.183549723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:53.183589 containerd[1822]: time="2025-04-30T03:51:53.183556824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:53.183771 containerd[1822]: time="2025-04-30T03:51:53.183610039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:53.201475 systemd[1]: Started cri-containerd-67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8.scope - libcontainer container 67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8. 
Apr 30 03:51:53.215812 systemd-networkd[1612]: vxlan.calico: Link UP Apr 30 03:51:53.215815 systemd-networkd[1612]: vxlan.calico: Gained carrier Apr 30 03:51:53.224633 containerd[1822]: time="2025-04-30T03:51:53.224608273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6zgx,Uid:68419564-d459-4b14-8200-20e2e4f891a1,Namespace:kube-system,Attempt:1,} returns sandbox id \"67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8\"" Apr 30 03:51:53.226186 containerd[1822]: time="2025-04-30T03:51:53.226139930Z" level=info msg="CreateContainer within sandbox \"67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:51:53.231033 containerd[1822]: time="2025-04-30T03:51:53.230981332Z" level=info msg="CreateContainer within sandbox \"67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4bcbc7b3ce17f49acc37467409f7466018eb8e96371af70e744ad588e155923\"" Apr 30 03:51:53.231345 containerd[1822]: time="2025-04-30T03:51:53.231310190Z" level=info msg="StartContainer for \"a4bcbc7b3ce17f49acc37467409f7466018eb8e96371af70e744ad588e155923\"" Apr 30 03:51:53.258456 systemd[1]: Started cri-containerd-a4bcbc7b3ce17f49acc37467409f7466018eb8e96371af70e744ad588e155923.scope - libcontainer container a4bcbc7b3ce17f49acc37467409f7466018eb8e96371af70e744ad588e155923. 
Apr 30 03:51:53.272784 containerd[1822]: time="2025-04-30T03:51:53.272692975Z" level=info msg="StartContainer for \"a4bcbc7b3ce17f49acc37467409f7466018eb8e96371af70e744ad588e155923\" returns successfully" Apr 30 03:51:54.059092 containerd[1822]: time="2025-04-30T03:51:54.059058446Z" level=info msg="StopPodSandbox for \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\"" Apr 30 03:51:54.059536 containerd[1822]: time="2025-04-30T03:51:54.059067901Z" level=info msg="StopPodSandbox for \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\"" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5808] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" iface="eth0" netns="/var/run/netns/cni-c1e98ca4-6cc1-3c05-ba5f-b484c08aca23" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" iface="eth0" netns="/var/run/netns/cni-c1e98ca4-6cc1-3c05-ba5f-b484c08aca23" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" iface="eth0" netns="/var/run/netns/cni-c1e98ca4-6cc1-3c05-ba5f-b484c08aca23" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5808] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.096 [INFO][5837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.096 [INFO][5837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.096 [INFO][5837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.100 [WARNING][5837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.100 [INFO][5837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.101 [INFO][5837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:54.102503 containerd[1822]: 2025-04-30 03:51:54.101 [INFO][5808] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:51:54.102768 containerd[1822]: time="2025-04-30T03:51:54.102565461Z" level=info msg="TearDown network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\" successfully" Apr 30 03:51:54.102768 containerd[1822]: time="2025-04-30T03:51:54.102580791Z" level=info msg="StopPodSandbox for \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\" returns successfully" Apr 30 03:51:54.102903 containerd[1822]: time="2025-04-30T03:51:54.102889436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-cr4br,Uid:3653e43a-3062-4f92-85f9-7277b6be6efd,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5807] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5807] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" iface="eth0" netns="/var/run/netns/cni-1724d2a2-9d19-87fb-72bb-24c6dd555a05" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" iface="eth0" netns="/var/run/netns/cni-1724d2a2-9d19-87fb-72bb-24c6dd555a05" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" iface="eth0" netns="/var/run/netns/cni-1724d2a2-9d19-87fb-72bb-24c6dd555a05" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5807] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.086 [INFO][5807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.096 [INFO][5835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.096 [INFO][5835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.101 [INFO][5835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.105 [WARNING][5835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.105 [INFO][5835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.106 [INFO][5835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:54.108204 containerd[1822]: 2025-04-30 03:51:54.107 [INFO][5807] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:51:54.108514 containerd[1822]: time="2025-04-30T03:51:54.108285849Z" level=info msg="TearDown network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\" successfully" Apr 30 03:51:54.108514 containerd[1822]: time="2025-04-30T03:51:54.108301947Z" level=info msg="StopPodSandbox for \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\" returns successfully" Apr 30 03:51:54.108724 containerd[1822]: time="2025-04-30T03:51:54.108687435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f5c844c6-4n2h7,Uid:de4043d1-b1f6-436a-bcda-d9e4b8fb70cc,Namespace:calico-system,Attempt:1,}" Apr 30 03:51:54.119075 systemd[1]: run-netns-cni\x2dc1e98ca4\x2d6cc1\x2d3c05\x2dba5f\x2db484c08aca23.mount: Deactivated successfully. 
Apr 30 03:51:54.119129 systemd[1]: run-netns-cni\x2d1724d2a2\x2d9d19\x2d87fb\x2d72bb\x2d24c6dd555a05.mount: Deactivated successfully. Apr 30 03:51:54.161497 systemd-networkd[1612]: cali6f5a3a1576b: Link UP Apr 30 03:51:54.161608 systemd-networkd[1612]: cali6f5a3a1576b: Gained carrier Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.125 [INFO][5868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0 calico-apiserver-75f647cfb9- calico-apiserver 3653e43a-3062-4f92-85f9-7277b6be6efd 805 0 2025-04-30 03:51:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75f647cfb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.3-a-1bdc449bef calico-apiserver-75f647cfb9-cr4br eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f5a3a1576b [] []}} ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.125 [INFO][5868] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.139 [INFO][5913] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" 
HandleID="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.146 [INFO][5913] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" HandleID="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00069bae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.3-a-1bdc449bef", "pod":"calico-apiserver-75f647cfb9-cr4br", "timestamp":"2025-04-30 03:51:54.139816608 +0000 UTC"}, Hostname:"ci-4081.3.3-a-1bdc449bef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.146 [INFO][5913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.146 [INFO][5913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.146 [INFO][5913] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-1bdc449bef' Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.147 [INFO][5913] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.149 [INFO][5913] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.151 [INFO][5913] ipam/ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.152 [INFO][5913] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.154 [INFO][5913] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.154 [INFO][5913] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.155 [INFO][5913] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965 Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.156 [INFO][5913] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.159 [INFO][5913] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.44.133/26] block=192.168.44.128/26 handle="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.159 [INFO][5913] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.133/26] handle="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.159 [INFO][5913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:54.166648 containerd[1822]: 2025-04-30 03:51:54.159 [INFO][5913] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.133/26] IPv6=[] ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" HandleID="k8s-pod-network.13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.167043 containerd[1822]: 2025-04-30 03:51:54.160 [INFO][5868] cni-plugin/k8s.go 386: Populated endpoint ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3653e43a-3062-4f92-85f9-7277b6be6efd", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"", Pod:"calico-apiserver-75f647cfb9-cr4br", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f5a3a1576b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:54.167043 containerd[1822]: 2025-04-30 03:51:54.160 [INFO][5868] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.133/32] ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.167043 containerd[1822]: 2025-04-30 03:51:54.160 [INFO][5868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f5a3a1576b ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.167043 containerd[1822]: 2025-04-30 03:51:54.161 [INFO][5868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" 
WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.167043 containerd[1822]: 2025-04-30 03:51:54.161 [INFO][5868] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3653e43a-3062-4f92-85f9-7277b6be6efd", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965", Pod:"calico-apiserver-75f647cfb9-cr4br", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f5a3a1576b", MAC:"1e:d3:bb:94:ea:6a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:54.167043 containerd[1822]: 2025-04-30 03:51:54.165 [INFO][5868] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965" Namespace="calico-apiserver" Pod="calico-apiserver-75f647cfb9-cr4br" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:51:54.175174 kubelet[3080]: I0430 03:51:54.175140 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j6zgx" podStartSLOduration=32.175128676 podStartE2EDuration="32.175128676s" podCreationTimestamp="2025-04-30 03:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:51:54.174769344 +0000 UTC m=+36.188616900" watchObservedRunningTime="2025-04-30 03:51:54.175128676 +0000 UTC m=+36.188976229" Apr 30 03:51:54.263693 systemd-networkd[1612]: cali4e002609d5d: Link UP Apr 30 03:51:54.263809 systemd-networkd[1612]: cali4e002609d5d: Gained carrier Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.129 [INFO][5881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0 calico-kube-controllers-64f5c844c6- calico-system de4043d1-b1f6-436a-bcda-d9e4b8fb70cc 804 0 2025-04-30 03:51:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64f5c844c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.3-a-1bdc449bef calico-kube-controllers-64f5c844c6-4n2h7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
cali4e002609d5d [] []}} ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.129 [INFO][5881] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.143 [INFO][5926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" HandleID="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.148 [INFO][5926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" HandleID="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b3e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.3-a-1bdc449bef", "pod":"calico-kube-controllers-64f5c844c6-4n2h7", "timestamp":"2025-04-30 03:51:54.143891698 +0000 UTC"}, Hostname:"ci-4081.3.3-a-1bdc449bef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:51:54.269993 
containerd[1822]: 2025-04-30 03:51:54.148 [INFO][5926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.159 [INFO][5926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.159 [INFO][5926] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.3-a-1bdc449bef' Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.248 [INFO][5926] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.251 [INFO][5926] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.254 [INFO][5926] ipam/ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.255 [INFO][5926] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.256 [INFO][5926] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.256 [INFO][5926] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.257 [INFO][5926] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001 Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.259 [INFO][5926] ipam/ipam.go 1203: Writing block in order to claim 
IPs block=192.168.44.128/26 handle="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.262 [INFO][5926] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.134/26] block=192.168.44.128/26 handle="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.262 [INFO][5926] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.134/26] handle="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" host="ci-4081.3.3-a-1bdc449bef" Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.262 [INFO][5926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:51:54.269993 containerd[1822]: 2025-04-30 03:51:54.262 [INFO][5926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.134/26] IPv6=[] ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" HandleID="k8s-pod-network.98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.270439 containerd[1822]: 2025-04-30 03:51:54.262 [INFO][5881] cni-plugin/k8s.go 386: Populated endpoint ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0", GenerateName:"calico-kube-controllers-64f5c844c6-", Namespace:"calico-system", SelfLink:"", 
UID:"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f5c844c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"", Pod:"calico-kube-controllers-64f5c844c6-4n2h7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e002609d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:54.270439 containerd[1822]: 2025-04-30 03:51:54.262 [INFO][5881] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.134/32] ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.270439 containerd[1822]: 2025-04-30 03:51:54.263 [INFO][5881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e002609d5d ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" 
WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.270439 containerd[1822]: 2025-04-30 03:51:54.263 [INFO][5881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.270439 containerd[1822]: 2025-04-30 03:51:54.263 [INFO][5881] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0", GenerateName:"calico-kube-controllers-64f5c844c6-", Namespace:"calico-system", SelfLink:"", UID:"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f5c844c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", 
ContainerID:"98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001", Pod:"calico-kube-controllers-64f5c844c6-4n2h7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e002609d5d", MAC:"a6:b9:7b:c9:75:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:51:54.270439 containerd[1822]: 2025-04-30 03:51:54.269 [INFO][5881] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001" Namespace="calico-system" Pod="calico-kube-controllers-64f5c844c6-4n2h7" WorkloadEndpoint="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:51:54.270642 containerd[1822]: time="2025-04-30T03:51:54.270605227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:54.270663 containerd[1822]: time="2025-04-30T03:51:54.270637940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:54.270663 containerd[1822]: time="2025-04-30T03:51:54.270645136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:54.270754 containerd[1822]: time="2025-04-30T03:51:54.270689325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:54.279531 containerd[1822]: time="2025-04-30T03:51:54.279494744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:51:54.279531 containerd[1822]: time="2025-04-30T03:51:54.279522507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:51:54.279633 containerd[1822]: time="2025-04-30T03:51:54.279533201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:54.279633 containerd[1822]: time="2025-04-30T03:51:54.279582021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:51:54.295379 systemd-networkd[1612]: vxlan.calico: Gained IPv6LL Apr 30 03:51:54.296420 systemd[1]: Started cri-containerd-13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965.scope - libcontainer container 13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965. Apr 30 03:51:54.301838 systemd[1]: Started cri-containerd-98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001.scope - libcontainer container 98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001. 
Apr 30 03:51:54.322223 containerd[1822]: time="2025-04-30T03:51:54.322149324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f647cfb9-cr4br,Uid:3653e43a-3062-4f92-85f9-7277b6be6efd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965\"" Apr 30 03:51:54.325449 containerd[1822]: time="2025-04-30T03:51:54.325430304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f5c844c6-4n2h7,Uid:de4043d1-b1f6-436a-bcda-d9e4b8fb70cc,Namespace:calico-system,Attempt:1,} returns sandbox id \"98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001\"" Apr 30 03:51:54.359388 systemd-networkd[1612]: calie7a39854bef: Gained IPv6LL Apr 30 03:51:54.551722 containerd[1822]: time="2025-04-30T03:51:54.551670869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:54.551944 containerd[1822]: time="2025-04-30T03:51:54.551884379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:51:54.552276 containerd[1822]: time="2025-04-30T03:51:54.552236758Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:54.553204 containerd[1822]: time="2025-04-30T03:51:54.553161746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:54.553650 containerd[1822]: time="2025-04-30T03:51:54.553609263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.586812798s" Apr 30 03:51:54.553650 containerd[1822]: time="2025-04-30T03:51:54.553624995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:51:54.554134 containerd[1822]: time="2025-04-30T03:51:54.554091522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:51:54.554637 containerd[1822]: time="2025-04-30T03:51:54.554622099Z" level=info msg="CreateContainer within sandbox \"799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:51:54.558620 containerd[1822]: time="2025-04-30T03:51:54.558568578Z" level=info msg="CreateContainer within sandbox \"799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8c50fc1764e0e1914d49121356a20aae1d3bb20578fb93c7e9d3f1f78d020821\"" Apr 30 03:51:54.558771 containerd[1822]: time="2025-04-30T03:51:54.558757991Z" level=info msg="StartContainer for \"8c50fc1764e0e1914d49121356a20aae1d3bb20578fb93c7e9d3f1f78d020821\"" Apr 30 03:51:54.584613 systemd[1]: Started cri-containerd-8c50fc1764e0e1914d49121356a20aae1d3bb20578fb93c7e9d3f1f78d020821.scope - libcontainer container 8c50fc1764e0e1914d49121356a20aae1d3bb20578fb93c7e9d3f1f78d020821. 
Apr 30 03:51:54.625483 containerd[1822]: time="2025-04-30T03:51:54.625429930Z" level=info msg="StartContainer for \"8c50fc1764e0e1914d49121356a20aae1d3bb20578fb93c7e9d3f1f78d020821\" returns successfully" Apr 30 03:51:55.181445 kubelet[3080]: I0430 03:51:55.181400 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75f647cfb9-n8bhw" podStartSLOduration=22.930864484 podStartE2EDuration="26.181382348s" podCreationTimestamp="2025-04-30 03:51:29 +0000 UTC" firstStartedPulling="2025-04-30 03:51:51.303496596 +0000 UTC m=+33.317344160" lastFinishedPulling="2025-04-30 03:51:54.554014467 +0000 UTC m=+36.567862024" observedRunningTime="2025-04-30 03:51:55.181087791 +0000 UTC m=+37.194935362" watchObservedRunningTime="2025-04-30 03:51:55.181382348 +0000 UTC m=+37.195229911" Apr 30 03:51:55.639629 systemd-networkd[1612]: cali6f5a3a1576b: Gained IPv6LL Apr 30 03:51:56.151686 systemd-networkd[1612]: cali4e002609d5d: Gained IPv6LL Apr 30 03:51:56.494052 containerd[1822]: time="2025-04-30T03:51:56.493993999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:56.494272 containerd[1822]: time="2025-04-30T03:51:56.494222000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:51:56.494625 containerd[1822]: time="2025-04-30T03:51:56.494584643Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:56.495640 containerd[1822]: time="2025-04-30T03:51:56.495623383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 
03:51:56.496110 containerd[1822]: time="2025-04-30T03:51:56.496096097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.941982955s" Apr 30 03:51:56.496143 containerd[1822]: time="2025-04-30T03:51:56.496114852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:51:56.496589 containerd[1822]: time="2025-04-30T03:51:56.496575790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:51:56.497107 containerd[1822]: time="2025-04-30T03:51:56.497095384Z" level=info msg="CreateContainer within sandbox \"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:51:56.501750 containerd[1822]: time="2025-04-30T03:51:56.501732143Z" level=info msg="CreateContainer within sandbox \"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"01e735ac8ed9d66969ae2eeeeb3588c6673a55d14159c64bca6391f5341d6eb7\"" Apr 30 03:51:56.501977 containerd[1822]: time="2025-04-30T03:51:56.501964552Z" level=info msg="StartContainer for \"01e735ac8ed9d66969ae2eeeeb3588c6673a55d14159c64bca6391f5341d6eb7\"" Apr 30 03:51:56.535591 systemd[1]: Started cri-containerd-01e735ac8ed9d66969ae2eeeeb3588c6673a55d14159c64bca6391f5341d6eb7.scope - libcontainer container 01e735ac8ed9d66969ae2eeeeb3588c6673a55d14159c64bca6391f5341d6eb7. 
Apr 30 03:51:56.548342 containerd[1822]: time="2025-04-30T03:51:56.548319430Z" level=info msg="StartContainer for \"01e735ac8ed9d66969ae2eeeeb3588c6673a55d14159c64bca6391f5341d6eb7\" returns successfully" Apr 30 03:51:56.925547 containerd[1822]: time="2025-04-30T03:51:56.925482991Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:56.925755 containerd[1822]: time="2025-04-30T03:51:56.925733570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:51:56.926963 containerd[1822]: time="2025-04-30T03:51:56.926949972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 430.358464ms" Apr 30 03:51:56.926988 containerd[1822]: time="2025-04-30T03:51:56.926966106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:51:56.927621 containerd[1822]: time="2025-04-30T03:51:56.927564429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:51:56.928255 containerd[1822]: time="2025-04-30T03:51:56.928243377Z" level=info msg="CreateContainer within sandbox \"13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:51:56.932987 containerd[1822]: time="2025-04-30T03:51:56.932971983Z" level=info msg="CreateContainer within sandbox \"13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"434202acc8069c8b652f21f25338e7295ce2b4bfef1f6354d305d07189384f93\"" Apr 30 03:51:56.933230 containerd[1822]: time="2025-04-30T03:51:56.933219783Z" level=info msg="StartContainer for \"434202acc8069c8b652f21f25338e7295ce2b4bfef1f6354d305d07189384f93\"" Apr 30 03:51:56.969556 systemd[1]: Started cri-containerd-434202acc8069c8b652f21f25338e7295ce2b4bfef1f6354d305d07189384f93.scope - libcontainer container 434202acc8069c8b652f21f25338e7295ce2b4bfef1f6354d305d07189384f93. Apr 30 03:51:56.996696 containerd[1822]: time="2025-04-30T03:51:56.996671930Z" level=info msg="StartContainer for \"434202acc8069c8b652f21f25338e7295ce2b4bfef1f6354d305d07189384f93\" returns successfully" Apr 30 03:51:57.096110 kubelet[3080]: I0430 03:51:57.096064 3080 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:51:57.096110 kubelet[3080]: I0430 03:51:57.096081 3080 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:51:57.184448 kubelet[3080]: I0430 03:51:57.184411 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t9dtx" podStartSLOduration=21.925281361 podStartE2EDuration="28.184397988s" podCreationTimestamp="2025-04-30 03:51:29 +0000 UTC" firstStartedPulling="2025-04-30 03:51:50.237388599 +0000 UTC m=+32.251236153" lastFinishedPulling="2025-04-30 03:51:56.496505224 +0000 UTC m=+38.510352780" observedRunningTime="2025-04-30 03:51:57.184181829 +0000 UTC m=+39.198029391" watchObservedRunningTime="2025-04-30 03:51:57.184397988 +0000 UTC m=+39.198245543" Apr 30 03:51:57.189879 kubelet[3080]: I0430 03:51:57.189844 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75f647cfb9-cr4br" podStartSLOduration=25.585123211 
podStartE2EDuration="28.189831929s" podCreationTimestamp="2025-04-30 03:51:29 +0000 UTC" firstStartedPulling="2025-04-30 03:51:54.32273838 +0000 UTC m=+36.336585934" lastFinishedPulling="2025-04-30 03:51:56.927447096 +0000 UTC m=+38.941294652" observedRunningTime="2025-04-30 03:51:57.189537791 +0000 UTC m=+39.203385365" watchObservedRunningTime="2025-04-30 03:51:57.189831929 +0000 UTC m=+39.203679485" Apr 30 03:51:57.614501 kubelet[3080]: I0430 03:51:57.614304 3080 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:51:59.368472 containerd[1822]: time="2025-04-30T03:51:59.368418307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:59.368710 containerd[1822]: time="2025-04-30T03:51:59.368544234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:51:59.368910 containerd[1822]: time="2025-04-30T03:51:59.368895116Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:59.369986 containerd[1822]: time="2025-04-30T03:51:59.369970433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:51:59.370484 containerd[1822]: time="2025-04-30T03:51:59.370441280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 
2.442843819s" Apr 30 03:51:59.370484 containerd[1822]: time="2025-04-30T03:51:59.370456649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:51:59.373888 containerd[1822]: time="2025-04-30T03:51:59.373837369Z" level=info msg="CreateContainer within sandbox \"98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:51:59.378481 containerd[1822]: time="2025-04-30T03:51:59.378434182Z" level=info msg="CreateContainer within sandbox \"98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0b18ac54727c014bb62f091e1ae94760d1383029129bdb44c7805fa14ef3e2d2\"" Apr 30 03:51:59.378677 containerd[1822]: time="2025-04-30T03:51:59.378636779Z" level=info msg="StartContainer for \"0b18ac54727c014bb62f091e1ae94760d1383029129bdb44c7805fa14ef3e2d2\"" Apr 30 03:51:59.412452 systemd[1]: Started cri-containerd-0b18ac54727c014bb62f091e1ae94760d1383029129bdb44c7805fa14ef3e2d2.scope - libcontainer container 0b18ac54727c014bb62f091e1ae94760d1383029129bdb44c7805fa14ef3e2d2. 
Apr 30 03:51:59.443940 containerd[1822]: time="2025-04-30T03:51:59.443914976Z" level=info msg="StartContainer for \"0b18ac54727c014bb62f091e1ae94760d1383029129bdb44c7805fa14ef3e2d2\" returns successfully" Apr 30 03:52:00.211963 kubelet[3080]: I0430 03:52:00.211857 3080 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64f5c844c6-4n2h7" podStartSLOduration=26.166926325 podStartE2EDuration="31.211819118s" podCreationTimestamp="2025-04-30 03:51:29 +0000 UTC" firstStartedPulling="2025-04-30 03:51:54.325958523 +0000 UTC m=+36.339806076" lastFinishedPulling="2025-04-30 03:51:59.370851313 +0000 UTC m=+41.384698869" observedRunningTime="2025-04-30 03:52:00.210588246 +0000 UTC m=+42.224435879" watchObservedRunningTime="2025-04-30 03:52:00.211819118 +0000 UTC m=+42.225666722" Apr 30 03:52:18.050596 containerd[1822]: time="2025-04-30T03:52:18.050536858Z" level=info msg="StopPodSandbox for \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\"" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.071 [WARNING][6417] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be9545ce-5a42-48c9-a431-25d956e9ac4c", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111", Pod:"coredns-668d6bf9bc-58hpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19897227976", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.071 [INFO][6417] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.071 [INFO][6417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" iface="eth0" netns="" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.071 [INFO][6417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.071 [INFO][6417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.083 [INFO][6432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.083 [INFO][6432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.083 [INFO][6432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.088 [WARNING][6432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.088 [INFO][6432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.089 [INFO][6432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.090866 containerd[1822]: 2025-04-30 03:52:18.090 [INFO][6417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.090866 containerd[1822]: time="2025-04-30T03:52:18.090854554Z" level=info msg="TearDown network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\" successfully" Apr 30 03:52:18.090866 containerd[1822]: time="2025-04-30T03:52:18.090871774Z" level=info msg="StopPodSandbox for \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\" returns successfully" Apr 30 03:52:18.091248 containerd[1822]: time="2025-04-30T03:52:18.091231766Z" level=info msg="RemovePodSandbox for \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\"" Apr 30 03:52:18.091271 containerd[1822]: time="2025-04-30T03:52:18.091253412Z" level=info msg="Forcibly stopping sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\"" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.112 [WARNING][6458] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be9545ce-5a42-48c9-a431-25d956e9ac4c", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"dac9eff1c97547ffcd5e6565caec15361b426172b3b3cb6818b6b5186979b111", Pod:"coredns-668d6bf9bc-58hpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19897227976", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.112 [INFO][6458] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.112 [INFO][6458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" iface="eth0" netns="" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.112 [INFO][6458] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.112 [INFO][6458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.124 [INFO][6470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.124 [INFO][6470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.124 [INFO][6470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.128 [WARNING][6470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.128 [INFO][6470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" HandleID="k8s-pod-network.98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--58hpw-eth0" Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.129 [INFO][6470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.131256 containerd[1822]: 2025-04-30 03:52:18.130 [INFO][6458] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79" Apr 30 03:52:18.131256 containerd[1822]: time="2025-04-30T03:52:18.131251134Z" level=info msg="TearDown network for sandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\" successfully" Apr 30 03:52:18.132702 containerd[1822]: time="2025-04-30T03:52:18.132656430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:52:18.132702 containerd[1822]: time="2025-04-30T03:52:18.132689321Z" level=info msg="RemovePodSandbox \"98ac746db99fd78077a40dbf7d67be9dc0f8104015e6edc0b64d597fdb762f79\" returns successfully" Apr 30 03:52:18.133041 containerd[1822]: time="2025-04-30T03:52:18.132999295Z" level=info msg="StopPodSandbox for \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\"" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.151 [WARNING][6499] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2548eb5a-d1a4-481d-a809-0da6b01dba3d", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935", Pod:"calico-apiserver-75f647cfb9-n8bhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb391bf8a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.151 [INFO][6499] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.151 [INFO][6499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" iface="eth0" netns="" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.151 [INFO][6499] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.151 [INFO][6499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.162 [INFO][6512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.162 [INFO][6512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.162 [INFO][6512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.166 [WARNING][6512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.166 [INFO][6512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.167 [INFO][6512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.169050 containerd[1822]: 2025-04-30 03:52:18.168 [INFO][6499] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.169447 containerd[1822]: time="2025-04-30T03:52:18.169071500Z" level=info msg="TearDown network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\" successfully" Apr 30 03:52:18.169447 containerd[1822]: time="2025-04-30T03:52:18.169090236Z" level=info msg="StopPodSandbox for \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\" returns successfully" Apr 30 03:52:18.169447 containerd[1822]: time="2025-04-30T03:52:18.169375603Z" level=info msg="RemovePodSandbox for \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\"" Apr 30 03:52:18.169447 containerd[1822]: time="2025-04-30T03:52:18.169392300Z" level=info msg="Forcibly stopping sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\"" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.190 [WARNING][6537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2548eb5a-d1a4-481d-a809-0da6b01dba3d", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"799c77f20b208d3a99711341a25a8aae34747ca55de4b5003e710add503e9935", Pod:"calico-apiserver-75f647cfb9-n8bhw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb391bf8a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.190 [INFO][6537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.190 [INFO][6537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" iface="eth0" netns="" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.190 [INFO][6537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.190 [INFO][6537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.202 [INFO][6551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.202 [INFO][6551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.202 [INFO][6551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.207 [WARNING][6551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.207 [INFO][6551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" HandleID="k8s-pod-network.fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--n8bhw-eth0" Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.208 [INFO][6551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.209682 containerd[1822]: 2025-04-30 03:52:18.208 [INFO][6537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15" Apr 30 03:52:18.209682 containerd[1822]: time="2025-04-30T03:52:18.209673100Z" level=info msg="TearDown network for sandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\" successfully" Apr 30 03:52:18.211164 containerd[1822]: time="2025-04-30T03:52:18.211122708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:52:18.211164 containerd[1822]: time="2025-04-30T03:52:18.211150533Z" level=info msg="RemovePodSandbox \"fb64890e26d4c01419045e00f17de38669cd9af7f8a8fc1264de5b55b18bdb15\" returns successfully" Apr 30 03:52:18.211448 containerd[1822]: time="2025-04-30T03:52:18.211396777Z" level=info msg="StopPodSandbox for \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\"" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.231 [WARNING][6578] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0", GenerateName:"calico-kube-controllers-64f5c844c6-", Namespace:"calico-system", SelfLink:"", UID:"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f5c844c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001", Pod:"calico-kube-controllers-64f5c844c6-4n2h7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e002609d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.231 [INFO][6578] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.231 [INFO][6578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" iface="eth0" netns="" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.231 [INFO][6578] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.231 [INFO][6578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.244 [INFO][6594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.244 [INFO][6594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.244 [INFO][6594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.250 [WARNING][6594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.250 [INFO][6594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.251 [INFO][6594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.253128 containerd[1822]: 2025-04-30 03:52:18.252 [INFO][6578] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.253561 containerd[1822]: time="2025-04-30T03:52:18.253147836Z" level=info msg="TearDown network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\" successfully" Apr 30 03:52:18.253561 containerd[1822]: time="2025-04-30T03:52:18.253166051Z" level=info msg="StopPodSandbox for \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\" returns successfully" Apr 30 03:52:18.253561 containerd[1822]: time="2025-04-30T03:52:18.253346418Z" level=info msg="RemovePodSandbox for \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\"" Apr 30 03:52:18.253561 containerd[1822]: time="2025-04-30T03:52:18.253363165Z" level=info msg="Forcibly stopping sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\"" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.276 [WARNING][6625] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0", GenerateName:"calico-kube-controllers-64f5c844c6-", Namespace:"calico-system", SelfLink:"", UID:"de4043d1-b1f6-436a-bcda-d9e4b8fb70cc", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f5c844c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"98fa6171184764a76d6537a0778c4ca138a9c6a16970e59e125ce872f63f9001", Pod:"calico-kube-controllers-64f5c844c6-4n2h7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4e002609d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.277 [INFO][6625] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.277 [INFO][6625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" iface="eth0" netns="" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.277 [INFO][6625] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.277 [INFO][6625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.290 [INFO][6641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.290 [INFO][6641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.290 [INFO][6641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.295 [WARNING][6641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.295 [INFO][6641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" HandleID="k8s-pod-network.b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--kube--controllers--64f5c844c6--4n2h7-eth0" Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.296 [INFO][6641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.298090 containerd[1822]: 2025-04-30 03:52:18.297 [INFO][6625] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901" Apr 30 03:52:18.298494 containerd[1822]: time="2025-04-30T03:52:18.298114894Z" level=info msg="TearDown network for sandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\" successfully" Apr 30 03:52:18.299566 containerd[1822]: time="2025-04-30T03:52:18.299552268Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:52:18.299594 containerd[1822]: time="2025-04-30T03:52:18.299581806Z" level=info msg="RemovePodSandbox \"b6cd1004648cfb98a15f27fd7bee4bc3793cfe48289b906cff0412560a8f1901\" returns successfully" Apr 30 03:52:18.299856 containerd[1822]: time="2025-04-30T03:52:18.299845362Z" level=info msg="StopPodSandbox for \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\"" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.318 [WARNING][6668] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3653e43a-3062-4f92-85f9-7277b6be6efd", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965", Pod:"calico-apiserver-75f647cfb9-cr4br", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f5a3a1576b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.318 [INFO][6668] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.318 [INFO][6668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" iface="eth0" netns="" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.318 [INFO][6668] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.318 [INFO][6668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.329 [INFO][6681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.329 [INFO][6681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.329 [INFO][6681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.333 [WARNING][6681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.333 [INFO][6681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.334 [INFO][6681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.335536 containerd[1822]: 2025-04-30 03:52:18.334 [INFO][6668] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.335536 containerd[1822]: time="2025-04-30T03:52:18.335521449Z" level=info msg="TearDown network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\" successfully" Apr 30 03:52:18.335536 containerd[1822]: time="2025-04-30T03:52:18.335537013Z" level=info msg="StopPodSandbox for \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\" returns successfully" Apr 30 03:52:18.335901 containerd[1822]: time="2025-04-30T03:52:18.335823288Z" level=info msg="RemovePodSandbox for \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\"" Apr 30 03:52:18.335901 containerd[1822]: time="2025-04-30T03:52:18.335842566Z" level=info msg="Forcibly stopping sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\"" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.355 [WARNING][6708] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0", GenerateName:"calico-apiserver-75f647cfb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3653e43a-3062-4f92-85f9-7277b6be6efd", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f647cfb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"13aa1dcd77065b6e06aa6acbbf300b8ae5acbd6cc482213fde643fa5d38e3965", Pod:"calico-apiserver-75f647cfb9-cr4br", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f5a3a1576b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.355 [INFO][6708] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.355 [INFO][6708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" iface="eth0" netns="" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.355 [INFO][6708] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.355 [INFO][6708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.366 [INFO][6721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.366 [INFO][6721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.366 [INFO][6721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.371 [WARNING][6721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.371 [INFO][6721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" HandleID="k8s-pod-network.a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Workload="ci--4081.3.3--a--1bdc449bef-k8s-calico--apiserver--75f647cfb9--cr4br-eth0" Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.372 [INFO][6721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.373719 containerd[1822]: 2025-04-30 03:52:18.373 [INFO][6708] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0" Apr 30 03:52:18.373719 containerd[1822]: time="2025-04-30T03:52:18.373705357Z" level=info msg="TearDown network for sandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\" successfully" Apr 30 03:52:18.375099 containerd[1822]: time="2025-04-30T03:52:18.375059579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:52:18.375099 containerd[1822]: time="2025-04-30T03:52:18.375084449Z" level=info msg="RemovePodSandbox \"a44d73df8fe7adbac187a200e95d621577d006119e11e13b64f46d4e9798e8d0\" returns successfully" Apr 30 03:52:18.375380 containerd[1822]: time="2025-04-30T03:52:18.375357994Z" level=info msg="StopPodSandbox for \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\"" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.393 [WARNING][6748] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"68419564-d459-4b14-8200-20e2e4f891a1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8", Pod:"coredns-668d6bf9bc-j6zgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7a39854bef", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.393 [INFO][6748] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.393 [INFO][6748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" iface="eth0" netns="" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.393 [INFO][6748] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.393 [INFO][6748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.404 [INFO][6761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.404 [INFO][6761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.404 [INFO][6761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.408 [WARNING][6761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.408 [INFO][6761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.409 [INFO][6761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.410604 containerd[1822]: 2025-04-30 03:52:18.409 [INFO][6748] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.410959 containerd[1822]: time="2025-04-30T03:52:18.410625923Z" level=info msg="TearDown network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\" successfully" Apr 30 03:52:18.410959 containerd[1822]: time="2025-04-30T03:52:18.410641377Z" level=info msg="StopPodSandbox for \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\" returns successfully" Apr 30 03:52:18.410992 containerd[1822]: time="2025-04-30T03:52:18.410961424Z" level=info msg="RemovePodSandbox for \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\"" Apr 30 03:52:18.410992 containerd[1822]: time="2025-04-30T03:52:18.410976863Z" level=info msg="Forcibly stopping sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\"" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.429 [WARNING][6789] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"68419564-d459-4b14-8200-20e2e4f891a1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"67d27885fa5a23427aceffcad7ca4ad33d0735f24131eda96c2b6c53492230f8", Pod:"coredns-668d6bf9bc-j6zgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7a39854bef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.429 [INFO][6789] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.429 [INFO][6789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" iface="eth0" netns="" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.429 [INFO][6789] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.429 [INFO][6789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.440 [INFO][6803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.440 [INFO][6803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.440 [INFO][6803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.445 [WARNING][6803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.445 [INFO][6803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" HandleID="k8s-pod-network.1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Workload="ci--4081.3.3--a--1bdc449bef-k8s-coredns--668d6bf9bc--j6zgx-eth0" Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.448 [INFO][6803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.454531 containerd[1822]: 2025-04-30 03:52:18.451 [INFO][6789] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89" Apr 30 03:52:18.456000 containerd[1822]: time="2025-04-30T03:52:18.454586465Z" level=info msg="TearDown network for sandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\" successfully" Apr 30 03:52:18.458991 containerd[1822]: time="2025-04-30T03:52:18.458948832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:52:18.458991 containerd[1822]: time="2025-04-30T03:52:18.458979762Z" level=info msg="RemovePodSandbox \"1ad62bf597aa0de21b597b5f992ec64284c6bffd418fd46f570878b5f7ca7f89\" returns successfully" Apr 30 03:52:18.459272 containerd[1822]: time="2025-04-30T03:52:18.459260247Z" level=info msg="StopPodSandbox for \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\"" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.479 [WARNING][6831] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dca7275-6863-404d-9bdb-986dfca9c849", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95", Pod:"csi-node-driver-t9dtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali137f37f926b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.479 [INFO][6831] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.479 [INFO][6831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" iface="eth0" netns="" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.479 [INFO][6831] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.479 [INFO][6831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.491 [INFO][6845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.492 [INFO][6845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.492 [INFO][6845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.496 [WARNING][6845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.496 [INFO][6845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.497 [INFO][6845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.499469 containerd[1822]: 2025-04-30 03:52:18.498 [INFO][6831] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.499469 containerd[1822]: time="2025-04-30T03:52:18.499456608Z" level=info msg="TearDown network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\" successfully" Apr 30 03:52:18.499469 containerd[1822]: time="2025-04-30T03:52:18.499474037Z" level=info msg="StopPodSandbox for \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\" returns successfully" Apr 30 03:52:18.499911 containerd[1822]: time="2025-04-30T03:52:18.499743764Z" level=info msg="RemovePodSandbox for \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\"" Apr 30 03:52:18.499911 containerd[1822]: time="2025-04-30T03:52:18.499760646Z" level=info msg="Forcibly stopping sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\"" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.522 [WARNING][6873] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dca7275-6863-404d-9bdb-986dfca9c849", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 51, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.3-a-1bdc449bef", ContainerID:"0bbb7de5c31a5f7318d0cce135d2236de4bc8dd66eb74c13533292f1d5c92b95", Pod:"csi-node-driver-t9dtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali137f37f926b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.522 [INFO][6873] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.522 [INFO][6873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" iface="eth0" netns="" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.522 [INFO][6873] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.523 [INFO][6873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.536 [INFO][6887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.536 [INFO][6887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.536 [INFO][6887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.541 [WARNING][6887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.541 [INFO][6887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" HandleID="k8s-pod-network.ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Workload="ci--4081.3.3--a--1bdc449bef-k8s-csi--node--driver--t9dtx-eth0" Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.542 [INFO][6887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:52:18.544434 containerd[1822]: 2025-04-30 03:52:18.543 [INFO][6873] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da" Apr 30 03:52:18.544836 containerd[1822]: time="2025-04-30T03:52:18.544452434Z" level=info msg="TearDown network for sandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\" successfully" Apr 30 03:52:18.545886 containerd[1822]: time="2025-04-30T03:52:18.545873448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 03:52:18.545913 containerd[1822]: time="2025-04-30T03:52:18.545901697Z" level=info msg="RemovePodSandbox \"ddf98d690b5fe96080f2f1b0d2dbdd3ed5d6a65fe2f44e6a331814fef71614da\" returns successfully" Apr 30 03:56:13.239738 update_engine[1809]: I20250430 03:56:13.239591 1809 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 03:56:13.239738 update_engine[1809]: I20250430 03:56:13.239696 1809 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.240069 1809 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241377 1809 omaha_request_params.cc:62] Current group set to lts Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241616 1809 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241647 1809 update_attempter.cc:643] Scheduling an action processor start. 
Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241684 1809 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241756 1809 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241925 1809 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241953 1809 omaha_request_action.cc:272] Request: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: Apr 30 03:56:13.243890 update_engine[1809]: I20250430 03:56:13.241971 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:56:13.244610 locksmithd[1857]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 03:56:13.244764 update_engine[1809]: I20250430 03:56:13.244722 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:56:13.244935 update_engine[1809]: I20250430 03:56:13.244894 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 03:56:13.245887 update_engine[1809]: E20250430 03:56:13.245844 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:56:13.245887 update_engine[1809]: I20250430 03:56:13.245875 1809 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 03:56:23.189534 update_engine[1809]: I20250430 03:56:23.189373 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:56:23.190531 update_engine[1809]: I20250430 03:56:23.190001 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:56:23.190654 update_engine[1809]: I20250430 03:56:23.190550 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:56:23.191365 update_engine[1809]: E20250430 03:56:23.191226 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:56:23.191554 update_engine[1809]: I20250430 03:56:23.191417 1809 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 03:56:33.190006 update_engine[1809]: I20250430 03:56:33.189831 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:56:33.191057 update_engine[1809]: I20250430 03:56:33.190433 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:56:33.191057 update_engine[1809]: I20250430 03:56:33.190958 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 03:56:33.191992 update_engine[1809]: E20250430 03:56:33.191874 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:56:33.192191 update_engine[1809]: I20250430 03:56:33.192016 1809 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 03:56:43.189721 update_engine[1809]: I20250430 03:56:43.189455 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:56:43.190729 update_engine[1809]: I20250430 03:56:43.190006 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:56:43.190729 update_engine[1809]: I20250430 03:56:43.190553 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 03:56:43.191415 update_engine[1809]: E20250430 03:56:43.191284 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 03:56:43.191633 update_engine[1809]: I20250430 03:56:43.191459 1809 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 03:56:43.191633 update_engine[1809]: I20250430 03:56:43.191495 1809 omaha_request_action.cc:617] Omaha request response: Apr 30 03:56:43.191845 update_engine[1809]: E20250430 03:56:43.191656 1809 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 03:56:43.191845 update_engine[1809]: I20250430 03:56:43.191704 1809 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 03:56:43.191845 update_engine[1809]: I20250430 03:56:43.191722 1809 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 03:56:43.191845 update_engine[1809]: I20250430 03:56:43.191736 1809 update_attempter.cc:306] Processing Done. Apr 30 03:56:43.191845 update_engine[1809]: E20250430 03:56:43.191767 1809 update_attempter.cc:619] Update failed. 
Apr 30 03:56:43.191845 update_engine[1809]: I20250430 03:56:43.191784 1809 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 03:56:43.191845 update_engine[1809]: I20250430 03:56:43.191798 1809 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 03:56:43.191845 update_engine[1809]: I20250430 03:56:43.191814 1809 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 03:56:43.192634 update_engine[1809]: I20250430 03:56:43.191967 1809 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 03:56:43.192634 update_engine[1809]: I20250430 03:56:43.192030 1809 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 03:56:43.192634 update_engine[1809]: I20250430 03:56:43.192050 1809 omaha_request_action.cc:272] Request: Apr 30 03:56:43.192634 update_engine[1809]: Apr 30 03:56:43.192634 update_engine[1809]: Apr 30 03:56:43.192634 update_engine[1809]: Apr 30 03:56:43.192634 update_engine[1809]: Apr 30 03:56:43.192634 update_engine[1809]: Apr 30 03:56:43.192634 update_engine[1809]: Apr 30 03:56:43.192634 update_engine[1809]: I20250430 03:56:43.192066 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 03:56:43.192634 update_engine[1809]: I20250430 03:56:43.192484 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 03:56:43.193594 update_engine[1809]: I20250430 03:56:43.192893 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 03:56:43.193707 locksmithd[1857]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 30 03:56:43.194391 update_engine[1809]: E20250430 03:56:43.193616  1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 03:56:43.194391 update_engine[1809]: I20250430 03:56:43.193746  1809 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 03:56:43.194391 update_engine[1809]: I20250430 03:56:43.193774  1809 omaha_request_action.cc:617] Omaha request response:
Apr 30 03:56:43.194391 update_engine[1809]: I20250430 03:56:43.193792  1809 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 03:56:43.194391 update_engine[1809]: I20250430 03:56:43.193807  1809 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 03:56:43.194391 update_engine[1809]: I20250430 03:56:43.193822  1809 update_attempter.cc:306] Processing Done.
Apr 30 03:56:43.194391 update_engine[1809]: I20250430 03:56:43.193840  1809 update_attempter.cc:310] Error event sent.
Apr 30 03:56:43.194391 update_engine[1809]: I20250430 03:56:43.193863  1809 update_check_scheduler.cc:74] Next update check in 41m17s
Apr 30 03:56:43.195146 locksmithd[1857]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 30 03:57:02.609677 systemd[1]: Started sshd@9-147.75.90.203:22-139.178.68.195:58440.service - OpenSSH per-connection server daemon (139.178.68.195:58440).
Apr 30 03:57:02.649397 sshd[7589]: Accepted publickey for core from 139.178.68.195 port 58440 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:02.650301 sshd[7589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:02.653725 systemd-logind[1804]: New session 12 of user core.
Apr 30 03:57:02.672514 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 03:57:02.764945 sshd[7589]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:02.766648 systemd[1]: sshd@9-147.75.90.203:22-139.178.68.195:58440.service: Deactivated successfully.
Apr 30 03:57:02.767687 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 03:57:02.768514 systemd-logind[1804]: Session 12 logged out. Waiting for processes to exit.
Apr 30 03:57:02.769183 systemd-logind[1804]: Removed session 12.
Apr 30 03:57:07.780750 systemd[1]: Started sshd@10-147.75.90.203:22-139.178.68.195:51632.service - OpenSSH per-connection server daemon (139.178.68.195:51632).
Apr 30 03:57:07.825836 sshd[7620]: Accepted publickey for core from 139.178.68.195 port 51632 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:07.827125 sshd[7620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:07.831088 systemd-logind[1804]: New session 13 of user core.
Apr 30 03:57:07.843518 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 03:57:07.931845 sshd[7620]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:07.933551 systemd[1]: sshd@10-147.75.90.203:22-139.178.68.195:51632.service: Deactivated successfully.
Apr 30 03:57:07.934555 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 03:57:07.935288 systemd-logind[1804]: Session 13 logged out. Waiting for processes to exit.
Apr 30 03:57:07.935965 systemd-logind[1804]: Removed session 13.
Apr 30 03:57:12.955647 systemd[1]: Started sshd@11-147.75.90.203:22-139.178.68.195:51644.service - OpenSSH per-connection server daemon (139.178.68.195:51644).
Apr 30 03:57:12.983453 sshd[7648]: Accepted publickey for core from 139.178.68.195 port 51644 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:12.984265 sshd[7648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:12.986901 systemd-logind[1804]: New session 14 of user core.
Apr 30 03:57:12.999609 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 03:57:13.086105 sshd[7648]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:13.104625 systemd[1]: sshd@11-147.75.90.203:22-139.178.68.195:51644.service: Deactivated successfully.
Apr 30 03:57:13.108720 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:57:13.112188 systemd-logind[1804]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:57:13.127147 systemd[1]: Started sshd@12-147.75.90.203:22-139.178.68.195:51656.service - OpenSSH per-connection server daemon (139.178.68.195:51656).
Apr 30 03:57:13.129988 systemd-logind[1804]: Removed session 14.
Apr 30 03:57:13.191007 sshd[7674]: Accepted publickey for core from 139.178.68.195 port 51656 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:13.191881 sshd[7674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:13.194927 systemd-logind[1804]: New session 15 of user core.
Apr 30 03:57:13.214587 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 03:57:13.375456 sshd[7674]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:13.392290 systemd[1]: sshd@12-147.75.90.203:22-139.178.68.195:51656.service: Deactivated successfully.
Apr 30 03:57:13.393291 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:57:13.394124 systemd-logind[1804]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:57:13.394933 systemd[1]: Started sshd@13-147.75.90.203:22-139.178.68.195:51668.service - OpenSSH per-connection server daemon (139.178.68.195:51668).
Apr 30 03:57:13.395510 systemd-logind[1804]: Removed session 15.
Apr 30 03:57:13.431134 sshd[7699]: Accepted publickey for core from 139.178.68.195 port 51668 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:13.432062 sshd[7699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:13.435181 systemd-logind[1804]: New session 16 of user core.
Apr 30 03:57:13.459610 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:57:13.600415 sshd[7699]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:13.602552 systemd[1]: sshd@13-147.75.90.203:22-139.178.68.195:51668.service: Deactivated successfully.
Apr 30 03:57:13.603830 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:57:13.604871 systemd-logind[1804]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:57:13.605850 systemd-logind[1804]: Removed session 16.
Apr 30 03:57:18.619435 systemd[1]: Started sshd@14-147.75.90.203:22-139.178.68.195:47304.service - OpenSSH per-connection server daemon (139.178.68.195:47304).
Apr 30 03:57:18.670942 sshd[7756]: Accepted publickey for core from 139.178.68.195 port 47304 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:18.674625 sshd[7756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:18.686049 systemd-logind[1804]: New session 17 of user core.
Apr 30 03:57:18.712812 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:57:18.875196 sshd[7756]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:18.878086 systemd[1]: sshd@14-147.75.90.203:22-139.178.68.195:47304.service: Deactivated successfully.
Apr 30 03:57:18.879908 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:57:18.881291 systemd-logind[1804]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:57:18.882432 systemd-logind[1804]: Removed session 17.
Apr 30 03:57:23.903634 systemd[1]: Started sshd@15-147.75.90.203:22-139.178.68.195:47312.service - OpenSSH per-connection server daemon (139.178.68.195:47312).
Apr 30 03:57:23.942776 sshd[7789]: Accepted publickey for core from 139.178.68.195 port 47312 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:23.943793 sshd[7789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:23.947096 systemd-logind[1804]: New session 18 of user core.
Apr 30 03:57:23.965514 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:57:24.053285 sshd[7789]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:24.054999 systemd[1]: sshd@15-147.75.90.203:22-139.178.68.195:47312.service: Deactivated successfully.
Apr 30 03:57:24.055979 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:57:24.056789 systemd-logind[1804]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:57:24.057281 systemd-logind[1804]: Removed session 18.
Apr 30 03:57:29.084641 systemd[1]: Started sshd@16-147.75.90.203:22-139.178.68.195:51234.service - OpenSSH per-connection server daemon (139.178.68.195:51234).
Apr 30 03:57:29.110597 sshd[7844]: Accepted publickey for core from 139.178.68.195 port 51234 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:29.111245 sshd[7844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:29.113802 systemd-logind[1804]: New session 19 of user core.
Apr 30 03:57:29.130796 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:57:29.225076 sshd[7844]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:29.226729 systemd[1]: sshd@16-147.75.90.203:22-139.178.68.195:51234.service: Deactivated successfully.
Apr 30 03:57:29.227707 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:57:29.228505 systemd-logind[1804]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:57:29.229163 systemd-logind[1804]: Removed session 19.
Apr 30 03:57:34.252641 systemd[1]: Started sshd@17-147.75.90.203:22-139.178.68.195:51250.service - OpenSSH per-connection server daemon (139.178.68.195:51250).
Apr 30 03:57:34.283247 sshd[7892]: Accepted publickey for core from 139.178.68.195 port 51250 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:34.283949 sshd[7892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:34.286418 systemd-logind[1804]: New session 20 of user core.
Apr 30 03:57:34.295606 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:57:34.381108 sshd[7892]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:34.395036 systemd[1]: sshd@17-147.75.90.203:22-139.178.68.195:51250.service: Deactivated successfully.
Apr 30 03:57:34.395872 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:57:34.396649 systemd-logind[1804]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:57:34.397288 systemd[1]: Started sshd@18-147.75.90.203:22-139.178.68.195:51254.service - OpenSSH per-connection server daemon (139.178.68.195:51254).
Apr 30 03:57:34.397854 systemd-logind[1804]: Removed session 20.
Apr 30 03:57:34.440700 sshd[7918]: Accepted publickey for core from 139.178.68.195 port 51254 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:34.444599 sshd[7918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:34.455886 systemd-logind[1804]: New session 21 of user core.
Apr 30 03:57:34.472713 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:57:34.619266 sshd[7918]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:34.635990 systemd[1]: sshd@18-147.75.90.203:22-139.178.68.195:51254.service: Deactivated successfully.
Apr 30 03:57:34.636816 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:57:34.637584 systemd-logind[1804]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:57:34.638246 systemd[1]: Started sshd@19-147.75.90.203:22-139.178.68.195:51258.service - OpenSSH per-connection server daemon (139.178.68.195:51258).
Apr 30 03:57:34.638855 systemd-logind[1804]: Removed session 21.
Apr 30 03:57:34.673769 sshd[7942]: Accepted publickey for core from 139.178.68.195 port 51258 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:34.674543 sshd[7942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:34.677579 systemd-logind[1804]: New session 22 of user core.
Apr 30 03:57:34.689629 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:57:35.602179 sshd[7942]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:35.616930 systemd[1]: sshd@19-147.75.90.203:22-139.178.68.195:51258.service: Deactivated successfully.
Apr 30 03:57:35.618229 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:57:35.619234 systemd-logind[1804]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:57:35.620267 systemd[1]: Started sshd@20-147.75.90.203:22-139.178.68.195:37608.service - OpenSSH per-connection server daemon (139.178.68.195:37608).
Apr 30 03:57:35.621010 systemd-logind[1804]: Removed session 22.
Apr 30 03:57:35.659675 sshd[7974]: Accepted publickey for core from 139.178.68.195 port 37608 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:35.660495 sshd[7974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:35.663293 systemd-logind[1804]: New session 23 of user core.
Apr 30 03:57:35.680590 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:57:35.884634 sshd[7974]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:35.900087 systemd[1]: sshd@20-147.75.90.203:22-139.178.68.195:37608.service: Deactivated successfully.
Apr 30 03:57:35.900929 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:57:35.901627 systemd-logind[1804]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:57:35.902329 systemd[1]: Started sshd@21-147.75.90.203:22-139.178.68.195:37614.service - OpenSSH per-connection server daemon (139.178.68.195:37614).
Apr 30 03:57:35.902819 systemd-logind[1804]: Removed session 23.
Apr 30 03:57:35.957961 sshd[8004]: Accepted publickey for core from 139.178.68.195 port 37614 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:35.959480 sshd[8004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:35.963896 systemd-logind[1804]: New session 24 of user core.
Apr 30 03:57:35.981774 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:57:36.130295 sshd[8004]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:36.131914 systemd[1]: sshd@21-147.75.90.203:22-139.178.68.195:37614.service: Deactivated successfully.
Apr 30 03:57:36.132863 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:57:36.133623 systemd-logind[1804]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:57:36.134202 systemd-logind[1804]: Removed session 24.
Apr 30 03:57:41.159526 systemd[1]: Started sshd@22-147.75.90.203:22-139.178.68.195:37622.service - OpenSSH per-connection server daemon (139.178.68.195:37622).
Apr 30 03:57:41.185401 sshd[8033]: Accepted publickey for core from 139.178.68.195 port 37622 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:41.186057 sshd[8033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:41.188634 systemd-logind[1804]: New session 25 of user core.
Apr 30 03:57:41.189152 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:57:41.272362 sshd[8033]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:41.274083 systemd[1]: sshd@22-147.75.90.203:22-139.178.68.195:37622.service: Deactivated successfully.
Apr 30 03:57:41.275046 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:57:41.275842 systemd-logind[1804]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:57:41.276558 systemd-logind[1804]: Removed session 25.
Apr 30 03:57:46.298250 systemd[1]: Started sshd@23-147.75.90.203:22-139.178.68.195:42856.service - OpenSSH per-connection server daemon (139.178.68.195:42856).
Apr 30 03:57:46.332266 sshd[8056]: Accepted publickey for core from 139.178.68.195 port 42856 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:46.333140 sshd[8056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:46.335947 systemd-logind[1804]: New session 26 of user core.
Apr 30 03:57:46.359571 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:57:46.449190 sshd[8056]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:46.450928 systemd[1]: sshd@23-147.75.90.203:22-139.178.68.195:42856.service: Deactivated successfully.
Apr 30 03:57:46.452012 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:57:46.452770 systemd-logind[1804]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:57:46.453303 systemd-logind[1804]: Removed session 26.
Apr 30 03:57:51.489712 systemd[1]: Started sshd@24-147.75.90.203:22-139.178.68.195:42870.service - OpenSSH per-connection server daemon (139.178.68.195:42870).
Apr 30 03:57:51.516250 sshd[8082]: Accepted publickey for core from 139.178.68.195 port 42870 ssh2: RSA SHA256:54y5TlCU+d2oEEY9cJL1PbqF1TCGQjogRj2Z7kLT1CY
Apr 30 03:57:51.516893 sshd[8082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:57:51.519558 systemd-logind[1804]: New session 27 of user core.
Apr 30 03:57:51.520106 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 03:57:51.610506 sshd[8082]: pam_unix(sshd:session): session closed for user core
Apr 30 03:57:51.612449 systemd[1]: sshd@24-147.75.90.203:22-139.178.68.195:42870.service: Deactivated successfully.
Apr 30 03:57:51.613575 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 03:57:51.614492 systemd-logind[1804]: Session 27 logged out. Waiting for processes to exit.
Apr 30 03:57:51.615284 systemd-logind[1804]: Removed session 27.