Sep 5 14:29:28.994855 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Sep 5 14:29:28.994868 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024
Sep 5 14:29:28.994875 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 5 14:29:28.994880 kernel: BIOS-provided physical RAM map:
Sep 5 14:29:28.994884 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 5 14:29:28.994888 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 5 14:29:28.994893 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 5 14:29:28.994897 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 5 14:29:28.994901 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 5 14:29:28.994905 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b27fff] usable
Sep 5 14:29:28.994909 kernel: BIOS-e820: [mem 0x0000000081b28000-0x0000000081b28fff] ACPI NVS
Sep 5 14:29:28.994914 kernel: BIOS-e820: [mem 0x0000000081b29000-0x0000000081b29fff] reserved
Sep 5 14:29:28.994919 kernel: BIOS-e820: [mem 0x0000000081b2a000-0x000000008afccfff] usable
Sep 5 14:29:28.994923 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Sep 5 14:29:28.994928 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Sep 5 14:29:28.994933 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Sep 5 14:29:28.994938 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Sep 5 14:29:28.994943 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Sep 5 14:29:28.994948 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Sep 5 14:29:28.994952 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 5 14:29:28.994957 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 5 14:29:28.994962 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 5 14:29:28.994966 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 5 14:29:28.994971 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 5 14:29:28.994976 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Sep 5 14:29:28.994980 kernel: NX (Execute Disable) protection: active
Sep 5 14:29:28.994985 kernel: APIC: Static calls initialized
Sep 5 14:29:28.994990 kernel: SMBIOS 3.2.1 present.
Sep 5 14:29:28.994996 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Sep 5 14:29:28.995001 kernel: tsc: Detected 3400.000 MHz processor
Sep 5 14:29:28.995005 kernel: tsc: Detected 3399.906 MHz TSC
Sep 5 14:29:28.995010 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 14:29:28.995015 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 14:29:28.995020 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Sep 5 14:29:28.995025 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Sep 5 14:29:28.995030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 14:29:28.995035 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Sep 5 14:29:28.995040 kernel: Using GB pages for direct mapping
Sep 5 14:29:28.995045 kernel: ACPI: Early table checksum verification disabled
Sep 5 14:29:28.995050 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 5 14:29:28.995057 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 5 14:29:28.995062 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Sep 5 14:29:28.995067 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 5 14:29:28.995072 kernel: ACPI: FACS 0x000000008C66CF80 000040
Sep 5 14:29:28.995078 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Sep 5 14:29:28.995083 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Sep 5 14:29:28.995088 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 5 14:29:28.995093 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 5 14:29:28.995098 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 5 14:29:28.995103 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 5 14:29:28.995108 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 5 14:29:28.995114 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 5 14:29:28.995120 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 5 14:29:28.995125 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 5 14:29:28.995130 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 5 14:29:28.995135 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 5 14:29:28.995140 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 5 14:29:28.995145 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 5 14:29:28.995150 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 5 14:29:28.995155 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 5 14:29:28.995161 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 5 14:29:28.995166 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 5 14:29:28.995171 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Sep 5 14:29:28.995176 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 5 14:29:28.995181 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 5 14:29:28.995186 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 5 14:29:28.995191 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Sep 5 14:29:28.995196 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 5 14:29:28.995202 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 5 14:29:28.995207 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 5 14:29:28.995212 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 5 14:29:28.995217 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 5 14:29:28.995222 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Sep 5 14:29:28.995227 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Sep 5 14:29:28.995232 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Sep 5 14:29:28.995237 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Sep 5 14:29:28.995242 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Sep 5 14:29:28.995248 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Sep 5 14:29:28.995253 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Sep 5 14:29:28.995258 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Sep 5 14:29:28.995263 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Sep 5 14:29:28.995268 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Sep 5 14:29:28.995273 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Sep 5 14:29:28.995278 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Sep 5 14:29:28.995286 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Sep 5 14:29:28.995291 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Sep 5 14:29:28.995296 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Sep 5 14:29:28.995302 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Sep 5 14:29:28.995307 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Sep 5 14:29:28.995312 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Sep 5 14:29:28.995317 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Sep 5 14:29:28.995322 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Sep 5 14:29:28.995327 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Sep 5 14:29:28.995332 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Sep 5 14:29:28.995337 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Sep 5 14:29:28.995342 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Sep 5 14:29:28.995348 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Sep 5 14:29:28.995353 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Sep 5 14:29:28.995358 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Sep 5 14:29:28.995363 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Sep 5 14:29:28.995368 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Sep 5 14:29:28.995373 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Sep 5 14:29:28.995378 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Sep 5 14:29:28.995383 kernel: No NUMA configuration found
Sep 5 14:29:28.995388 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Sep 5 14:29:28.995394 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Sep 5 14:29:28.995399 kernel: Zone ranges:
Sep 5 14:29:28.995404 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 14:29:28.995409 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 5 14:29:28.995414 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Sep 5 14:29:28.995419 kernel: Movable zone start for each node
Sep 5 14:29:28.995425 kernel: Early memory node ranges
Sep 5 14:29:28.995430 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 5 14:29:28.995435 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 5 14:29:28.995440 kernel: node 0: [mem 0x0000000040400000-0x0000000081b27fff]
Sep 5 14:29:28.995446 kernel: node 0: [mem 0x0000000081b2a000-0x000000008afccfff]
Sep 5 14:29:28.995451 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Sep 5 14:29:28.995456 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Sep 5 14:29:28.995465 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Sep 5 14:29:28.995470 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Sep 5 14:29:28.995476 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 14:29:28.995481 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 5 14:29:28.995488 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 5 14:29:28.995493 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 5 14:29:28.995498 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Sep 5 14:29:28.995504 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Sep 5 14:29:28.995509 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Sep 5 14:29:28.995515 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Sep 5 14:29:28.995520 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 5 14:29:28.995526 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 5 14:29:28.995531 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 5 14:29:28.995537 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 5 14:29:28.995543 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 5 14:29:28.995548 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 5 14:29:28.995553 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 5 14:29:28.995559 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 5 14:29:28.995564 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 5 14:29:28.995569 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 5 14:29:28.995575 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 5 14:29:28.995580 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 5 14:29:28.995586 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 5 14:29:28.995592 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 5 14:29:28.995597 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 5 14:29:28.995602 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 5 14:29:28.995608 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 5 14:29:28.995613 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 5 14:29:28.995618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 14:29:28.995624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 14:29:28.995629 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 14:29:28.995635 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 14:29:28.995641 kernel: TSC deadline timer available
Sep 5 14:29:28.995646 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 5 14:29:28.995652 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Sep 5 14:29:28.995657 kernel: Booting paravirtualized kernel on bare hardware
Sep 5 14:29:28.995663 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 14:29:28.995668 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 5 14:29:28.995674 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144
Sep 5 14:29:28.995679 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152
Sep 5 14:29:28.995684 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 5 14:29:28.995691 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 5 14:29:28.995697 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 14:29:28.995702 kernel: random: crng init done
Sep 5 14:29:28.995707 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 5 14:29:28.995713 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 5 14:29:28.995718 kernel: Fallback order for Node 0: 0
Sep 5 14:29:28.995724 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Sep 5 14:29:28.995729 kernel: Policy zone: Normal
Sep 5 14:29:28.995735 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 14:29:28.995741 kernel: software IO TLB: area num 16.
Sep 5 14:29:28.995746 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 732412K reserved, 0K cma-reserved)
Sep 5 14:29:28.995752 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 5 14:29:28.995757 kernel: ftrace: allocating 37748 entries in 148 pages
Sep 5 14:29:28.995763 kernel: ftrace: allocated 148 pages with 3 groups
Sep 5 14:29:28.995768 kernel: Dynamic Preempt: voluntary
Sep 5 14:29:28.995774 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 14:29:28.995779 kernel: rcu: RCU event tracing is enabled.
Sep 5 14:29:28.995786 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 5 14:29:28.995792 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 14:29:28.995797 kernel: Rude variant of Tasks RCU enabled.
Sep 5 14:29:28.995802 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 14:29:28.995808 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 14:29:28.995813 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 5 14:29:28.995819 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 5 14:29:28.995824 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 14:29:28.995829 kernel: Console: colour dummy device 80x25
Sep 5 14:29:28.995836 kernel: printk: console [tty0] enabled
Sep 5 14:29:28.995841 kernel: printk: console [ttyS1] enabled
Sep 5 14:29:28.995846 kernel: ACPI: Core revision 20230628
Sep 5 14:29:28.995852 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Sep 5 14:29:28.995857 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 14:29:28.995863 kernel: DMAR: Host address width 39
Sep 5 14:29:28.995868 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 5 14:29:28.995873 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 5 14:29:28.995879 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Sep 5 14:29:28.995885 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Sep 5 14:29:28.995890 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 5 14:29:28.995896 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 5 14:29:28.995901 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 5 14:29:28.995907 kernel: x2apic enabled
Sep 5 14:29:28.995912 kernel: APIC: Switched APIC routing to: cluster x2apic
Sep 5 14:29:28.995918 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 5 14:29:28.995923 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 5 14:29:28.995929 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 5 14:29:28.995935 kernel: process: using mwait in idle threads
Sep 5 14:29:28.995940 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 5 14:29:28.995946 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Sep 5 14:29:28.995951 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 14:29:28.995956 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 5 14:29:28.995962 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 5 14:29:28.995967 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 5 14:29:28.995972 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 5 14:29:28.995978 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 5 14:29:28.995983 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 5 14:29:28.995989 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 14:29:28.995995 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 14:29:28.996000 kernel: TAA: Mitigation: TSX disabled
Sep 5 14:29:28.996006 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 5 14:29:28.996011 kernel: SRBDS: Mitigation: Microcode
Sep 5 14:29:28.996016 kernel: GDS: Mitigation: Microcode
Sep 5 14:29:28.996022 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 14:29:28.996027 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 14:29:28.996033 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 14:29:28.996038 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 5 14:29:28.996043 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 5 14:29:28.996049 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 14:29:28.996055 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 5 14:29:28.996060 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 5 14:29:28.996066 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 5 14:29:28.996071 kernel: Freeing SMP alternatives memory: 32K
Sep 5 14:29:28.996077 kernel: pid_max: default: 32768 minimum: 301
Sep 5 14:29:28.996082 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 14:29:28.996087 kernel: landlock: Up and running.
Sep 5 14:29:28.996093 kernel: SELinux: Initializing.
Sep 5 14:29:28.996098 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 14:29:28.996104 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 14:29:28.996109 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 5 14:29:28.996114 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1.
Sep 5 14:29:28.996121 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1.
Sep 5 14:29:28.996126 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1.
Sep 5 14:29:28.996132 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 5 14:29:28.996137 kernel: ... version: 4
Sep 5 14:29:28.996143 kernel: ... bit width: 48
Sep 5 14:29:28.996148 kernel: ... generic registers: 4
Sep 5 14:29:28.996153 kernel: ... value mask: 0000ffffffffffff
Sep 5 14:29:28.996159 kernel: ... max period: 00007fffffffffff
Sep 5 14:29:28.996165 kernel: ... fixed-purpose events: 3
Sep 5 14:29:28.996171 kernel: ... event mask: 000000070000000f
Sep 5 14:29:28.996176 kernel: signal: max sigframe size: 2032
Sep 5 14:29:28.996181 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 5 14:29:28.996187 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 14:29:28.996192 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 14:29:28.996198 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 5 14:29:28.996203 kernel: smp: Bringing up secondary CPUs ...
Sep 5 14:29:28.996208 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 14:29:28.996214 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Sep 5 14:29:28.996220 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 5 14:29:28.996226 kernel: smp: Brought up 1 node, 16 CPUs
Sep 5 14:29:28.996231 kernel: smpboot: Max logical packages: 1
Sep 5 14:29:28.996237 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 5 14:29:28.996242 kernel: devtmpfs: initialized
Sep 5 14:29:28.996248 kernel: x86/mm: Memory block size: 128MB
Sep 5 14:29:28.996253 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b28000-0x81b28fff] (4096 bytes)
Sep 5 14:29:28.996258 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Sep 5 14:29:28.996265 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 14:29:28.996270 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 5 14:29:28.996276 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 14:29:28.996281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 14:29:28.996288 kernel: audit: initializing netlink subsys (disabled)
Sep 5 14:29:28.996294 kernel: audit: type=2000 audit(1725546563.039:1): state=initialized audit_enabled=0 res=1
Sep 5 14:29:28.996299 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 14:29:28.996304 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 14:29:28.996310 kernel: cpuidle: using governor menu
Sep 5 14:29:28.996316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 14:29:28.996321 kernel: dca service started, version 1.12.1
Sep 5 14:29:28.996327 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 5 14:29:28.996332 kernel: PCI: Using configuration type 1 for base access
Sep 5 14:29:28.996338 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 5 14:29:28.996343 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 14:29:28.996348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 14:29:28.996354 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 14:29:28.996359 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 14:29:28.996366 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 14:29:28.996371 kernel: ACPI: Added _OSI(Module Device)
Sep 5 14:29:28.996376 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 14:29:28.996382 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 5 14:29:28.996387 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 14:29:28.996393 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 5 14:29:28.996398 kernel: ACPI: Dynamic OEM Table Load:
Sep 5 14:29:28.996403 kernel: ACPI: SSDT 0xFFFF960480E43000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Sep 5 14:29:28.996409 kernel: ACPI: Dynamic OEM Table Load:
Sep 5 14:29:28.996415 kernel: ACPI: SSDT 0xFFFF960481E10000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 5 14:29:28.996420 kernel: ACPI: Dynamic OEM Table Load:
Sep 5 14:29:28.996426 kernel: ACPI: SSDT 0xFFFF960480DEB500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 5 14:29:28.996431 kernel: ACPI: Dynamic OEM Table Load:
Sep 5 14:29:28.996437 kernel: ACPI: SSDT 0xFFFF960481E13000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 5 14:29:28.996442 kernel: ACPI: Dynamic OEM Table Load:
Sep 5 14:29:28.996447 kernel: ACPI: SSDT 0xFFFF960480E59000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 5 14:29:28.996453 kernel: ACPI: Dynamic OEM Table Load:
Sep 5 14:29:28.996458 kernel: ACPI: SSDT 0xFFFF960480E42400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Sep 5 14:29:28.996463 kernel: ACPI: _OSC evaluated successfully for all CPUs
Sep 5 14:29:28.996470 kernel: ACPI: Interpreter enabled
Sep 5 14:29:28.996475 kernel: ACPI: PM: (supports S0 S5)
Sep 5 14:29:28.996480 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 14:29:28.996486 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 5 14:29:28.996491 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 5 14:29:28.996496 kernel: HEST: Table parsing has been initialized.
Sep 5 14:29:28.996502 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 5 14:29:28.996507 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 14:29:28.996513 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 14:29:28.996519 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 5 14:29:28.996525 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Sep 5 14:29:28.996530 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Sep 5 14:29:28.996536 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Sep 5 14:29:28.996541 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Sep 5 14:29:28.996547 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Sep 5 14:29:28.996552 kernel: ACPI: \_TZ_.FN00: New power resource
Sep 5 14:29:28.996557 kernel: ACPI: \_TZ_.FN01: New power resource
Sep 5 14:29:28.996563 kernel: ACPI: \_TZ_.FN02: New power resource
Sep 5 14:29:28.996569 kernel: ACPI: \_TZ_.FN03: New power resource
Sep 5 14:29:28.996575 kernel: ACPI: \_TZ_.FN04: New power resource
Sep 5 14:29:28.996580 kernel: ACPI: \PIN_: New power resource
Sep 5 14:29:28.996586 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 5 14:29:28.996662 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 14:29:28.996715 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 5 14:29:28.996763 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 5 14:29:28.996772 kernel: PCI host bridge to bus 0000:00
Sep 5 14:29:28.996823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 14:29:28.996868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 14:29:28.996911 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 14:29:28.996953 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Sep 5 14:29:28.996995 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 5 14:29:28.997037 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 5 14:29:28.997100 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 5 14:29:28.997154 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 5 14:29:28.997204 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 5 14:29:28.997257 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 5 14:29:28.997308 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Sep 5 14:29:28.997401 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 5 14:29:28.997454 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Sep 5 14:29:28.997505 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 5 14:29:28.997554 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Sep 5 14:29:28.997600 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 5 14:29:28.997654 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 5 14:29:28.997702 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Sep 5 14:29:28.997751 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Sep 5 14:29:28.997802 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 5 14:29:28.997851 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 5 14:29:28.997901 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 5 14:29:28.997949 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 5 14:29:28.998001 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 5 14:29:28.998051 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Sep 5 14:29:28.998101 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 5 14:29:28.998158 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 5 14:29:28.998209 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Sep 5 14:29:28.998256 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 5 14:29:28.998348 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 5 14:29:28.998397 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Sep 5 14:29:28.998448 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 5 14:29:28.998499 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 5 14:29:28.998548 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Sep 5 14:29:28.998595 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Sep 5 14:29:28.998643 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Sep 5 14:29:28.998690 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Sep 5 14:29:28.998738 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Sep 5 14:29:28.998790 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Sep 5 14:29:28.998837 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 5 14:29:28.998892 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 5 14:29:28.998941 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 5 14:29:28.998995 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 5 14:29:28.999047 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 5 14:29:28.999102 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 5 14:29:28.999151 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 5 14:29:28.999204 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 5 14:29:28.999252 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 5 14:29:28.999311 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Sep 5 14:29:28.999401 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Sep 5 14:29:28.999454 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 5 14:29:28.999504 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 5 14:29:28.999559 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 5 14:29:28.999612 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 5 14:29:28.999662 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Sep 5 14:29:28.999711 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 5 14:29:28.999763 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 5 14:29:28.999811 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 5 14:29:28.999867 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Sep 5 14:29:28.999918 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 5 14:29:28.999970 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Sep 5 14:29:29.000019 kernel: pci 0000:01:00.0: PME# supported from D3cold
Sep 5 14:29:29.000068 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 5 14:29:29.000118 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 5 14:29:29.000172 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Sep 5 14:29:29.000222 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 5 14:29:29.000272 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Sep 5 14:29:29.000367 kernel: pci 0000:01:00.1: PME# supported from D3cold
Sep 5 14:29:29.000417 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 5 14:29:29.000466 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 5 14:29:29.000515 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 5 14:29:29.000563 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Sep 5 14:29:29.000611 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 5 14:29:29.000659 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Sep 5 14:29:29.000712 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Sep 5 14:29:29.000765 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Sep 5 14:29:29.000818 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Sep 5 14:29:29.000915 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 5 14:29:29.000964 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Sep 5 14:29:29.001014 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Sep 5 14:29:29.001062 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Sep 5 14:29:29.001111 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 5 14:29:29.001161 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Sep 5 14:29:29.001218 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 5 14:29:29.001270 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 5 14:29:29.001349 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Sep 5 14:29:29.001415 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Sep 5 14:29:29.001467 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Sep 5 14:29:29.001572 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 5 14:29:29.001621 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Sep 5 14:29:29.001673 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 5 14:29:29.001721 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Sep 5 14:29:29.001770 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Sep 5 14:29:29.001825 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Sep 5 14:29:29.001875 kernel: pci 0000:06:00.0: enabling Extended Tags
Sep 5 14:29:29.001925 kernel: pci 0000:06:00.0: supports D1 D2
Sep 5 14:29:29.001974 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 5 14:29:29.002026 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Sep 5 14:29:29.002074 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Sep 5 14:29:29.002122 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Sep 5 14:29:29.002175 kernel: pci_bus 0000:07: extended config space not accessible
Sep 5 14:29:29.002232 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Sep 5 14:29:29.002286 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Sep 5 14:29:29.002388 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Sep 5 14:29:29.002442 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Sep 5 14:29:29.002493 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 14:29:29.002545 kernel: pci 0000:07:00.0: supports D1 D2
Sep 5 14:29:29.002596 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 5 14:29:29.002646 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Sep 5 14:29:29.002696 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Sep 5 14:29:29.002745 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Sep 5 14:29:29.002753 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Sep 5 14:29:29.002761 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Sep 5 14:29:29.002767 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Sep 5 14:29:29.002772 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Sep 5 14:29:29.002778 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Sep 5 14:29:29.002784 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Sep 5 14:29:29.002790 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Sep 5 14:29:29.002796 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Sep 5 14:29:29.002802 kernel: iommu: Default domain type: Translated
Sep 5 14:29:29.002808 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 14:29:29.002815 kernel: PCI: Using ACPI for IRQ routing
Sep 5 14:29:29.002821 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 14:29:29.002826 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Sep 5 14:29:29.002832 kernel: e820: reserve RAM buffer [mem 0x81b28000-0x83ffffff]
Sep 5 14:29:29.002838 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Sep 5 14:29:29.002843 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Sep 5 14:29:29.002849 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Sep 5 14:29:29.002855 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Sep 5 14:29:29.002906 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Sep 5 14:29:29.002959 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Sep 5 14:29:29.003011 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 14:29:29.003019 kernel: vgaarb: loaded
Sep 5 14:29:29.003026 kernel: clocksource: Switched to clocksource tsc-early
Sep 5 14:29:29.003031 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 14:29:29.003037 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 14:29:29.003043 kernel: pnp: PnP ACPI init
Sep 5 14:29:29.003094 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Sep 5 14:29:29.003146 kernel: pnp 00:02: [dma 0 disabled]
Sep 5 14:29:29.003194 kernel: pnp 00:03: [dma 0 disabled]
Sep 5 14:29:29.003242 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Sep 5 14:29:29.003290 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Sep 5 14:29:29.003338 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Sep 5 14:29:29.003386 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Sep 5 14:29:29.003432 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Sep 5 14:29:29.003477 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Sep 5 14:29:29.003521 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Sep 5 14:29:29.003564 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Sep 5 14:29:29.003607 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Sep 5 14:29:29.003652 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Sep 5 14:29:29.003696 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Sep 5 14:29:29.003746 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Sep 5 14:29:29.003791 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Sep 5 14:29:29.003838 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Sep 5 14:29:29.003882 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Sep 5 14:29:29.003926 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Sep 5 14:29:29.003970 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Sep 5 14:29:29.004014 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Sep 5 14:29:29.004065 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Sep 5 14:29:29.004074 kernel: pnp: PnP ACPI: found 10 devices
Sep 5 14:29:29.004080 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 14:29:29.004086 kernel: NET: Registered PF_INET protocol family
Sep 5 14:29:29.004092 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 14:29:29.004098 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 5 14:29:29.004104 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 14:29:29.004109 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 14:29:29.004117 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 5 14:29:29.004123 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Sep 5 14:29:29.004129 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 5 14:29:29.004134 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 5 14:29:29.004140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 14:29:29.004146 kernel: NET: Registered PF_XDP protocol family
Sep 5 14:29:29.004194 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Sep 5 14:29:29.004245 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Sep 5 14:29:29.004297 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Sep 5 14:29:29.004351 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 5 14:29:29.004402 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 5 14:29:29.004452 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 5 14:29:29.004502 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 5 14:29:29.004552 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 5 14:29:29.004600 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Sep 5 14:29:29.004649 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 5 14:29:29.004698 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Sep 5 14:29:29.004749 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Sep 5 14:29:29.004798 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 5 14:29:29.004849 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Sep 5 14:29:29.004898 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Sep 5 14:29:29.004948 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 5 14:29:29.004997 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Sep 5 14:29:29.005045 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Sep 5 14:29:29.005095 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Sep 5 14:29:29.005144 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Sep 5 14:29:29.005194 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Sep 5 14:29:29.005242 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Sep 5 14:29:29.005294 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Sep 5 14:29:29.005344 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Sep 5 14:29:29.005392 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Sep 5 14:29:29.005436 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 14:29:29.005480 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 14:29:29.005523 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 14:29:29.005566 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Sep 5 14:29:29.005610 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Sep 5 14:29:29.005661 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Sep 5 14:29:29.005707 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Sep 5 14:29:29.005757 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Sep 5 14:29:29.005803 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Sep 5 14:29:29.005850 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 5 14:29:29.005895 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Sep 5 14:29:29.005942 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Sep 5 14:29:29.005988 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Sep 5 14:29:29.006036 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Sep 5 14:29:29.006083 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Sep 5 14:29:29.006092 kernel: PCI: CLS 64 bytes, default 64
Sep 5 14:29:29.006098 kernel: DMAR: No ATSR found
Sep 5 14:29:29.006104 kernel: DMAR: No SATC found
Sep 5 14:29:29.006110 kernel: DMAR: dmar0: Using Queued invalidation
Sep 5 14:29:29.006157 kernel: pci 0000:00:00.0: Adding to iommu group 0
Sep 5 14:29:29.006207 kernel: pci 0000:00:01.0: Adding to iommu group 1
Sep 5 14:29:29.006259 kernel: pci 0000:00:08.0: Adding to iommu group 2
Sep 5 14:29:29.006311 kernel: pci 0000:00:12.0: Adding to iommu group 3
Sep 5 14:29:29.006360 kernel: pci 0000:00:14.0: Adding to iommu group 4
Sep 5 14:29:29.006408 kernel: pci 0000:00:14.2: Adding to iommu group 4
Sep 5 14:29:29.006458 kernel: pci 0000:00:15.0: Adding to iommu group 5
Sep 5 14:29:29.006506 kernel: pci 0000:00:15.1: Adding to iommu group 5
Sep 5 14:29:29.006555 kernel: pci 0000:00:16.0: Adding to iommu group 6
Sep 5 14:29:29.006604 kernel: pci 0000:00:16.1: Adding to iommu group 6
Sep 5 14:29:29.006656 kernel: pci 0000:00:16.4: Adding to iommu group 6
Sep 5 14:29:29.006703 kernel: pci 0000:00:17.0: Adding to iommu group 7
Sep 5 14:29:29.006753 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Sep 5 14:29:29.006801 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Sep 5 14:29:29.006849 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Sep 5 14:29:29.006897 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Sep 5 14:29:29.006946 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Sep 5 14:29:29.006993 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Sep 5 14:29:29.007045 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Sep 5 14:29:29.007094 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Sep 5 14:29:29.007143 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Sep 5 14:29:29.007193 kernel: pci 0000:01:00.0: Adding to iommu group 1
Sep 5 14:29:29.007242 kernel: pci 0000:01:00.1: Adding to iommu group 1
Sep 5 14:29:29.007296 kernel: pci 0000:03:00.0: Adding to iommu group 15
Sep 5 14:29:29.007392 kernel: pci 0000:04:00.0: Adding to iommu group 16
Sep 5 14:29:29.007443 kernel: pci 0000:06:00.0: Adding to iommu group 17
Sep 5 14:29:29.007498 kernel: pci 0000:07:00.0: Adding to iommu group 17
Sep 5 14:29:29.007506 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Sep 5 14:29:29.007512 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 5 14:29:29.007518 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Sep 5 14:29:29.007524 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Sep 5 14:29:29.007530 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Sep 5 14:29:29.007536 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Sep 5 14:29:29.007542 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Sep 5 14:29:29.007595 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Sep 5 14:29:29.007606 kernel: Initialise system trusted keyrings
Sep 5 14:29:29.007612 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Sep 5 14:29:29.007617 kernel: Key type asymmetric registered
Sep 5 14:29:29.007623 kernel: Asymmetric key parser 'x509' registered
Sep 5 14:29:29.007629 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 5 14:29:29.007635 kernel: io scheduler mq-deadline registered
Sep 5 14:29:29.007640 kernel: io scheduler kyber registered
Sep 5 14:29:29.007646 kernel: io scheduler bfq registered
Sep 5 14:29:29.007696 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Sep 5 14:29:29.007745 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Sep 5 14:29:29.007795 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Sep 5 14:29:29.007844 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Sep 5 14:29:29.007893 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Sep 5 14:29:29.007941 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Sep 5 14:29:29.007995 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Sep 5 14:29:29.008006 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Sep 5 14:29:29.008012 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Sep 5 14:29:29.008018 kernel: pstore: Using crash dump compression: deflate
Sep 5 14:29:29.008023 kernel: pstore: Registered erst as persistent store backend
Sep 5 14:29:29.008029 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 14:29:29.008035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 14:29:29.008041 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 14:29:29.008047 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 5 14:29:29.008053 kernel: hpet_acpi_add: no address or irqs in _CRS
Sep 5 14:29:29.008103 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Sep 5 14:29:29.008112 kernel: i8042: PNP: No PS/2 controller found.
Sep 5 14:29:29.008157 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Sep 5 14:29:29.008202 kernel: rtc_cmos rtc_cmos: registered as rtc0
Sep 5 14:29:29.008247 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-09-05T14:29:27 UTC (1725546567)
Sep 5 14:29:29.008293 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Sep 5 14:29:29.008302 kernel: intel_pstate: Intel P-state driver initializing
Sep 5 14:29:29.008307 kernel: intel_pstate: Disabling energy efficiency optimization
Sep 5 14:29:29.008315 kernel: intel_pstate: HWP enabled
Sep 5 14:29:29.008321 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Sep 5 14:29:29.008327 kernel: vesafb: scrolling: redraw
Sep 5 14:29:29.008332 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Sep 5 14:29:29.008338 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000eef478d1, using 768k, total 768k
Sep 5 14:29:29.008344 kernel: Console: switching to colour frame buffer device 128x48
Sep 5 14:29:29.008350 kernel: fb0: VESA VGA frame buffer device
Sep 5 14:29:29.008356 kernel: NET: Registered PF_INET6 protocol family
Sep 5 14:29:29.008361 kernel: Segment Routing with IPv6
Sep 5 14:29:29.008368 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 14:29:29.008374 kernel: NET: Registered PF_PACKET protocol family
Sep 5 14:29:29.008379 kernel: Key type dns_resolver registered
Sep 5 14:29:29.008385 kernel: microcode: Microcode Update Driver: v2.2.
Sep 5 14:29:29.008391 kernel: IPI shorthand broadcast: enabled
Sep 5 14:29:29.008397 kernel: sched_clock: Marking stable (2477000774, 1380768469)->(4395427857, -537658614)
Sep 5 14:29:29.008403 kernel: registered taskstats version 1
Sep 5 14:29:29.008408 kernel: Loading compiled-in X.509 certificates
Sep 5 14:29:29.008414 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep 5 14:29:29.008421 kernel: Key type .fscrypt registered
Sep 5 14:29:29.008427 kernel: Key type fscrypt-provisioning registered
Sep 5 14:29:29.008432 kernel: ima: Allocated hash algorithm: sha1
Sep 5 14:29:29.008438 kernel: ima: No architecture policies found
Sep 5 14:29:29.008444 kernel: clk: Disabling unused clocks
Sep 5 14:29:29.008450 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep 5 14:29:29.008456 kernel: Write protecting the kernel read-only data: 36864k
Sep 5 14:29:29.008461 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep 5 14:29:29.008468 kernel: Run /init as init process
Sep 5 14:29:29.008474 kernel: with arguments:
Sep 5 14:29:29.008480 kernel: /init
Sep 5 14:29:29.008486 kernel: with environment:
Sep 5 14:29:29.008491 kernel: HOME=/
Sep 5 14:29:29.008497 kernel: TERM=linux
Sep 5 14:29:29.008503 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 14:29:29.008510 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 14:29:29.008517 systemd[1]: Detected architecture x86-64.
Sep 5 14:29:29.008524 systemd[1]: Running in initrd.
Sep 5 14:29:29.008530 systemd[1]: No hostname configured, using default hostname.
Sep 5 14:29:29.008536 systemd[1]: Hostname set to .
Sep 5 14:29:29.008542 systemd[1]: Initializing machine ID from random generator.
Sep 5 14:29:29.008548 systemd[1]: Queued start job for default target initrd.target.
Sep 5 14:29:29.008554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 14:29:29.008560 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 14:29:29.008568 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 14:29:29.008574 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 14:29:29.008580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-ROOT.device - /dev/disk/by-partlabel/ROOT...
Sep 5 14:29:29.008586 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 14:29:29.008593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 14:29:29.008599 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 14:29:29.008605 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Sep 5 14:29:29.008612 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Sep 5 14:29:29.008618 kernel: clocksource: Switched to clocksource tsc
Sep 5 14:29:29.008624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 14:29:29.008630 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 14:29:29.008636 systemd[1]: Reached target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup.
Sep 5 14:29:29.008642 systemd[1]: Reached target paths.target - Path Units.
Sep 5 14:29:29.008648 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 14:29:29.008654 systemd[1]: Reached target swap.target - Swaps.
Sep 5 14:29:29.008660 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 14:29:29.008668 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 14:29:29.008674 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 14:29:29.008680 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 14:29:29.008686 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 14:29:29.008692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 14:29:29.008698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 14:29:29.008704 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 14:29:29.008710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 14:29:29.008717 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 14:29:29.008723 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 14:29:29.008729 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 14:29:29.008746 systemd-journald[261]: Collecting audit messages is disabled.
Sep 5 14:29:29.008761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 14:29:29.008768 systemd-journald[261]: Journal started
Sep 5 14:29:29.008781 systemd-journald[261]: Runtime Journal (/run/log/journal/ed8af417aa40402cbd4319852170d6c1) is 8.0M, max 639.9M, 631.9M free.
Sep 5 14:29:29.015367 systemd-modules-load[262]: Inserted module 'overlay' Sep 5 14:29:29.044297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 14:29:29.087324 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 14:29:29.087343 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 14:29:29.106336 kernel: Bridge firewalling registered Sep 5 14:29:29.106352 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 14:29:29.123763 systemd-modules-load[262]: Inserted module 'br_netfilter' Sep 5 14:29:29.142608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 14:29:29.172542 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 14:29:29.180570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 14:29:29.198738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 14:29:29.235597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 14:29:29.236020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 14:29:29.236424 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 14:29:29.236845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 14:29:29.241295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 14:29:29.242027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 14:29:29.242143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 14:29:29.243070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 14:29:29.247651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 14:29:29.259641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 14:29:29.300697 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 14:29:29.328644 dracut-cmdline[296]: dracut-dracut-053 Sep 5 14:29:29.336519 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 14:29:29.533324 kernel: SCSI subsystem initialized Sep 5 14:29:29.556317 kernel: Loading iSCSI transport class v2.0-870. Sep 5 14:29:29.579290 kernel: iscsi: registered transport (tcp) Sep 5 14:29:29.611687 kernel: iscsi: registered transport (qla4xxx) Sep 5 14:29:29.611703 kernel: QLogic iSCSI HBA Driver Sep 5 14:29:29.644877 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 14:29:29.664556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 14:29:29.756918 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
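The dracut-cmdline lines above echo the kernel command line the initrd acts on (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., and so on). A small sketch of how such a space-separated key=value command line can be split into a dictionary, for example when reading /proc/cmdline on a running system; it is a simplification that ignores quoting, and repeated keys (such as the two console= entries) keep only the last value:

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to None."""
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    # Fragment of the command line shown in the dracut-cmdline log entry above;
    # on a live system one would read open("/proc/cmdline").read() instead.
    sample = "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT mount.usr=/dev/mapper/usr flatcar.autologin"
    params = parse_cmdline(sample)
    print(params["root"])               # LABEL=ROOT
    print(params["flatcar.autologin"])  # None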
Sep 5 14:29:29.756953 kernel: device-mapper: uevent: version 1.0.3 Sep 5 14:29:29.768316 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 14:29:29.836363 kernel: raid6: avx2x4 gen() 52429 MB/s Sep 5 14:29:29.868370 kernel: raid6: avx2x2 gen() 53380 MB/s Sep 5 14:29:29.905149 kernel: raid6: avx2x1 gen() 45235 MB/s Sep 5 14:29:29.905165 kernel: raid6: using algorithm avx2x2 gen() 53380 MB/s Sep 5 14:29:29.953065 kernel: raid6: .... xor() 31440 MB/s, rmw enabled Sep 5 14:29:29.953082 kernel: raid6: using avx2x2 recovery algorithm Sep 5 14:29:29.994314 kernel: xor: automatically using best checksumming function avx Sep 5 14:29:30.112321 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 14:29:30.117910 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 14:29:30.148625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 14:29:30.155455 systemd-udevd[482]: Using default interface naming scheme 'v255'. Sep 5 14:29:30.157962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 14:29:30.185445 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 14:29:30.237465 dracut-pre-trigger[494]: rd.md=0: removing MD RAID activation Sep 5 14:29:30.255040 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 14:29:30.275626 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 14:29:30.335829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 14:29:30.378463 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 5 14:29:30.378482 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 5 14:29:30.357628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 14:29:30.409338 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 14:29:30.409354 kernel: ACPI: bus type USB registered Sep 5 14:29:30.357661 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 14:29:30.437069 kernel: usbcore: registered new interface driver usbfs Sep 5 14:29:30.437084 kernel: usbcore: registered new interface driver hub Sep 5 14:29:30.416133 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 14:29:30.476306 kernel: usbcore: registered new device driver usb Sep 5 14:29:30.476332 kernel: PTP clock support registered Sep 5 14:29:30.476350 kernel: libata version 3.00 loaded. Sep 5 14:29:30.466556 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 14:29:30.507826 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 14:29:30.507845 kernel: AES CTR mode by8 optimization enabled Sep 5 14:29:30.471751 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
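The raid6 lines above show the kernel benchmarking several parity-generation implementations and keeping the fastest one, avx2x2 at 53380 MB/s on this machine (note that the wider avx2x4 variant actually measured slightly slower here, so wider is not automatically better). The selection itself is just an argmax over the measured throughputs, as in this small sketch using the numbers from the log:

    # Throughputs (MB/s) measured by the raid6 benchmark in the log above.
    gen_results = {
        "avx2x4": 52429,
        "avx2x2": 53380,
        "avx2x1": 45235,
    }

    # Keep the implementation with the highest measured throughput.
    best = max(gen_results, key=gen_results.get)
    print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
    # raid6: using algorithm avx2x2 gen() 53380 MB/s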
Sep 5 14:29:31.300415 kernel: ahci 0000:00:17.0: version 3.0 Sep 5 14:29:31.300515 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 14:29:31.300582 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 5 14:29:31.300644 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 5 14:29:31.300704 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 5 14:29:31.300767 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 5 14:29:31.300827 kernel: scsi host0: ahci Sep 5 14:29:31.300891 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 14:29:31.300951 kernel: scsi host1: ahci Sep 5 14:29:31.301020 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 5 14:29:31.301080 kernel: scsi host2: ahci Sep 5 14:29:31.301138 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 5 14:29:31.301200 kernel: scsi host3: ahci Sep 5 14:29:31.301259 kernel: hub 1-0:1.0: USB hub found Sep 5 14:29:31.301334 kernel: scsi host4: ahci Sep 5 14:29:31.301395 kernel: hub 1-0:1.0: 16 ports detected Sep 5 14:29:31.301460 kernel: scsi host5: ahci Sep 5 14:29:31.301522 kernel: hub 2-0:1.0: USB hub found Sep 5 14:29:31.301592 kernel: scsi host6: ahci Sep 5 14:29:31.301651 kernel: hub 2-0:1.0: 10 ports detected Sep 5 14:29:31.301716 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Sep 5 14:29:31.301724 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 5 14:29:31.301731 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Sep 5 14:29:31.301738 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 5 14:29:31.301745 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Sep 5 14:29:31.301752 kernel: pps pps0: new PPS source ptp0 Sep 5 14:29:31.301813 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Sep 5 14:29:31.301822 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 5 14:29:31.301890 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Sep 5 14:29:31.301898 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 14:29:31.301959 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Sep 5 14:29:31.301968 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:24 Sep 5 14:29:31.302027 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Sep 5 14:29:31.302037 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 5 14:29:31.302098 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 5 14:29:31.302193 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 5 14:29:31.302257 kernel: hub 1-14:1.0: USB hub found Sep 5 14:29:31.302333 kernel: pps pps1: new PPS source ptp1 Sep 5 14:29:31.302394 kernel: hub 1-14:1.0: 4 ports detected Sep 5 14:29:31.302462 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 5 14:29:31.302529 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 14:29:31.302590 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:25 Sep 5 14:29:31.302651 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 5 14:29:31.302711 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302719 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 5 14:29:31.302780 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302788 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 14:29:31.302795 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302803 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302810 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 14:29:31.302817 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 14:29:31.302824 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:30.507438 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 14:29:31.371398 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 14:29:31.371410 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Sep 5 14:29:31.371491 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 14:29:31.371499 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 14:29:31.371565 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 14:29:31.371573 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 5 14:29:30.859413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 14:29:31.457799 kernel: ata1.00: Features: NCQ-prio Sep 5 14:29:31.457812 kernel: ata2.00: Features: NCQ-prio Sep 5 14:29:31.457819 kernel: ata1.00: configured for UDMA/133 Sep 5 14:29:31.472290 kernel: ata2.00: configured for UDMA/133 Sep 5 14:29:31.472306 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 14:29:31.485339 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 14:29:31.485372 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 14:29:31.557109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 14:29:31.589467 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 5 14:29:31.589599 kernel: usbcore: registered new interface driver usbhid Sep 5 14:29:31.589614 kernel: usbhid: USB HID core driver Sep 5 14:29:31.603333 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 5 14:29:31.603351 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 5 14:29:31.629344 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 14:29:31.660336 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 5 14:29:31.678615 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 5 14:29:31.678702 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 5 14:29:31.683522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 14:29:31.758019 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 5 14:29:31.766715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
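The two Micron SSDs identified above report 937703088 LBA sectors each; with 512-byte logical blocks that is exactly where the "480 GB/447 GiB" figure in the sd messages further down comes from (GB counts decimal gigabytes, GiB binary gibibytes). The arithmetic, for reference:

    # Capacity reported by ata1.00/ata2.00 above: 937703088 sectors of 512 bytes.
    sectors = 937_703_088
    logical_block = 512

    size_bytes = sectors * logical_block
    print(size_bytes)                    # 480103981056
    print(round(size_bytes / 1e9, 1))    # 480.1  -> "480 GB" (decimal)
    print(round(size_bytes / 2**30))     # 447    -> "447 GiB" (binary)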
Sep 5 14:29:31.802116 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 14:29:31.802141 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 14:29:31.802151 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 14:29:31.834277 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 14:29:31.834514 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 5 14:29:31.834684 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 5 14:29:31.834856 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 5 14:29:31.840343 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 5 14:29:31.849593 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 5 14:29:31.849679 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 5 14:29:31.849744 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 14:29:31.867415 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 14:29:31.867513 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 5 14:29:31.887815 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 5 14:29:31.888340 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 14:29:32.013746 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 14:29:32.034348 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 5 14:29:32.034430 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 14:29:32.064907 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 5 14:29:32.103187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 14:29:32.233366 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (528) Sep 5 14:29:32.233387 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 14:29:32.233518 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (577) Sep 5 14:29:32.233532 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Sep 5 14:29:32.233636 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 14:29:32.213228 systemd[1]: Found device dev-disk-by\x2dpartlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 14:29:32.256999 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 14:29:32.268365 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 14:29:32.282840 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 14:29:32.316384 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 14:29:32.342424 systemd[1]: Starting decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition... Sep 5 14:29:32.370734 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 14:29:32.381597 systemd[1]: Finished decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 14:29:32.417517 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 14:29:32.417560 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 14:29:32.435838 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
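The "Found device ..." entries above correspond to udev-created symlinks under /dev/disk/ (by-label, by-partlabel, by-partuuid) that point at the underlying partitions. On a running system those links can be inspected directly; a read-only sketch, assuming the usual udev layout:

    import os

    def list_links(directory="/dev/disk/by-label"):
        """Map each udev symlink in the directory to the device node it resolves to."""
        mapping = {}
        if not os.path.isdir(directory):
            return mapping
        for name in sorted(os.listdir(directory)):
            link = os.path.join(directory, name)
            mapping[name] = os.path.realpath(link)   # e.g. ROOT -> /dev/sdb9 on this machine
        return mapping

    for label, dev in list_links().items():
        print(f"{label} -> {dev}")

The same function works for /dev/disk/by-partlabel and /dev/disk/by-partuuid, matching the other device units listed above.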
Sep 5 14:29:32.476279 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 14:29:32.536348 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 14:29:32.536452 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 5 14:29:32.505372 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 14:29:32.521389 systemd[1]: Reached target basic.target - Basic System. Sep 5 14:29:32.527440 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 14:29:32.558409 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 14:29:32.606422 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 5 14:29:32.606440 sh[707]: Success Sep 5 14:29:32.596866 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 14:29:32.618058 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 14:29:32.621435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 14:29:32.652561 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 14:29:32.682501 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 14:29:32.732290 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 14:29:32.733500 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 14:29:32.756545 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 5 14:29:32.734534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 14:29:32.788364 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 5 14:29:32.776540 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 14:29:32.814508 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 14:29:32.823944 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 14:29:32.953528 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 5 14:29:32.953543 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 14:29:32.953551 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 14:29:32.953559 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 14:29:32.953567 kernel: BTRFS info (device dm-0): using free space tree Sep 5 14:29:32.953574 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 14:29:32.953859 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 14:29:32.995988 systemd-fsck[758]: ROOT: clean, 85/553520 files, 82898/553472 blocks Sep 5 14:29:33.006859 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 14:29:33.030479 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 14:29:33.136356 kernel: EXT4-fs (sdb9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 5 14:29:33.136875 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 14:29:33.152600 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 14:29:33.170361 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 14:29:33.190076 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 5 14:29:33.288739 kernel: BTRFS info (device sdb6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 5 14:29:33.288757 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 5 14:29:33.288766 kernel: BTRFS info (device sdb6): using free space tree Sep 5 14:29:33.288773 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 5 14:29:33.288780 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 5 14:29:33.278460 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 14:29:33.302018 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 14:29:33.332530 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 14:29:33.396502 initrd-setup-root[790]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 14:29:33.407416 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory Sep 5 14:29:33.417405 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 14:29:33.427386 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 14:29:33.496309 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 14:29:33.521534 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 14:29:33.533731 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 14:29:33.556848 systemd[1]: Reached target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 14:29:33.592683 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 14:29:33.592683 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 14:29:33.629456 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 14:29:33.596650 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 14:29:33.656517 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 14:29:33.656569 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 14:29:33.674688 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 14:29:33.685518 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 14:29:33.713576 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 14:29:33.730605 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 14:29:33.798352 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 14:29:33.825703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 14:29:33.850270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 14:29:33.850717 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 14:29:33.877827 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 14:29:33.878187 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 14:29:33.906049 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 14:29:33.927869 systemd[1]: Stopped target basic.target - Basic System. 
Sep 5 14:29:33.946866 systemd[1]: Stopped target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 14:29:33.965853 systemd[1]: Stopped target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 5 14:29:33.989861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 14:29:34.013982 systemd[1]: Stopped target paths.target - Path Units. Sep 5 14:29:34.031876 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 14:29:34.050976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 14:29:34.071866 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 14:29:34.093978 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 14:29:34.113993 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 14:29:34.132979 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 14:29:34.152861 systemd[1]: Stopped target swap.target - Swaps. Sep 5 14:29:34.171938 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 14:29:34.172209 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 14:29:34.189004 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 14:29:34.189268 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 14:29:34.208850 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 14:29:34.209192 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 14:29:34.234962 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 14:29:34.254749 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 14:29:34.255193 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 14:29:34.273874 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 14:29:34.294750 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 14:29:34.295155 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 14:29:34.315763 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 14:29:34.316120 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 14:29:34.348805 systemd[1]: decrypt-root.service: Deactivated successfully. Sep 5 14:29:34.349206 systemd[1]: Stopped decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 14:29:34.371938 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 14:29:34.372275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 14:29:34.390916 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 14:29:34.391273 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 14:29:34.414933 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 14:29:34.415270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 14:29:34.432916 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 14:29:34.433252 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 14:29:34.452909 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 14:29:34.453249 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 5 14:29:34.470926 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 14:29:34.471265 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 14:29:34.495919 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 14:29:34.496262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 14:29:34.516933 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 14:29:34.517271 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 14:29:34.548339 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 14:29:34.578824 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 14:29:34.578914 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 14:29:34.600463 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 14:29:34.600561 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 14:29:34.619498 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 14:29:34.619645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 14:29:34.638745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 14:29:34.638859 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 14:29:34.658587 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 14:29:34.658738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 14:29:34.688844 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 14:29:34.689015 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 14:29:34.720836 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 14:29:34.721005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 14:29:34.763365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 14:29:34.782447 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 14:29:35.031491 systemd-journald[261]: Received SIGTERM from PID 1 (systemd). Sep 5 14:29:34.782537 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 14:29:34.801583 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 14:29:34.801653 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 14:29:34.823640 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 14:29:34.823758 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 14:29:34.842707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 14:29:34.842865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 14:29:34.867749 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 14:29:34.868021 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 14:29:34.889661 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 14:29:34.917683 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 14:29:34.973138 systemd[1]: Switching root. 
Sep 5 14:29:35.031755 systemd-journald[261]: Journal stopped Sep 5 14:29:28.994855 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27 Sep 5 14:29:28.994868 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 5 14:29:28.994875 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 14:29:28.994880 kernel: BIOS-provided physical RAM map: Sep 5 14:29:28.994884 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Sep 5 14:29:28.994888 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Sep 5 14:29:28.994893 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Sep 5 14:29:28.994897 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Sep 5 14:29:28.994901 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Sep 5 14:29:28.994905 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b27fff] usable Sep 5 14:29:28.994909 kernel: BIOS-e820: [mem 0x0000000081b28000-0x0000000081b28fff] ACPI NVS Sep 5 14:29:28.994914 kernel: BIOS-e820: [mem 0x0000000081b29000-0x0000000081b29fff] reserved Sep 5 14:29:28.994919 kernel: BIOS-e820: [mem 0x0000000081b2a000-0x000000008afccfff] usable Sep 5 14:29:28.994923 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Sep 5 14:29:28.994928 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Sep 5 14:29:28.994933 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Sep 5 14:29:28.994938 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Sep 5 14:29:28.994943 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Sep 5 14:29:28.994948 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Sep 5 14:29:28.994952 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 5 14:29:28.994957 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Sep 5 14:29:28.994962 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Sep 5 14:29:28.994966 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 5 14:29:28.994971 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Sep 5 14:29:28.994976 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Sep 5 14:29:28.994980 kernel: NX (Execute Disable) protection: active Sep 5 14:29:28.994985 kernel: APIC: Static calls initialized Sep 5 14:29:28.994990 kernel: SMBIOS 3.2.1 present. 
Sep 5 14:29:28.994996 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Sep 5 14:29:28.995001 kernel: tsc: Detected 3400.000 MHz processor Sep 5 14:29:28.995005 kernel: tsc: Detected 3399.906 MHz TSC Sep 5 14:29:28.995010 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 14:29:28.995015 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 14:29:28.995020 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Sep 5 14:29:28.995025 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Sep 5 14:29:28.995030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 14:29:28.995035 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Sep 5 14:29:28.995040 kernel: Using GB pages for direct mapping Sep 5 14:29:28.995045 kernel: ACPI: Early table checksum verification disabled Sep 5 14:29:28.995050 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Sep 5 14:29:28.995057 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Sep 5 14:29:28.995062 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Sep 5 14:29:28.995067 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 5 14:29:28.995072 kernel: ACPI: FACS 0x000000008C66CF80 000040 Sep 5 14:29:28.995078 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Sep 5 14:29:28.995083 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Sep 5 14:29:28.995088 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 5 14:29:28.995093 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 5 14:29:28.995098 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Sep 5 14:29:28.995103 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 5 14:29:28.995108 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 5 14:29:28.995114 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 5 14:29:28.995120 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 14:29:28.995125 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 5 14:29:28.995130 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 5 14:29:28.995135 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 14:29:28.995140 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 14:29:28.995145 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 5 14:29:28.995150 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep 5 14:29:28.995155 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 14:29:28.995161 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 5 14:29:28.995166 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 5 14:29:28.995171 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Sep 5 14:29:28.995176 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 5 14:29:28.995181 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 5 14:29:28.995186 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 5 14:29:28.995191 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 5 14:29:28.995196 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 5 14:29:28.995202 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 5 14:29:28.995207 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 5 14:29:28.995212 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Sep 5 14:29:28.995217 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 5 14:29:28.995222 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Sep 5 14:29:28.995227 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Sep 5 14:29:28.995232 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Sep 5 14:29:28.995237 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Sep 5 14:29:28.995242 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Sep 5 14:29:28.995248 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Sep 5 14:29:28.995253 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Sep 5 14:29:28.995258 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Sep 5 14:29:28.995263 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Sep 5 14:29:28.995268 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Sep 5 14:29:28.995273 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Sep 5 14:29:28.995278 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Sep 5 14:29:28.995286 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Sep 5 14:29:28.995291 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Sep 5 14:29:28.995296 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Sep 5 14:29:28.995302 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Sep 5 14:29:28.995307 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Sep 5 14:29:28.995312 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Sep 5 14:29:28.995317 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Sep 5 14:29:28.995322 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Sep 5 14:29:28.995327 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Sep 5 14:29:28.995332 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Sep 5 14:29:28.995337 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Sep 5 14:29:28.995342 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Sep 5 14:29:28.995348 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Sep 5 14:29:28.995353 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Sep 5 14:29:28.995358 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Sep 5 14:29:28.995363 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Sep 5 14:29:28.995368 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Sep 5 14:29:28.995373 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Sep 5 14:29:28.995378 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Sep 5 14:29:28.995383 kernel: No NUMA configuration found Sep 5 14:29:28.995388 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 5 14:29:28.995394 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 5 14:29:28.995399 kernel: Zone ranges: Sep 5 14:29:28.995404 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 14:29:28.995409 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 5 14:29:28.995414 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Sep 5 14:29:28.995419 kernel: Movable zone start for each node Sep 5 14:29:28.995425 kernel: Early memory node ranges Sep 5 14:29:28.995430 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 5 14:29:28.995435 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 5 14:29:28.995440 kernel: node 0: [mem 0x0000000040400000-0x0000000081b27fff] Sep 5 14:29:28.995446 kernel: node 0: [mem 0x0000000081b2a000-0x000000008afccfff] Sep 5 14:29:28.995451 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Sep 5 14:29:28.995456 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Sep 5 14:29:28.995465 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Sep 5 14:29:28.995470 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Sep 5 14:29:28.995476 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 14:29:28.995481 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Sep 5 14:29:28.995488 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 5 14:29:28.995493 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Sep 5 14:29:28.995498 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Sep 5 14:29:28.995504 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Sep 5 14:29:28.995509 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Sep 5 14:29:28.995515 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Sep 5 14:29:28.995520 kernel: ACPI: PM-Timer IO Port: 0x1808 Sep 5 14:29:28.995526 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 5 14:29:28.995531 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 5 14:29:28.995537 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 5 14:29:28.995543 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 5 14:29:28.995548 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 5 14:29:28.995553 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 5 14:29:28.995559 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 5 14:29:28.995564 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 5 14:29:28.995569 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 5 14:29:28.995575 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 5 14:29:28.995580 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 5 14:29:28.995586 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 5 14:29:28.995592 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 5 14:29:28.995597 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 5 14:29:28.995602 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 5 14:29:28.995608 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 5 14:29:28.995613 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Sep 5 14:29:28.995618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 14:29:28.995624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 14:29:28.995629 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 5 14:29:28.995635 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 14:29:28.995641 kernel: TSC deadline timer available Sep 5 14:29:28.995646 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Sep 5 14:29:28.995652 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Sep 5 14:29:28.995657 
kernel: Booting paravirtualized kernel on bare hardware Sep 5 14:29:28.995663 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 14:29:28.995668 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 5 14:29:28.995674 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144 Sep 5 14:29:28.995679 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152 Sep 5 14:29:28.995684 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 5 14:29:28.995691 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 14:29:28.995697 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 14:29:28.995702 kernel: random: crng init done Sep 5 14:29:28.995707 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Sep 5 14:29:28.995713 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Sep 5 14:29:28.995718 kernel: Fallback order for Node 0: 0 Sep 5 14:29:28.995724 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Sep 5 14:29:28.995729 kernel: Policy zone: Normal Sep 5 14:29:28.995735 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 14:29:28.995741 kernel: software IO TLB: area num 16. Sep 5 14:29:28.995746 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 732412K reserved, 0K cma-reserved) Sep 5 14:29:28.995752 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 5 14:29:28.995757 kernel: ftrace: allocating 37748 entries in 148 pages Sep 5 14:29:28.995763 kernel: ftrace: allocated 148 pages with 3 groups Sep 5 14:29:28.995768 kernel: Dynamic Preempt: voluntary Sep 5 14:29:28.995774 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 14:29:28.995779 kernel: rcu: RCU event tracing is enabled. Sep 5 14:29:28.995786 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 5 14:29:28.995792 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 14:29:28.995797 kernel: Rude variant of Tasks RCU enabled. Sep 5 14:29:28.995802 kernel: Tracing variant of Tasks RCU enabled. Sep 5 14:29:28.995808 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 14:29:28.995813 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 5 14:29:28.995819 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Sep 5 14:29:28.995824 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 14:29:28.995829 kernel: Console: colour dummy device 80x25 Sep 5 14:29:28.995836 kernel: printk: console [tty0] enabled Sep 5 14:29:28.995841 kernel: printk: console [ttyS1] enabled Sep 5 14:29:28.995846 kernel: ACPI: Core revision 20230628 Sep 5 14:29:28.995852 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
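The dentry and inode hash-table lines above are internally consistent: "order: 13" means 2^13 contiguous 4 KiB pages, which is the 33554432 bytes reported, and dividing by the 4194304 entries gives 8 bytes per bucket (one pointer on x86-64). The same check works for the inode-cache line. A worked version of that arithmetic, using only values from the log:

    PAGE_SIZE = 4096

    def bucket_size(entries, order, reported_bytes):
        table_bytes = (1 << order) * PAGE_SIZE      # pages in the allocation * page size
        assert table_bytes == reported_bytes        # matches the bytes printed by the kernel
        return table_bytes // entries               # bytes per hash bucket

    # Values taken from the two hash-table lines above.
    print(bucket_size(4_194_304, 13, 33_554_432))   # 8  (dentry cache)
    print(bucket_size(2_097_152, 12, 16_777_216))   # 8  (inode cache)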
Sep 5 14:29:28.995857 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 14:29:28.995863 kernel: DMAR: Host address width 39 Sep 5 14:29:28.995868 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Sep 5 14:29:28.995873 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Sep 5 14:29:28.995879 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Sep 5 14:29:28.995885 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Sep 5 14:29:28.995890 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Sep 5 14:29:28.995896 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Sep 5 14:29:28.995901 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Sep 5 14:29:28.995907 kernel: x2apic enabled Sep 5 14:29:28.995912 kernel: APIC: Switched APIC routing to: cluster x2apic Sep 5 14:29:28.995918 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Sep 5 14:29:28.995923 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Sep 5 14:29:28.995929 kernel: CPU0: Thermal monitoring enabled (TM1) Sep 5 14:29:28.995935 kernel: process: using mwait in idle threads Sep 5 14:29:28.995940 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 5 14:29:28.995946 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Sep 5 14:29:28.995951 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 14:29:28.995956 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 5 14:29:28.995962 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 5 14:29:28.995967 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 5 14:29:28.995972 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 5 14:29:28.995978 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 5 14:29:28.995983 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 5 14:29:28.995989 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 14:29:28.995995 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 14:29:28.996000 kernel: TAA: Mitigation: TSX disabled Sep 5 14:29:28.996006 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Sep 5 14:29:28.996011 kernel: SRBDS: Mitigation: Microcode Sep 5 14:29:28.996016 kernel: GDS: Mitigation: Microcode Sep 5 14:29:28.996022 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 14:29:28.996027 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 14:29:28.996033 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 14:29:28.996038 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 5 14:29:28.996043 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 5 14:29:28.996049 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 14:29:28.996055 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 5 14:29:28.996060 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 5 14:29:28.996066 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
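The block above is the CPU vulnerability and mitigation summary for this machine's processor (Spectre v1/v2, RETBleed, TAA, MMIO Stale Data, SRBDS, GDS, and so on). After boot the same information is exported under /sys/devices/system/cpu/vulnerabilities/, one file per issue; a small read-only sketch for inspecting it on a kernel that provides that directory:

    import glob, os

    # Each file holds a one-line status such as
    # "Mitigation: Enhanced / Automatic IBRS" or "Not affected".
    for path in sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*")):
        with open(path) as f:
            print(f"{os.path.basename(path):20s} {f.read().strip()}")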
Sep 5 14:29:28.996071 kernel: Freeing SMP alternatives memory: 32K Sep 5 14:29:28.996077 kernel: pid_max: default: 32768 minimum: 301 Sep 5 14:29:28.996082 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 14:29:28.996087 kernel: landlock: Up and running. Sep 5 14:29:28.996093 kernel: SELinux: Initializing. Sep 5 14:29:28.996098 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 14:29:28.996104 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 14:29:28.996109 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 5 14:29:28.996114 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 14:29:28.996121 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 14:29:28.996126 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Sep 5 14:29:28.996132 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Sep 5 14:29:28.996137 kernel: ... version: 4 Sep 5 14:29:28.996143 kernel: ... bit width: 48 Sep 5 14:29:28.996148 kernel: ... generic registers: 4 Sep 5 14:29:28.996153 kernel: ... value mask: 0000ffffffffffff Sep 5 14:29:28.996159 kernel: ... max period: 00007fffffffffff Sep 5 14:29:28.996165 kernel: ... fixed-purpose events: 3 Sep 5 14:29:28.996171 kernel: ... event mask: 000000070000000f Sep 5 14:29:28.996176 kernel: signal: max sigframe size: 2032 Sep 5 14:29:28.996181 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Sep 5 14:29:28.996187 kernel: rcu: Hierarchical SRCU implementation. Sep 5 14:29:28.996192 kernel: rcu: Max phase no-delay instances is 400. Sep 5 14:29:28.996198 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Sep 5 14:29:28.996203 kernel: smp: Bringing up secondary CPUs ... Sep 5 14:29:28.996208 kernel: smpboot: x86: Booting SMP configuration: Sep 5 14:29:28.996214 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Sep 5 14:29:28.996220 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 5 14:29:28.996226 kernel: smp: Brought up 1 node, 16 CPUs Sep 5 14:29:28.996231 kernel: smpboot: Max logical packages: 1 Sep 5 14:29:28.996237 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Sep 5 14:29:28.996242 kernel: devtmpfs: initialized Sep 5 14:29:28.996248 kernel: x86/mm: Memory block size: 128MB Sep 5 14:29:28.996253 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b28000-0x81b28fff] (4096 bytes) Sep 5 14:29:28.996258 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Sep 5 14:29:28.996265 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 14:29:28.996270 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 5 14:29:28.996276 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 14:29:28.996281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 14:29:28.996288 kernel: audit: initializing netlink subsys (disabled) Sep 5 14:29:28.996294 kernel: audit: type=2000 audit(1725546563.039:1): state=initialized audit_enabled=0 res=1 Sep 5 14:29:28.996299 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 14:29:28.996304 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 14:29:28.996310 kernel: cpuidle: using governor menu Sep 5 14:29:28.996316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 14:29:28.996321 kernel: dca service started, version 1.12.1 Sep 5 14:29:28.996327 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 5 14:29:28.996332 kernel: PCI: Using configuration type 1 for base access Sep 5 14:29:28.996338 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Sep 5 14:29:28.996343 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 5 14:29:28.996348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 14:29:28.996354 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 14:29:28.996359 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 14:29:28.996366 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 14:29:28.996371 kernel: ACPI: Added _OSI(Module Device) Sep 5 14:29:28.996376 kernel: ACPI: Added _OSI(Processor Device) Sep 5 14:29:28.996382 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 5 14:29:28.996387 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 14:29:28.996393 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Sep 5 14:29:28.996398 kernel: ACPI: Dynamic OEM Table Load: Sep 5 14:29:28.996403 kernel: ACPI: SSDT 0xFFFF960480E43000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Sep 5 14:29:28.996409 kernel: ACPI: Dynamic OEM Table Load: Sep 5 14:29:28.996415 kernel: ACPI: SSDT 0xFFFF960481E10000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Sep 5 14:29:28.996420 kernel: ACPI: Dynamic OEM Table Load: Sep 5 14:29:28.996426 kernel: ACPI: SSDT 0xFFFF960480DEB500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Sep 5 14:29:28.996431 kernel: ACPI: Dynamic OEM Table Load: Sep 5 14:29:28.996437 kernel: ACPI: SSDT 0xFFFF960481E13000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Sep 5 14:29:28.996442 kernel: ACPI: Dynamic OEM Table Load: Sep 5 14:29:28.996447 kernel: ACPI: SSDT 0xFFFF960480E59000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Sep 5 14:29:28.996453 kernel: ACPI: Dynamic OEM Table Load: Sep 5 14:29:28.996458 kernel: ACPI: SSDT 0xFFFF960480E42400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Sep 5 14:29:28.996463 kernel: ACPI: _OSC evaluated successfully for all CPUs Sep 5 14:29:28.996470 kernel: ACPI: Interpreter enabled Sep 5 14:29:28.996475 kernel: ACPI: PM: (supports S0 S5) Sep 5 14:29:28.996480 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 14:29:28.996486 kernel: HEST: Enabling Firmware First mode for corrected errors. Sep 5 14:29:28.996491 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Sep 5 14:29:28.996496 kernel: HEST: Table parsing has been initialized. Sep 5 14:29:28.996502 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Sep 5 14:29:28.996507 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 14:29:28.996513 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 14:29:28.996519 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Sep 5 14:29:28.996525 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Sep 5 14:29:28.996530 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Sep 5 14:29:28.996536 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Sep 5 14:29:28.996541 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Sep 5 14:29:28.996547 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Sep 5 14:29:28.996552 kernel: ACPI: \_TZ_.FN00: New power resource Sep 5 14:29:28.996557 kernel: ACPI: \_TZ_.FN01: New power resource Sep 5 14:29:28.996563 kernel: ACPI: \_TZ_.FN02: New power resource Sep 5 14:29:28.996569 kernel: ACPI: \_TZ_.FN03: New power resource Sep 5 14:29:28.996575 kernel: ACPI: \_TZ_.FN04: New power resource Sep 5 14:29:28.996580 kernel: ACPI: \PIN_: New power resource Sep 5 14:29:28.996586 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Sep 5 14:29:28.996662 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 14:29:28.996715 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Sep 5 14:29:28.996763 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Sep 5 14:29:28.996772 kernel: PCI host bridge to bus 0000:00 Sep 5 14:29:28.996823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 5 14:29:28.996868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 14:29:28.996911 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 14:29:28.996953 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Sep 5 14:29:28.996995 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Sep 5 14:29:28.997037 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Sep 5 14:29:28.997100 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Sep 5 14:29:28.997154 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Sep 5 14:29:28.997204 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Sep 5 14:29:28.997257 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Sep 5 14:29:28.997308 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Sep 5 14:29:28.997401 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Sep 5 14:29:28.997454 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Sep 5 14:29:28.997505 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Sep 5 14:29:28.997554 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Sep 5 14:29:28.997600 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Sep 5 14:29:28.997654 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Sep 5 14:29:28.997702 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Sep 5 14:29:28.997751 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Sep 5 14:29:28.997802 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Sep 5 14:29:28.997851 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 14:29:28.997901 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Sep 5 14:29:28.997949 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] 
Sep 5 14:29:28.998001 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 5 14:29:28.998051 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 5 14:29:28.998101 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 5 14:29:28.998158 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 5 14:29:28.998209 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Sep 5 14:29:28.998256 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 5 14:29:28.998348 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 5 14:29:28.998397 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 5 14:29:28.998448 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 5 14:29:28.998499 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 5 14:29:28.998548 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 5 14:29:28.998595 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 5 14:29:28.998643 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 5 14:29:28.998690 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 5 14:29:28.998738 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 5 14:29:28.998790 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 5 14:29:28.998837 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 5 14:29:28.998892 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 5 14:29:28.998941 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 5 14:29:28.998995 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 5 14:29:28.999047 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 5 14:29:28.999102 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 5 14:29:28.999151 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 5 14:29:28.999204 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 5 14:29:28.999252 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 5 14:29:28.999311 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 5 14:29:28.999401 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 5 14:29:28.999454 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 5 14:29:28.999504 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 14:29:28.999559 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 5 14:29:28.999612 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 5 14:29:28.999662 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 5 14:29:28.999711 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 5 14:29:28.999763 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 5 14:29:28.999811 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 5 14:29:28.999867 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 5 14:29:28.999918 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 5 14:29:28.999970 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 5 14:29:29.000019 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 5 14:29:29.000068 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 14:29:29.000118 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 14:29:29.000172 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Sep 5 14:29:29.000222 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 5 14:29:29.000272 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 5 14:29:29.000367 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 5 14:29:29.000417 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 14:29:29.000466 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 14:29:29.000515 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 14:29:29.000563 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 14:29:29.000611 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 14:29:29.000659 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 14:29:29.000712 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 5 14:29:29.000765 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 5 14:29:29.000818 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 5 14:29:29.000915 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 5 14:29:29.000964 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 5 14:29:29.001014 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 5 14:29:29.001062 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 14:29:29.001111 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 14:29:29.001161 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 14:29:29.001218 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 5 14:29:29.001270 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 5 14:29:29.001349 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 5 14:29:29.001415 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 5 14:29:29.001467 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 5 14:29:29.001572 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 5 14:29:29.001621 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 14:29:29.001673 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 14:29:29.001721 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 14:29:29.001770 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 14:29:29.001825 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 5 14:29:29.001875 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 5 14:29:29.001925 kernel: pci 0000:06:00.0: supports D1 D2 Sep 5 14:29:29.001974 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 14:29:29.002026 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 14:29:29.002074 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 14:29:29.002122 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 14:29:29.002175 kernel: pci_bus 0000:07: extended config space not accessible Sep 5 14:29:29.002232 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 5 14:29:29.002286 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 5 14:29:29.002388 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 5 14:29:29.002442 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 5 14:29:29.002493 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 14:29:29.002545 kernel: pci 0000:07:00.0: supports D1 D2 Sep 5 
14:29:29.002596 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 14:29:29.002646 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 5 14:29:29.002696 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 5 14:29:29.002745 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 14:29:29.002753 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 5 14:29:29.002761 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 5 14:29:29.002767 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 5 14:29:29.002772 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 5 14:29:29.002778 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 5 14:29:29.002784 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 5 14:29:29.002790 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 5 14:29:29.002796 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 5 14:29:29.002802 kernel: iommu: Default domain type: Translated Sep 5 14:29:29.002808 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 14:29:29.002815 kernel: PCI: Using ACPI for IRQ routing Sep 5 14:29:29.002821 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 14:29:29.002826 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 5 14:29:29.002832 kernel: e820: reserve RAM buffer [mem 0x81b28000-0x83ffffff] Sep 5 14:29:29.002838 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 5 14:29:29.002843 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 5 14:29:29.002849 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 5 14:29:29.002855 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 5 14:29:29.002906 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 5 14:29:29.002959 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 5 14:29:29.003011 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 14:29:29.003019 kernel: vgaarb: loaded Sep 5 14:29:29.003026 kernel: clocksource: Switched to clocksource tsc-early Sep 5 14:29:29.003031 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 14:29:29.003037 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 14:29:29.003043 kernel: pnp: PnP ACPI init Sep 5 14:29:29.003094 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 5 14:29:29.003146 kernel: pnp 00:02: [dma 0 disabled] Sep 5 14:29:29.003194 kernel: pnp 00:03: [dma 0 disabled] Sep 5 14:29:29.003242 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 5 14:29:29.003290 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 5 14:29:29.003338 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 5 14:29:29.003386 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 5 14:29:29.003432 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 5 14:29:29.003477 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 5 14:29:29.003521 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Sep 5 14:29:29.003564 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 5 14:29:29.003607 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 5 14:29:29.003652 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 5 14:29:29.003696 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 5 
14:29:29.003746 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 5 14:29:29.003791 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 5 14:29:29.003838 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 5 14:29:29.003882 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 5 14:29:29.003926 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 5 14:29:29.003970 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 5 14:29:29.004014 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 5 14:29:29.004065 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 5 14:29:29.004074 kernel: pnp: PnP ACPI: found 10 devices Sep 5 14:29:29.004080 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 14:29:29.004086 kernel: NET: Registered PF_INET protocol family Sep 5 14:29:29.004092 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 14:29:29.004098 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 5 14:29:29.004104 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 14:29:29.004109 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 14:29:29.004117 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 5 14:29:29.004123 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 5 14:29:29.004129 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 14:29:29.004134 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 14:29:29.004140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 14:29:29.004146 kernel: NET: Registered PF_XDP protocol family Sep 5 14:29:29.004194 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 5 14:29:29.004245 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 5 14:29:29.004297 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 5 14:29:29.004351 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 14:29:29.004402 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 14:29:29.004452 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 14:29:29.004502 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 14:29:29.004552 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 14:29:29.004600 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 14:29:29.004649 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 14:29:29.004698 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 14:29:29.004749 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 14:29:29.004798 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 14:29:29.004849 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 14:29:29.004898 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 14:29:29.004948 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 14:29:29.004997 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 14:29:29.005045 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 14:29:29.005095 kernel: pci 0000:06:00.0: PCI 
bridge to [bus 07] Sep 5 14:29:29.005144 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 5 14:29:29.005194 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 14:29:29.005242 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 14:29:29.005294 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 14:29:29.005344 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 14:29:29.005392 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 5 14:29:29.005436 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 14:29:29.005480 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 14:29:29.005523 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 14:29:29.005566 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 5 14:29:29.005610 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 5 14:29:29.005661 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 5 14:29:29.005707 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 14:29:29.005757 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 5 14:29:29.005803 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 5 14:29:29.005850 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 5 14:29:29.005895 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 5 14:29:29.005942 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 5 14:29:29.005988 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 5 14:29:29.006036 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 5 14:29:29.006083 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 5 14:29:29.006092 kernel: PCI: CLS 64 bytes, default 64 Sep 5 14:29:29.006098 kernel: DMAR: No ATSR found Sep 5 14:29:29.006104 kernel: DMAR: No SATC found Sep 5 14:29:29.006110 kernel: DMAR: dmar0: Using Queued invalidation Sep 5 14:29:29.006157 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 5 14:29:29.006207 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 5 14:29:29.006259 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 5 14:29:29.006311 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 5 14:29:29.006360 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 5 14:29:29.006408 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 5 14:29:29.006458 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 5 14:29:29.006506 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 5 14:29:29.006555 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 5 14:29:29.006604 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 5 14:29:29.006656 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 5 14:29:29.006703 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 5 14:29:29.006753 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 5 14:29:29.006801 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 5 14:29:29.006849 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 5 14:29:29.006897 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 5 14:29:29.006946 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 5 14:29:29.006993 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 5 14:29:29.007045 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 5 14:29:29.007094 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 5 14:29:29.007143 kernel: pci 
0000:00:1f.5: Adding to iommu group 14 Sep 5 14:29:29.007193 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 5 14:29:29.007242 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 5 14:29:29.007296 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 5 14:29:29.007392 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 5 14:29:29.007443 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 5 14:29:29.007498 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 5 14:29:29.007506 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 5 14:29:29.007512 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 5 14:29:29.007518 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 5 14:29:29.007524 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 5 14:29:29.007530 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 5 14:29:29.007536 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 5 14:29:29.007542 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 5 14:29:29.007595 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 5 14:29:29.007606 kernel: Initialise system trusted keyrings Sep 5 14:29:29.007612 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 5 14:29:29.007617 kernel: Key type asymmetric registered Sep 5 14:29:29.007623 kernel: Asymmetric key parser 'x509' registered Sep 5 14:29:29.007629 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 5 14:29:29.007635 kernel: io scheduler mq-deadline registered Sep 5 14:29:29.007640 kernel: io scheduler kyber registered Sep 5 14:29:29.007646 kernel: io scheduler bfq registered Sep 5 14:29:29.007696 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 5 14:29:29.007745 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 5 14:29:29.007795 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 5 14:29:29.007844 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 5 14:29:29.007893 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 5 14:29:29.007941 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 5 14:29:29.007995 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 5 14:29:29.008006 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 5 14:29:29.008012 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 5 14:29:29.008018 kernel: pstore: Using crash dump compression: deflate Sep 5 14:29:29.008023 kernel: pstore: Registered erst as persistent store backend Sep 5 14:29:29.008029 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 14:29:29.008035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 14:29:29.008041 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 14:29:29.008047 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 5 14:29:29.008053 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 5 14:29:29.008103 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 5 14:29:29.008112 kernel: i8042: PNP: No PS/2 controller found. 
Sep 5 14:29:29.008157 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 5 14:29:29.008202 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 5 14:29:29.008247 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-09-05T14:29:27 UTC (1725546567) Sep 5 14:29:29.008293 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 5 14:29:29.008302 kernel: intel_pstate: Intel P-state driver initializing Sep 5 14:29:29.008307 kernel: intel_pstate: Disabling energy efficiency optimization Sep 5 14:29:29.008315 kernel: intel_pstate: HWP enabled Sep 5 14:29:29.008321 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 5 14:29:29.008327 kernel: vesafb: scrolling: redraw Sep 5 14:29:29.008332 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 5 14:29:29.008338 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000eef478d1, using 768k, total 768k Sep 5 14:29:29.008344 kernel: Console: switching to colour frame buffer device 128x48 Sep 5 14:29:29.008350 kernel: fb0: VESA VGA frame buffer device Sep 5 14:29:29.008356 kernel: NET: Registered PF_INET6 protocol family Sep 5 14:29:29.008361 kernel: Segment Routing with IPv6 Sep 5 14:29:29.008368 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 14:29:29.008374 kernel: NET: Registered PF_PACKET protocol family Sep 5 14:29:29.008379 kernel: Key type dns_resolver registered Sep 5 14:29:29.008385 kernel: microcode: Microcode Update Driver: v2.2. Sep 5 14:29:29.008391 kernel: IPI shorthand broadcast: enabled Sep 5 14:29:29.008397 kernel: sched_clock: Marking stable (2477000774, 1380768469)->(4395427857, -537658614) Sep 5 14:29:29.008403 kernel: registered taskstats version 1 Sep 5 14:29:29.008408 kernel: Loading compiled-in X.509 certificates Sep 5 14:29:29.008414 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 5 14:29:29.008421 kernel: Key type .fscrypt registered Sep 5 14:29:29.008427 kernel: Key type fscrypt-provisioning registered Sep 5 14:29:29.008432 kernel: ima: Allocated hash algorithm: sha1 Sep 5 14:29:29.008438 kernel: ima: No architecture policies found Sep 5 14:29:29.008444 kernel: clk: Disabling unused clocks Sep 5 14:29:29.008450 kernel: Freeing unused kernel image (initmem) memory: 42704K Sep 5 14:29:29.008456 kernel: Write protecting the kernel read-only data: 36864k Sep 5 14:29:29.008461 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 5 14:29:29.008468 kernel: Run /init as init process Sep 5 14:29:29.008474 kernel: with arguments: Sep 5 14:29:29.008480 kernel: /init Sep 5 14:29:29.008486 kernel: with environment: Sep 5 14:29:29.008491 kernel: HOME=/ Sep 5 14:29:29.008497 kernel: TERM=linux Sep 5 14:29:29.008503 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 14:29:29.008510 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 14:29:29.008517 systemd[1]: Detected architecture x86-64. Sep 5 14:29:29.008524 systemd[1]: Running in initrd. Sep 5 14:29:29.008530 systemd[1]: No hostname configured, using default hostname. Sep 5 14:29:29.008536 systemd[1]: Hostname set to . Sep 5 14:29:29.008542 systemd[1]: Initializing machine ID from random generator. 
Sep 5 14:29:29.008548 systemd[1]: Queued start job for default target initrd.target. Sep 5 14:29:29.008554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 14:29:29.008560 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 14:29:29.008568 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 14:29:29.008574 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 14:29:29.008580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-ROOT.device - /dev/disk/by-partlabel/ROOT... Sep 5 14:29:29.008586 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 14:29:29.008593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 14:29:29.008599 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 14:29:29.008605 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Sep 5 14:29:29.008612 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Sep 5 14:29:29.008618 kernel: clocksource: Switched to clocksource tsc Sep 5 14:29:29.008624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 14:29:29.008630 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 14:29:29.008636 systemd[1]: Reached target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 5 14:29:29.008642 systemd[1]: Reached target paths.target - Path Units. Sep 5 14:29:29.008648 systemd[1]: Reached target slices.target - Slice Units. Sep 5 14:29:29.008654 systemd[1]: Reached target swap.target - Swaps. Sep 5 14:29:29.008660 systemd[1]: Reached target timers.target - Timer Units. Sep 5 14:29:29.008668 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 14:29:29.008674 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 14:29:29.008680 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 14:29:29.008686 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 14:29:29.008692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 14:29:29.008698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 14:29:29.008704 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 14:29:29.008710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 14:29:29.008717 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 14:29:29.008723 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 14:29:29.008729 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 14:29:29.008746 systemd-journald[261]: Collecting audit messages is disabled. Sep 5 14:29:29.008761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 14:29:29.008768 systemd-journald[261]: Journal started Sep 5 14:29:29.008781 systemd-journald[261]: Runtime Journal (/run/log/journal/ed8af417aa40402cbd4319852170d6c1) is 8.0M, max 639.9M, 631.9M free. 
Sep 5 14:29:29.015367 systemd-modules-load[262]: Inserted module 'overlay' Sep 5 14:29:29.044297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 14:29:29.087324 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 14:29:29.087343 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 14:29:29.106336 kernel: Bridge firewalling registered Sep 5 14:29:29.106352 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 14:29:29.123763 systemd-modules-load[262]: Inserted module 'br_netfilter' Sep 5 14:29:29.142608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 14:29:29.172542 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 14:29:29.180570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 14:29:29.198738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 14:29:29.235597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 14:29:29.236020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 14:29:29.236424 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 14:29:29.236845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 14:29:29.241295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 14:29:29.242027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 14:29:29.242143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 14:29:29.243070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 14:29:29.247651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 14:29:29.259641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 14:29:29.300697 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 14:29:29.328644 dracut-cmdline[296]: dracut-dracut-053 Sep 5 14:29:29.336519 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 5 14:29:29.533324 kernel: SCSI subsystem initialized Sep 5 14:29:29.556317 kernel: Loading iSCSI transport class v2.0-870. Sep 5 14:29:29.579290 kernel: iscsi: registered transport (tcp) Sep 5 14:29:29.611687 kernel: iscsi: registered transport (qla4xxx) Sep 5 14:29:29.611703 kernel: QLogic iSCSI HBA Driver Sep 5 14:29:29.644877 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 14:29:29.664556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 14:29:29.756918 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 5 14:29:29.756953 kernel: device-mapper: uevent: version 1.0.3 Sep 5 14:29:29.768316 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 14:29:29.836363 kernel: raid6: avx2x4 gen() 52429 MB/s Sep 5 14:29:29.868370 kernel: raid6: avx2x2 gen() 53380 MB/s Sep 5 14:29:29.905149 kernel: raid6: avx2x1 gen() 45235 MB/s Sep 5 14:29:29.905165 kernel: raid6: using algorithm avx2x2 gen() 53380 MB/s Sep 5 14:29:29.953065 kernel: raid6: .... xor() 31440 MB/s, rmw enabled Sep 5 14:29:29.953082 kernel: raid6: using avx2x2 recovery algorithm Sep 5 14:29:29.994314 kernel: xor: automatically using best checksumming function avx Sep 5 14:29:30.112321 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 14:29:30.117910 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 14:29:30.148625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 14:29:30.155455 systemd-udevd[482]: Using default interface naming scheme 'v255'. Sep 5 14:29:30.157962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 14:29:30.185445 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 14:29:30.237465 dracut-pre-trigger[494]: rd.md=0: removing MD RAID activation Sep 5 14:29:30.255040 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 14:29:30.275626 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 14:29:30.335829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 14:29:30.378463 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 5 14:29:30.378482 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 5 14:29:30.357628 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 14:29:30.409338 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 14:29:30.409354 kernel: ACPI: bus type USB registered Sep 5 14:29:30.357661 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 14:29:30.437069 kernel: usbcore: registered new interface driver usbfs Sep 5 14:29:30.437084 kernel: usbcore: registered new interface driver hub Sep 5 14:29:30.416133 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 14:29:30.476306 kernel: usbcore: registered new device driver usb Sep 5 14:29:30.476332 kernel: PTP clock support registered Sep 5 14:29:30.476350 kernel: libata version 3.00 loaded. Sep 5 14:29:30.466556 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 14:29:30.507826 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 14:29:30.507845 kernel: AES CTR mode by8 optimization enabled Sep 5 14:29:30.471751 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 5 14:29:31.300415 kernel: ahci 0000:00:17.0: version 3.0 Sep 5 14:29:31.300515 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 14:29:31.300582 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 5 14:29:31.300644 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 5 14:29:31.300704 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 5 14:29:31.300767 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 5 14:29:31.300827 kernel: scsi host0: ahci Sep 5 14:29:31.300891 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 14:29:31.300951 kernel: scsi host1: ahci Sep 5 14:29:31.301020 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 5 14:29:31.301080 kernel: scsi host2: ahci Sep 5 14:29:31.301138 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 5 14:29:31.301200 kernel: scsi host3: ahci Sep 5 14:29:31.301259 kernel: hub 1-0:1.0: USB hub found Sep 5 14:29:31.301334 kernel: scsi host4: ahci Sep 5 14:29:31.301395 kernel: hub 1-0:1.0: 16 ports detected Sep 5 14:29:31.301460 kernel: scsi host5: ahci Sep 5 14:29:31.301522 kernel: hub 2-0:1.0: USB hub found Sep 5 14:29:31.301592 kernel: scsi host6: ahci Sep 5 14:29:31.301651 kernel: hub 2-0:1.0: 10 ports detected Sep 5 14:29:31.301716 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Sep 5 14:29:31.301724 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 5 14:29:31.301731 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Sep 5 14:29:31.301738 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 5 14:29:31.301745 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Sep 5 14:29:31.301752 kernel: pps pps0: new PPS source ptp0 Sep 5 14:29:31.301813 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Sep 5 14:29:31.301822 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 5 14:29:31.301890 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Sep 5 14:29:31.301898 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 14:29:31.301959 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Sep 5 14:29:31.301968 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:24 Sep 5 14:29:31.302027 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Sep 5 14:29:31.302037 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 5 14:29:31.302098 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 5 14:29:31.302193 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 5 14:29:31.302257 kernel: hub 1-14:1.0: USB hub found Sep 5 14:29:31.302333 kernel: pps pps1: new PPS source ptp1 Sep 5 14:29:31.302394 kernel: hub 1-14:1.0: 4 ports detected Sep 5 14:29:31.302462 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 5 14:29:31.302529 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 14:29:31.302590 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:25 Sep 5 14:29:31.302651 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 5 14:29:31.302711 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302719 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 5 14:29:31.302780 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302788 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 14:29:31.302795 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302803 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:31.302810 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 14:29:31.302817 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 14:29:31.302824 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 14:29:30.507438 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 14:29:31.371398 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 14:29:31.371410 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Sep 5 14:29:31.371491 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 14:29:31.371499 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 14:29:31.371565 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 14:29:31.371573 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 5 14:29:30.859413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 14:29:31.457799 kernel: ata1.00: Features: NCQ-prio Sep 5 14:29:31.457812 kernel: ata2.00: Features: NCQ-prio Sep 5 14:29:31.457819 kernel: ata1.00: configured for UDMA/133 Sep 5 14:29:31.472290 kernel: ata2.00: configured for UDMA/133 Sep 5 14:29:31.472306 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 14:29:31.485339 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 14:29:31.485372 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 14:29:31.557109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 14:29:31.589467 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 5 14:29:31.589599 kernel: usbcore: registered new interface driver usbhid Sep 5 14:29:31.589614 kernel: usbhid: USB HID core driver Sep 5 14:29:31.603333 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 5 14:29:31.603351 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 5 14:29:31.629344 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 14:29:31.660336 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 5 14:29:31.678615 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 5 14:29:31.678702 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 5 14:29:31.683522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 14:29:31.758019 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 5 14:29:31.766715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 14:29:31.802116 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 14:29:31.802141 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 14:29:31.802151 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 14:29:31.834277 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 14:29:31.834514 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 5 14:29:31.834684 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 5 14:29:31.834856 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 5 14:29:31.840343 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 5 14:29:31.849593 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 5 14:29:31.849679 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 5 14:29:31.849744 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 14:29:31.867415 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 14:29:31.867513 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 5 14:29:31.887815 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 5 14:29:31.888340 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 14:29:32.013746 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 14:29:32.034348 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 5 14:29:32.034430 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 14:29:32.064907 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 5 14:29:32.103187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 14:29:32.233366 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sdb3 scanned by (udev-worker) (528) Sep 5 14:29:32.233387 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 14:29:32.233518 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by (udev-worker) (577) Sep 5 14:29:32.233532 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Sep 5 14:29:32.233636 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 14:29:32.213228 systemd[1]: Found device dev-disk-by\x2dpartlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 14:29:32.256999 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 14:29:32.268365 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 14:29:32.282840 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 14:29:32.316384 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 14:29:32.342424 systemd[1]: Starting decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition... Sep 5 14:29:32.370734 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 14:29:32.381597 systemd[1]: Finished decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 14:29:32.417517 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 14:29:32.417560 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 14:29:32.435838 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Sep 5 14:29:32.476279 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 14:29:32.536348 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 14:29:32.536452 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 5 14:29:32.505372 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 14:29:32.521389 systemd[1]: Reached target basic.target - Basic System. Sep 5 14:29:32.527440 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 14:29:32.558409 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 14:29:32.606422 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 5 14:29:32.606440 sh[707]: Success Sep 5 14:29:32.596866 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 14:29:32.618058 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 14:29:32.621435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 14:29:32.652561 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 14:29:32.682501 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 14:29:32.732290 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 14:29:32.733500 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 14:29:32.756545 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 5 14:29:32.734534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 14:29:32.788364 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 5 14:29:32.776540 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 14:29:32.814508 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 14:29:32.823944 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 14:29:32.953528 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 5 14:29:32.953543 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 14:29:32.953551 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 14:29:32.953559 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 14:29:32.953567 kernel: BTRFS info (device dm-0): using free space tree Sep 5 14:29:32.953574 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 14:29:32.953859 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 14:29:32.995988 systemd-fsck[758]: ROOT: clean, 85/553520 files, 82898/553472 blocks Sep 5 14:29:33.006859 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 14:29:33.030479 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 14:29:33.136356 kernel: EXT4-fs (sdb9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 5 14:29:33.136875 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 14:29:33.152600 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 14:29:33.170361 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 14:29:33.190076 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 5 14:29:33.288739 kernel: BTRFS info (device sdb6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 5 14:29:33.288757 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 5 14:29:33.288766 kernel: BTRFS info (device sdb6): using free space tree Sep 5 14:29:33.288773 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 5 14:29:33.288780 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 5 14:29:33.278460 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 14:29:33.302018 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 14:29:33.332530 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 14:29:33.396502 initrd-setup-root[790]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 14:29:33.407416 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory Sep 5 14:29:33.417405 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 14:29:33.427386 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 14:29:33.496309 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 14:29:33.521534 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 14:29:33.533731 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 14:29:33.556848 systemd[1]: Reached target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 14:29:33.592683 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 14:29:33.592683 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 14:29:33.629456 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 14:29:33.596650 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 14:29:33.656517 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 14:29:33.656569 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 14:29:33.674688 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 14:29:33.685518 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 14:29:33.713576 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 14:29:33.730605 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 14:29:33.798352 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 14:29:33.825703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 14:29:33.850270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 14:29:33.850717 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 14:29:33.877827 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 14:29:33.878187 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 14:29:33.906049 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 14:29:33.927869 systemd[1]: Stopped target basic.target - Basic System. 
Sep 5 14:29:33.946866 systemd[1]: Stopped target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. Sep 5 14:29:33.965853 systemd[1]: Stopped target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. Sep 5 14:29:33.989861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 14:29:34.013982 systemd[1]: Stopped target paths.target - Path Units. Sep 5 14:29:34.031876 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 14:29:34.050976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 14:29:34.071866 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 14:29:34.093978 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 14:29:34.113993 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 14:29:34.132979 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 14:29:34.152861 systemd[1]: Stopped target swap.target - Swaps. Sep 5 14:29:34.171938 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 14:29:34.172209 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 14:29:34.189004 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 14:29:34.189268 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 14:29:34.208850 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 14:29:34.209192 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 14:29:34.234962 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 14:29:34.254749 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 14:29:34.255193 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 14:29:34.273874 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 14:29:34.294750 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 14:29:34.295155 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 14:29:34.315763 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 14:29:34.316120 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 14:29:34.348805 systemd[1]: decrypt-root.service: Deactivated successfully. Sep 5 14:29:34.349206 systemd[1]: Stopped decrypt-root.service - Generate and execute a systemd-cryptsetup service to decrypt the ROOT partition. Sep 5 14:29:34.371938 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 14:29:34.372275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 14:29:34.390916 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 14:29:34.391273 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 14:29:34.414933 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 14:29:34.415270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 14:29:34.432916 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 14:29:34.433252 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 14:29:34.452909 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 14:29:34.453249 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 5 14:29:34.470926 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 14:29:34.471265 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 14:29:34.495919 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 14:29:34.496262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 14:29:34.516933 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 14:29:34.517271 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 14:29:34.548339 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 14:29:34.578824 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 14:29:34.578914 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 14:29:34.600463 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 14:29:34.600561 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 14:29:34.619498 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 14:29:34.619645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 14:29:34.638745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 14:29:34.638859 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 14:29:34.658587 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 14:29:34.658738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 14:29:34.688844 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 14:29:34.689015 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 14:29:34.720836 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 14:29:34.721005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 14:29:34.763365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 14:29:34.782447 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 14:29:35.031491 systemd-journald[261]: Received SIGTERM from PID 1 (systemd). Sep 5 14:29:34.782537 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 14:29:34.801583 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 14:29:34.801653 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 14:29:34.823640 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 14:29:34.823758 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 14:29:34.842707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 14:29:34.842865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 14:29:34.867749 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 14:29:34.868021 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 14:29:34.889661 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 14:29:34.917683 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 14:29:34.973138 systemd[1]: Switching root. 
Sep 5 14:29:35.031755 systemd-journald[261]: Journal stopped Sep 5 14:29:37.599018 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 14:29:37.599033 kernel: SELinux: policy capability open_perms=1 Sep 5 14:29:37.599040 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 14:29:37.599046 kernel: SELinux: policy capability always_check_network=0 Sep 5 14:29:37.599051 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 14:29:37.599056 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 14:29:37.599063 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 14:29:37.599068 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 14:29:37.599073 kernel: audit: type=1403 audit(1725546575.221:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 14:29:37.599080 systemd[1]: Successfully loaded SELinux policy in 153.747ms. Sep 5 14:29:37.599088 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.910ms. Sep 5 14:29:37.599095 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 14:29:37.599101 systemd[1]: Detected architecture x86-64. Sep 5 14:29:37.599107 systemd[1]: Detected first boot. Sep 5 14:29:37.599113 systemd[1]: Hostname set to . Sep 5 14:29:37.599121 systemd[1]: Initializing machine ID from random generator. Sep 5 14:29:37.599128 zram_generator::config[1001]: No configuration found. Sep 5 14:29:37.599134 systemd[1]: Populated /etc with preset unit settings. Sep 5 14:29:37.599141 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 14:29:37.599147 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 14:29:37.599153 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 14:29:37.599160 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 14:29:37.599167 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 14:29:37.599174 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 14:29:37.599180 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 14:29:37.599187 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 14:29:37.599193 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 14:29:37.599200 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 14:29:37.599206 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 14:29:37.599214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 14:29:37.599220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 14:29:37.599227 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 14:29:37.599233 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 14:29:37.599240 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Sep 5 14:29:37.599246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 14:29:37.599253 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Sep 5 14:29:37.599259 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 14:29:37.599267 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 14:29:37.599273 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 14:29:37.599280 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 14:29:37.599292 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 14:29:37.599300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 14:29:37.599307 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 14:29:37.599313 systemd[1]: Reached target slices.target - Slice Units. Sep 5 14:29:37.599321 systemd[1]: Reached target swap.target - Swaps. Sep 5 14:29:37.599328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 14:29:37.599334 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 14:29:37.599341 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 14:29:37.599348 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 14:29:37.599354 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 14:29:37.599362 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 14:29:37.599369 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 14:29:37.599376 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 14:29:37.599382 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 14:29:37.599389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 14:29:37.599396 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 14:29:37.599403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 14:29:37.599411 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 14:29:37.599418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 14:29:37.599425 systemd[1]: Reached target machines.target - Containers. Sep 5 14:29:37.599431 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 14:29:37.599438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 14:29:37.599445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 14:29:37.599452 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 14:29:37.599459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 14:29:37.599465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 14:29:37.599473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 14:29:37.599480 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 5 14:29:37.599487 kernel: ACPI: bus type drm_connector registered Sep 5 14:29:37.599493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 14:29:37.599500 kernel: fuse: init (API version 7.39) Sep 5 14:29:37.599508 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 14:29:37.599515 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 14:29:37.599522 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 14:29:37.599529 kernel: loop: module loaded Sep 5 14:29:37.599535 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 14:29:37.599542 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 14:29:37.599549 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 14:29:37.599564 systemd-journald[1101]: Collecting audit messages is disabled. Sep 5 14:29:37.599579 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 14:29:37.599587 systemd-journald[1101]: Journal started Sep 5 14:29:37.599601 systemd-journald[1101]: Runtime Journal (/run/log/journal/3cc18d0c1ec7459f85c1acc0a4cd72ac) is 8.0M, max 639.9M, 631.9M free. Sep 5 14:29:35.735750 systemd[1]: Queued start job for default target multi-user.target. Sep 5 14:29:35.751765 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Sep 5 14:29:35.752001 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 14:29:37.642478 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 14:29:37.684292 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 14:29:37.717353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 14:29:37.752888 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 14:29:37.752951 systemd[1]: Stopped verity-setup.service. Sep 5 14:29:37.816327 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 14:29:37.837476 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 14:29:37.846865 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 14:29:37.856548 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 14:29:37.866552 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 14:29:37.876548 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 14:29:37.886520 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 14:29:37.896503 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 14:29:37.906662 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 14:29:37.917721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 14:29:37.928914 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 14:29:37.929131 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 14:29:37.941226 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 14:29:37.941610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 14:29:37.953109 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 5 14:29:37.953488 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 14:29:37.964244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 14:29:37.964604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 14:29:37.976109 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 14:29:37.976483 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 14:29:37.987132 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 14:29:37.987506 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 14:29:37.998134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 14:29:38.008164 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 14:29:38.020125 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 14:29:38.032115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 14:29:38.066168 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 14:29:38.091656 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 14:29:38.104048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 14:29:38.113552 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 14:29:38.113669 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 14:29:38.126151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 14:29:38.154733 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 14:29:38.167939 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 14:29:38.177759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 14:29:38.180015 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 14:29:38.190878 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 14:29:38.201406 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 14:29:38.210725 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 14:29:38.217017 systemd-journald[1101]: Time spent on flushing to /var/log/journal/3cc18d0c1ec7459f85c1acc0a4cd72ac is 12.620ms for 1139 entries. Sep 5 14:29:38.217017 systemd-journald[1101]: System Journal (/var/log/journal/3cc18d0c1ec7459f85c1acc0a4cd72ac) is 8.0M, max 195.6M, 187.6M free. Sep 5 14:29:38.263442 systemd-journald[1101]: Received client request to flush runtime journal. Sep 5 14:29:38.229390 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 14:29:38.229993 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 14:29:38.239889 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 14:29:38.249033 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Sep 5 14:29:38.261035 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 14:29:38.288291 kernel: loop0: detected capacity change from 0 to 140728 Sep 5 14:29:38.300524 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 14:29:38.302927 systemd-tmpfiles[1133]: ACLs are not supported, ignoring. Sep 5 14:29:38.302936 systemd-tmpfiles[1133]: ACLs are not supported, ignoring. Sep 5 14:29:38.312464 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 14:29:38.330514 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 14:29:38.336290 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 14:29:38.348526 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 14:29:38.359448 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 14:29:38.370504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 14:29:38.395290 kernel: loop1: detected capacity change from 0 to 8 Sep 5 14:29:38.399538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 14:29:38.413316 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 14:29:38.450295 kernel: loop2: detected capacity change from 0 to 209816 Sep 5 14:29:38.451563 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 14:29:38.463197 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 14:29:38.472933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 14:29:38.473399 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 14:29:38.484810 udevadm[1137]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 14:29:38.494238 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 14:29:38.516435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 14:29:38.538346 kernel: loop3: detected capacity change from 0 to 89336 Sep 5 14:29:38.543978 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Sep 5 14:29:38.543989 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Sep 5 14:29:38.546355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 14:29:38.554942 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 14:29:38.557634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 14:29:38.603293 kernel: loop4: detected capacity change from 0 to 140728 Sep 5 14:29:38.622354 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 14:29:38.633349 kernel: loop5: detected capacity change from 0 to 8 Sep 5 14:29:38.653291 kernel: loop6: detected capacity change from 0 to 209816 Sep 5 14:29:38.666461 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 14:29:38.686290 kernel: loop7: detected capacity change from 0 to 89336 Sep 5 14:29:38.693340 (sd-merge)[1160]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Sep 5 14:29:38.693591 (sd-merge)[1160]: Merged extensions into '/usr'. 
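The sd-merge step above overlays the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-packet' onto /usr. As a minimal sketch (the exact output is not part of this log), the merge can later be inspected from a shell with systemd's own tooling:

    systemd-sysext status    # shows which hierarchies (/usr, /opt) currently carry merged extensions
    systemd-sysext list      # lists the extension images found in the search directories (e.g. /etc/extensions, /var/lib/extensions)
    systemd-sysext refresh   # unmerges and re-merges after an extension image is added or removed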
Sep 5 14:29:38.694564 systemd-udevd[1162]: Using default interface naming scheme 'v255'. Sep 5 14:29:38.695642 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 14:29:38.695648 systemd[1]: Reloading... Sep 5 14:29:38.729303 zram_generator::config[1186]: No configuration found. Sep 5 14:29:38.749566 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 5 14:29:38.749624 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1263) Sep 5 14:29:38.749636 kernel: ACPI: button: Sleep Button [SLPB] Sep 5 14:29:38.764099 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1207) Sep 5 14:29:38.764132 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 14:29:38.772292 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1207) Sep 5 14:29:38.806453 kernel: ACPI: button: Power Button [PWRF] Sep 5 14:29:38.880295 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 14:29:38.880355 kernel: IPMI message handler: version 39.2 Sep 5 14:29:38.913256 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 5 14:29:38.931941 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Sep 5 14:29:38.959891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 14:29:38.962320 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 5 14:29:38.981291 kernel: iTCO_vendor_support: vendor-support=0 Sep 5 14:29:38.981317 kernel: ipmi device interface Sep 5 14:29:39.020304 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 5 14:29:39.020509 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 5 14:29:39.022446 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Sep 5 14:29:39.022537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 14:29:39.050292 kernel: ipmi_si: IPMI System Interface driver Sep 5 14:29:39.065703 systemd[1]: Reloading finished in 369 ms. Sep 5 14:29:39.073627 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 5 14:29:39.092063 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 5 14:29:39.108955 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 5 14:29:39.125768 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 5 14:29:39.137290 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Sep 5 14:29:39.180407 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 5 14:29:39.180498 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 5 14:29:39.200889 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 5 14:29:39.240342 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Sep 5 14:29:39.240453 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Sep 5 14:29:39.251315 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Sep 5 14:29:39.312293 kernel: intel_rapl_common: Found RAPL domain package Sep 5 14:29:39.312335 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Sep 5 14:29:39.312435 kernel: intel_rapl_common: Found RAPL domain core Sep 5 14:29:39.312447 kernel: intel_rapl_common: Found RAPL domain dram Sep 5 14:29:39.381185 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 14:29:39.392522 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 14:29:39.440292 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 5 14:29:39.440474 systemd[1]: Starting ensure-sysext.service... Sep 5 14:29:39.452956 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 14:29:39.457291 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 5 14:29:39.470260 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 14:29:39.489359 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 14:29:39.497483 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 14:29:39.497684 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 14:29:39.498170 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 14:29:39.498341 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 5 14:29:39.498375 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 5 14:29:39.499984 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 14:29:39.499989 systemd-tmpfiles[1340]: Skipping /boot Sep 5 14:29:39.502049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 14:29:39.503916 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 14:29:39.503919 systemd-tmpfiles[1340]: Skipping /boot Sep 5 14:29:39.512615 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 14:29:39.523584 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 14:29:39.523789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 14:29:39.527291 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 14:29:39.528136 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 14:29:39.528664 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 14:29:39.529369 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 14:29:39.530333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 14:29:39.530964 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 14:29:39.531809 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 14:29:39.532361 systemd[1]: Reloading requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Sep 5 14:29:39.532368 systemd[1]: Reloading... Sep 5 14:29:39.537916 lvm[1351]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Sep 5 14:29:39.554142 augenrules[1374]: No rules Sep 5 14:29:39.578291 zram_generator::config[1407]: No configuration found. Sep 5 14:29:39.604197 systemd-networkd[1338]: lo: Link UP Sep 5 14:29:39.604200 systemd-networkd[1338]: lo: Gained carrier Sep 5 14:29:39.606628 systemd-networkd[1338]: bond0: netdev ready Sep 5 14:29:39.607001 systemd-networkd[1338]: Enumeration completed Sep 5 14:29:39.609325 systemd-networkd[1338]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:35:08.network. Sep 5 14:29:39.639312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 14:29:39.694491 systemd[1]: Reloading finished in 161 ms. Sep 5 14:29:39.709604 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 14:29:39.719454 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 14:29:39.735556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 14:29:39.746556 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 14:29:39.756507 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 14:29:39.767490 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 14:29:39.778495 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 14:29:39.794688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 14:29:39.805195 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 14:29:39.817075 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 14:29:39.818998 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 14:29:39.829058 systemd-resolved[1353]: Positive Trust Anchors: Sep 5 14:29:39.829065 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 14:29:39.829090 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 14:29:39.829202 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 14:29:39.831741 systemd-resolved[1353]: Using system hostname 'ci-4054.1.0-a-f4c57b7dbd'. Sep 5 14:29:39.842489 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 14:29:39.843312 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 14:29:39.854767 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 14:29:39.866788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 5 14:29:39.866915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 14:29:39.867713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 14:29:39.882418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 14:29:39.892292 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Sep 5 14:29:39.913190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 14:29:39.916589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 14:29:39.916687 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 14:29:39.916756 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 14:29:39.917293 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Sep 5 14:29:39.917482 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 14:29:39.919762 systemd-networkd[1338]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:35:09.network. Sep 5 14:29:39.938705 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 14:29:39.948727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 14:29:39.948831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 14:29:39.960825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 14:29:39.960950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 14:29:39.971905 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 14:29:39.972072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 14:29:39.992622 systemd[1]: Reached target network.target - Network. Sep 5 14:29:40.000787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 14:29:40.011709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 14:29:40.012254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 14:29:40.028897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 14:29:40.041168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 14:29:40.053106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 14:29:40.062502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 14:29:40.062589 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 14:29:40.062650 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 14:29:40.063430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 5 14:29:40.063529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 14:29:40.074825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 14:29:40.074934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 14:29:40.086883 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 14:29:40.087031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 14:29:40.103945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 14:29:40.104417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 14:29:40.127887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 14:29:40.145035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 14:29:40.157400 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 5 14:29:40.186083 systemd-networkd[1338]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 5 14:29:40.186297 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Sep 5 14:29:40.186738 systemd-networkd[1338]: enp1s0f0np0: Link UP Sep 5 14:29:40.189118 systemd-networkd[1338]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:35:08.network. Sep 5 14:29:40.189369 systemd-networkd[1338]: enp1s0f1np1: Link UP Sep 5 14:29:40.189619 systemd-networkd[1338]: bond0: Link UP Sep 5 14:29:40.207339 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 5 14:29:40.208030 systemd-networkd[1338]: enp1s0f0np0: Gained carrier Sep 5 14:29:40.213487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 14:29:40.219482 systemd-networkd[1338]: enp1s0f1np1: Gained carrier Sep 5 14:29:40.225142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 14:29:40.232467 systemd-networkd[1338]: bond0: Gained carrier Sep 5 14:29:40.235430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 14:29:40.235555 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 14:29:40.235642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 14:29:40.236642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 14:29:40.236761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 14:29:40.247951 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 14:29:40.248126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 14:29:40.258175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 14:29:40.258449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 14:29:40.269553 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 14:29:40.269620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 14:29:40.280305 systemd[1]: Finished ensure-sysext.service. 
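The bond0 bring-up above is driven by the unit files named in the log (05-bond0.network plus the per-port 10-1c:34:da:5c:35:08.network and 10-1c:34:da:5c:35:09.network); their contents are not shown here. A hypothetical sketch of what such systemd-networkd units commonly look like for an 802.3ad bond, the mode being inferred only from the "No 802.3ad response from the link partner" warning:

    # hypothetical .netdev defining bond0 (something equivalent must exist for "bond0: netdev ready")
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad
    LACPTransmitRate=fast
    MIIMonitorSec=0.1

    # hypothetical content for 10-1c:34:da:5c:35:08.network, enslaving the first port by MAC address
    [Match]
    MACAddress=1c:34:da:5c:35:08

    [Network]
    Bond=bond0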
Sep 5 14:29:40.289885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 14:29:40.289920 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 14:29:40.309339 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 5 14:29:40.309361 kernel: bond0: active interface up! Sep 5 14:29:40.325396 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 14:29:40.365749 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 14:29:40.377438 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 14:29:40.387409 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 14:29:40.398383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 14:29:40.409371 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 14:29:40.428340 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 14:29:40.428358 systemd[1]: Reached target paths.target - Path Units. Sep 5 14:29:40.438352 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 5 14:29:40.445374 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 14:29:40.455439 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 14:29:40.465416 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 14:29:40.476361 systemd[1]: Reached target timers.target - Timer Units. Sep 5 14:29:40.484832 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 14:29:40.495064 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 14:29:40.505082 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 14:29:40.514690 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 14:29:40.524453 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 14:29:40.534385 systemd[1]: Reached target basic.target - Basic System. Sep 5 14:29:40.542406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 14:29:40.542421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 14:29:40.554382 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 14:29:40.565059 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 14:29:40.574930 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 14:29:40.583925 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 14:29:40.594019 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 5 14:29:40.596331 jq[1506]: false Sep 5 14:29:40.600037 dbus-daemon[1505]: [system] SELinux support is enabled Sep 5 14:29:40.600888 coreos-metadata[1504]: Sep 05 14:29:40.600 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 14:29:40.603424 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 14:29:40.604063 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 14:29:40.611428 extend-filesystems[1509]: Found loop4 Sep 5 14:29:40.611428 extend-filesystems[1509]: Found loop5 Sep 5 14:29:40.668349 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 5 14:29:40.668389 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (1264) Sep 5 14:29:40.668400 extend-filesystems[1509]: Found loop6 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found loop7 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sda Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb1 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb2 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb3 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found usr Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb4 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb6 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb7 Sep 5 14:29:40.668400 extend-filesystems[1509]: Found sdb9 Sep 5 14:29:40.668400 extend-filesystems[1509]: Checking size of /dev/sdb9 Sep 5 14:29:40.668400 extend-filesystems[1509]: Resized partition /dev/sdb9 Sep 5 14:29:40.614176 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 14:29:40.815598 extend-filesystems[1519]: resize2fs 1.47.1 (20-May-2024) Sep 5 14:29:40.683396 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 14:29:40.692982 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 14:29:40.736455 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 14:29:40.758231 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Sep 5 14:29:40.768202 systemd-logind[1530]: Watching system buttons on /dev/input/event3 (Power Button) Sep 5 14:29:40.840771 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 14:29:40.768211 systemd-logind[1530]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 5 14:29:40.840886 update_engine[1535]: I0905 14:29:40.807439 1535 main.cc:92] Flatcar Update Engine starting Sep 5 14:29:40.840886 update_engine[1535]: I0905 14:29:40.808221 1535 update_check_scheduler.cc:74] Next update check in 2m52s Sep 5 14:29:40.768222 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 5 14:29:40.841042 jq[1536]: true Sep 5 14:29:40.768371 systemd-logind[1530]: New seat seat0. Sep 5 14:29:40.779719 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 14:29:40.780062 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 14:29:40.787021 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 14:29:40.799681 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 5 14:29:40.832887 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 14:29:40.856515 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 14:29:40.856610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 14:29:40.856812 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 14:29:40.856898 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 14:29:40.866816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 14:29:40.866905 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 14:29:40.877584 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 14:29:40.890200 (ntainerd)[1548]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 14:29:40.891760 jq[1547]: false Sep 5 14:29:40.892929 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Sep 5 14:29:40.893059 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition being skipped. Sep 5 14:29:40.893721 dbus-daemon[1505]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 14:29:40.895399 tar[1545]: linux-amd64/helm Sep 5 14:29:40.902015 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 5 14:29:40.902148 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Sep 5 14:29:40.904530 systemd[1]: Started update-engine.service - Update Engine. Sep 5 14:29:40.929494 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 14:29:40.938168 systemd[1]: Starting sshkeys.service... Sep 5 14:29:40.945364 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 14:29:40.945523 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 14:29:40.956440 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 14:29:40.956519 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 14:29:40.976465 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 14:29:40.989877 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 14:29:40.990011 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 14:29:41.011974 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 14:29:41.015600 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 14:29:41.026662 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 14:29:41.040482 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 14:29:41.059600 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 5 14:29:41.070374 coreos-metadata[1583]: Sep 05 14:29:41.070 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 14:29:41.072241 systemd[1]: Started getty@tty1.service - Getty on tty1. 
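update_engine and locksmithd above provide Flatcar's update polling and reboot coordination (the log shows strategy "reboot" and a next update check in 2m52s). A brief, illustrative sketch of how their state is typically queried on a Flatcar host; neither command nor its output appears in this log:

    update_engine_client -status   # prints the current operation (e.g. UPDATE_STATUS_IDLE) and the tracked version
    locksmithctl status            # shows reboot-lock holders and available slots for the configured strategy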
Sep 5 14:29:41.072808 containerd[1548]: time="2024-09-05T14:29:41.072758663Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 5 14:29:41.082129 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 5 14:29:41.089873 containerd[1548]: time="2024-09-05T14:29:41.089820300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090661 containerd[1548]: time="2024-09-05T14:29:41.090614990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090661 containerd[1548]: time="2024-09-05T14:29:41.090630505Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 14:29:41.090661 containerd[1548]: time="2024-09-05T14:29:41.090639708Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 14:29:41.090732 containerd[1548]: time="2024-09-05T14:29:41.090724789Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 14:29:41.090748 containerd[1548]: time="2024-09-05T14:29:41.090735285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090807 containerd[1548]: time="2024-09-05T14:29:41.090767826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090807 containerd[1548]: time="2024-09-05T14:29:41.090776455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090917 containerd[1548]: time="2024-09-05T14:29:41.090883819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090917 containerd[1548]: time="2024-09-05T14:29:41.090893744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090917 containerd[1548]: time="2024-09-05T14:29:41.090901181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090917 containerd[1548]: time="2024-09-05T14:29:41.090906931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 14:29:41.090974 containerd[1548]: time="2024-09-05T14:29:41.090947065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 14:29:41.091087 containerd[1548]: time="2024-09-05T14:29:41.091054808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 14:29:41.091157 containerd[1548]: time="2024-09-05T14:29:41.091113524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 14:29:41.091157 containerd[1548]: time="2024-09-05T14:29:41.091123038Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 14:29:41.091191 containerd[1548]: time="2024-09-05T14:29:41.091171101Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 14:29:41.091205 containerd[1548]: time="2024-09-05T14:29:41.091199085Z" level=info msg="metadata content store policy set" policy=shared Sep 5 14:29:41.091503 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 14:29:41.113166 containerd[1548]: time="2024-09-05T14:29:41.113152857Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 14:29:41.113201 containerd[1548]: time="2024-09-05T14:29:41.113180636Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 14:29:41.113201 containerd[1548]: time="2024-09-05T14:29:41.113191709Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 14:29:41.113229 containerd[1548]: time="2024-09-05T14:29:41.113200623Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 14:29:41.113229 containerd[1548]: time="2024-09-05T14:29:41.113208507Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 14:29:41.113288 containerd[1548]: time="2024-09-05T14:29:41.113277211Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 14:29:41.113425 containerd[1548]: time="2024-09-05T14:29:41.113417664Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 14:29:41.113489 containerd[1548]: time="2024-09-05T14:29:41.113482124Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 14:29:41.113509 containerd[1548]: time="2024-09-05T14:29:41.113492419Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 14:29:41.113509 containerd[1548]: time="2024-09-05T14:29:41.113500022Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 14:29:41.113509 containerd[1548]: time="2024-09-05T14:29:41.113507787Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 14:29:41.113552 containerd[1548]: time="2024-09-05T14:29:41.113515717Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 14:29:41.113552 containerd[1548]: time="2024-09-05T14:29:41.113522535Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 14:29:41.113552 containerd[1548]: time="2024-09-05T14:29:41.113529841Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 14:29:41.113552 containerd[1548]: time="2024-09-05T14:29:41.113538122Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 5 14:29:41.113552 containerd[1548]: time="2024-09-05T14:29:41.113545794Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113552834Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113559837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113571589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113578722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113586682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113595186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113602358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113609726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113617 containerd[1548]: time="2024-09-05T14:29:41.113615902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113623209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113630134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113640358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113647653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113654246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113660936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113669547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113681122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113688711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113695024Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113719702Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 14:29:41.113740 containerd[1548]: time="2024-09-05T14:29:41.113733930Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 14:29:41.113897 containerd[1548]: time="2024-09-05T14:29:41.113740534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 14:29:41.113897 containerd[1548]: time="2024-09-05T14:29:41.113750886Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 14:29:41.113897 containerd[1548]: time="2024-09-05T14:29:41.113756419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113897 containerd[1548]: time="2024-09-05T14:29:41.113764725Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 14:29:41.113897 containerd[1548]: time="2024-09-05T14:29:41.113773534Z" level=info msg="NRI interface is disabled by configuration." Sep 5 14:29:41.113897 containerd[1548]: time="2024-09-05T14:29:41.113782503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 14:29:41.113980 containerd[1548]: time="2024-09-05T14:29:41.113954432Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 14:29:41.114056 containerd[1548]: time="2024-09-05T14:29:41.113990185Z" level=info msg="Connect containerd service" Sep 5 14:29:41.114056 containerd[1548]: time="2024-09-05T14:29:41.114003882Z" level=info msg="using legacy CRI server" Sep 5 14:29:41.114056 containerd[1548]: time="2024-09-05T14:29:41.114007806Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 14:29:41.114101 containerd[1548]: time="2024-09-05T14:29:41.114055867Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 14:29:41.114356 containerd[1548]: time="2024-09-05T14:29:41.114345465Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 14:29:41.114490 containerd[1548]: time="2024-09-05T14:29:41.114470432Z" level=info msg="Start subscribing containerd event" Sep 5 14:29:41.114511 containerd[1548]: time="2024-09-05T14:29:41.114499111Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 14:29:41.114527 containerd[1548]: time="2024-09-05T14:29:41.114500305Z" level=info msg="Start recovering state" Sep 5 14:29:41.114542 containerd[1548]: time="2024-09-05T14:29:41.114530891Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 14:29:41.114557 containerd[1548]: time="2024-09-05T14:29:41.114549953Z" level=info msg="Start event monitor" Sep 5 14:29:41.114573 containerd[1548]: time="2024-09-05T14:29:41.114557426Z" level=info msg="Start snapshots syncer" Sep 5 14:29:41.114573 containerd[1548]: time="2024-09-05T14:29:41.114562702Z" level=info msg="Start cni network conf syncer for default" Sep 5 14:29:41.114573 containerd[1548]: time="2024-09-05T14:29:41.114567282Z" level=info msg="Start streaming server" Sep 5 14:29:41.114615 containerd[1548]: time="2024-09-05T14:29:41.114602641Z" level=info msg="containerd successfully booted in 0.043175s" Sep 5 14:29:41.114636 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 14:29:41.204322 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 5 14:29:41.227164 extend-filesystems[1519]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 5 14:29:41.227164 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 5 14:29:41.227164 extend-filesystems[1519]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. 
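Editor's note: the CRI plugin configuration dumped above points the network plugin at /etc/cni/net.d (NetworkPluginConfDir, with binaries under /opt/cni/bin), and the "failed to load cni during init ... no network config found in /etc/cni/net.d" error that follows is expected on a first boot: the directory stays empty until a network add-on writes a config there, at which point containerd's conf syncer picks it up. A minimal sketch of that check, in Go and using only the standard library (the directory comes from the log; the extension list reflects commonly used CNI config names and is an assumption):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory containerd's CRI plugin watches for CNI network configs,
	// per NetworkPluginConfDir in the config dump above.
	confDir := "/etc/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read", confDir, "-", err)
		return
	}

	var confs []string
	for _, e := range entries {
		// Typical CNI config file extensions (assumption for illustration).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// Matches the log: "no network config found in /etc/cni/net.d".
		fmt.Println("no CNI config yet; pod networking waits for a network add-on")
		return
	}
	fmt.Println("CNI configs:", confs)
}
```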
Sep 5 14:29:41.271375 extend-filesystems[1509]: Resized filesystem in /dev/sdb9 Sep 5 14:29:41.271418 tar[1545]: linux-amd64/LICENSE Sep 5 14:29:41.271418 tar[1545]: linux-amd64/README.md Sep 5 14:29:41.228031 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 14:29:41.228131 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 14:29:41.281054 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 14:29:41.384396 systemd-networkd[1338]: bond0: Gained IPv6LL Sep 5 14:29:42.026335 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 14:29:42.038851 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 14:29:42.062435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 14:29:42.072993 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 14:29:42.090546 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 14:29:42.683061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 14:29:42.694842 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 14:29:42.846419 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Sep 5 14:29:42.846540 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Sep 5 14:29:43.218007 kubelet[1623]: E0905 14:29:43.217935 1623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 14:29:43.219182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 14:29:43.219258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 14:29:45.640745 systemd-timesyncd[1499]: Contacted time server 65.100.46.166:123 (0.flatcar.pool.ntp.org). Sep 5 14:29:45.640903 systemd-timesyncd[1499]: Initial clock synchronization to Thu 2024-09-05 14:29:45.509821 UTC. Sep 5 14:29:46.139731 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 14:29:46.140886 login[1593]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 14:29:46.145813 systemd-logind[1530]: New session 1 of user core. Sep 5 14:29:46.146633 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 14:29:46.165644 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 14:29:46.167297 systemd-logind[1530]: New session 2 of user core. Sep 5 14:29:46.172844 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 14:29:46.174425 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 14:29:46.179684 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 14:29:46.211587 coreos-metadata[1504]: Sep 05 14:29:46.211 INFO Fetch successful Sep 5 14:29:46.251803 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 14:29:46.253222 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Sep 5 14:29:46.253493 systemd[1651]: Queued start job for default target default.target. 
Sep 5 14:29:46.254009 systemd[1651]: Created slice app.slice - User Application Slice. Sep 5 14:29:46.254022 systemd[1651]: Reached target paths.target - Paths. Sep 5 14:29:46.254031 systemd[1651]: Reached target timers.target - Timers. Sep 5 14:29:46.254701 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 14:29:46.260184 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 14:29:46.260222 systemd[1651]: Reached target sockets.target - Sockets. Sep 5 14:29:46.260236 systemd[1651]: Reached target basic.target - Basic System. Sep 5 14:29:46.260263 systemd[1651]: Reached target default.target - Main User Target. Sep 5 14:29:46.260313 systemd[1651]: Startup finished in 76ms. Sep 5 14:29:46.260359 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 14:29:46.261241 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 14:29:46.261805 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 14:29:46.581945 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Sep 5 14:29:46.601553 coreos-metadata[1583]: Sep 05 14:29:46.601 INFO Fetch successful Sep 5 14:29:46.642165 unknown[1583]: wrote ssh authorized keys file for user: core Sep 5 14:29:46.667554 update-ssh-keys[1688]: Updated "/home/core/.ssh/authorized_keys" Sep 5 14:29:46.667844 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 14:29:46.668485 systemd[1]: Finished sshkeys.service. Sep 5 14:29:46.669496 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 14:29:46.669641 systemd[1]: Startup finished in 2.666s (kernel) + 7.222s (initrd) + 11.600s (userspace) = 21.489s. Sep 5 14:29:47.925844 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 14:29:47.937769 systemd[1]: Started sshd@0-147.75.90.7:22-139.178.89.65:58540.service - OpenSSH per-connection server daemon (139.178.89.65:58540). Sep 5 14:29:47.970799 sshd[1693]: Accepted publickey for core from 139.178.89.65 port 58540 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:29:47.972123 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:29:47.976937 systemd-logind[1530]: New session 3 of user core. Sep 5 14:29:47.991803 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 14:29:48.055316 systemd[1]: Started sshd@1-147.75.90.7:22-139.178.89.65:58550.service - OpenSSH per-connection server daemon (139.178.89.65:58550). Sep 5 14:29:48.083514 sshd[1698]: Accepted publickey for core from 139.178.89.65 port 58550 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:29:48.084133 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:29:48.086625 systemd-logind[1530]: New session 4 of user core. Sep 5 14:29:48.101447 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 14:29:48.150133 sshd[1698]: pam_unix(sshd:session): session closed for user core Sep 5 14:29:48.163067 systemd[1]: sshd@1-147.75.90.7:22-139.178.89.65:58550.service: Deactivated successfully. Sep 5 14:29:48.164674 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 14:29:48.166150 systemd-logind[1530]: Session 4 logged out. Waiting for processes to exit. Sep 5 14:29:48.167506 systemd[1]: Started sshd@2-147.75.90.7:22-139.178.89.65:58564.service - OpenSSH per-connection server daemon (139.178.89.65:58564). Sep 5 14:29:48.168662 systemd-logind[1530]: Removed session 4. 
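Editor's note: the kubelet failure logged at 14:29:43 above (and repeated on every scheduled restart later in this log) is the kubelet exiting because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap that file is only written during cluster initialization or join, so systemd keeps rescheduling kubelet.service until it appears. A small Go sketch of the same check, for illustration only (the path is taken verbatim from the error message; the kubeadm remediation note is an assumption about the provisioning workflow):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Path named in the kubelet's "failed to load Kubelet config file" error.
	const kubeletConfig = "/var/lib/kubelet/config.yaml"

	if _, err := os.Stat(kubeletConfig); errors.Is(err, fs.ErrNotExist) {
		// Same condition as the log: kubelet.service exits with status 1
		// and systemd keeps restarting it until the file is written.
		fmt.Printf("%s missing: kubelet will keep restarting until it is written (e.g. by kubeadm)\n", kubeletConfig)
		return
	} else if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Println(kubeletConfig, "present; kubelet can load its configuration")
}
```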
Sep 5 14:29:48.199996 sshd[1705]: Accepted publickey for core from 139.178.89.65 port 58564 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:29:48.203258 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:29:48.214263 systemd-logind[1530]: New session 5 of user core. Sep 5 14:29:48.224710 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 14:29:48.284851 sshd[1705]: pam_unix(sshd:session): session closed for user core Sep 5 14:29:48.301950 systemd[1]: sshd@2-147.75.90.7:22-139.178.89.65:58564.service: Deactivated successfully. Sep 5 14:29:48.305556 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 14:29:48.308872 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Sep 5 14:29:48.323467 systemd[1]: Started sshd@3-147.75.90.7:22-139.178.89.65:58568.service - OpenSSH per-connection server daemon (139.178.89.65:58568). Sep 5 14:29:48.324015 systemd-logind[1530]: Removed session 5. Sep 5 14:29:48.350600 sshd[1713]: Accepted publickey for core from 139.178.89.65 port 58568 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:29:48.351524 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:29:48.354714 systemd-logind[1530]: New session 6 of user core. Sep 5 14:29:48.367530 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 14:29:48.422259 sshd[1713]: pam_unix(sshd:session): session closed for user core Sep 5 14:29:48.439052 systemd[1]: sshd@3-147.75.90.7:22-139.178.89.65:58568.service: Deactivated successfully. Sep 5 14:29:48.439907 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 14:29:48.440645 systemd-logind[1530]: Session 6 logged out. Waiting for processes to exit. Sep 5 14:29:48.441356 systemd[1]: Started sshd@4-147.75.90.7:22-139.178.89.65:58584.service - OpenSSH per-connection server daemon (139.178.89.65:58584). Sep 5 14:29:48.441988 systemd-logind[1530]: Removed session 6. Sep 5 14:29:48.475874 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 58584 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:29:48.477557 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:29:48.484655 systemd-logind[1530]: New session 7 of user core. Sep 5 14:29:48.494709 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 14:29:48.565667 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 14:29:48.565816 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 14:29:48.585127 sudo[1724]: pam_unix(sudo:session): session closed for user root Sep 5 14:29:48.586376 sshd[1721]: pam_unix(sshd:session): session closed for user core Sep 5 14:29:48.598364 systemd[1]: sshd@4-147.75.90.7:22-139.178.89.65:58584.service: Deactivated successfully. Sep 5 14:29:48.599408 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 14:29:48.600462 systemd-logind[1530]: Session 7 logged out. Waiting for processes to exit. Sep 5 14:29:48.601413 systemd[1]: Started sshd@5-147.75.90.7:22-139.178.89.65:58590.service - OpenSSH per-connection server daemon (139.178.89.65:58590). Sep 5 14:29:48.602283 systemd-logind[1530]: Removed session 7. 
Sep 5 14:29:48.642269 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 58590 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:29:48.643919 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:29:48.649261 systemd-logind[1530]: New session 8 of user core. Sep 5 14:29:48.667826 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 14:29:48.727032 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 14:29:48.727201 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 14:29:48.729189 sudo[1733]: pam_unix(sudo:session): session closed for user root Sep 5 14:29:48.731718 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 14:29:48.731862 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 14:29:48.750587 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 14:29:48.751810 auditctl[1736]: No rules Sep 5 14:29:48.752027 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 14:29:48.752167 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 14:29:48.753840 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 14:29:48.769818 augenrules[1754]: No rules Sep 5 14:29:48.770186 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 14:29:48.770727 sudo[1732]: pam_unix(sudo:session): session closed for user root Sep 5 14:29:48.771621 sshd[1729]: pam_unix(sshd:session): session closed for user core Sep 5 14:29:48.773656 systemd[1]: sshd@5-147.75.90.7:22-139.178.89.65:58590.service: Deactivated successfully. Sep 5 14:29:48.774439 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 14:29:48.774875 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit. Sep 5 14:29:48.775925 systemd[1]: Started sshd@6-147.75.90.7:22-139.178.89.65:58592.service - OpenSSH per-connection server daemon (139.178.89.65:58592). Sep 5 14:29:48.776436 systemd-logind[1530]: Removed session 8. Sep 5 14:29:48.807930 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 58592 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:29:48.808988 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:29:48.812568 systemd-logind[1530]: New session 9 of user core. Sep 5 14:29:48.823522 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 14:29:48.871170 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 14:29:48.871426 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 14:29:49.056642 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 14:29:49.056712 (dockerd)[1775]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 14:29:49.306874 dockerd[1775]: time="2024-09-05T14:29:49.306786288Z" level=info msg="Starting up" Sep 5 14:29:49.468818 dockerd[1775]: time="2024-09-05T14:29:49.468770811Z" level=info msg="Loading containers: start." 
Sep 5 14:29:49.562363 kernel: Initializing XFRM netlink socket Sep 5 14:29:49.632407 systemd-networkd[1338]: docker0: Link UP Sep 5 14:29:49.649170 dockerd[1775]: time="2024-09-05T14:29:49.649152517Z" level=info msg="Loading containers: done." Sep 5 14:29:49.658008 dockerd[1775]: time="2024-09-05T14:29:49.657985306Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 14:29:49.658078 dockerd[1775]: time="2024-09-05T14:29:49.658038223Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 14:29:49.658103 dockerd[1775]: time="2024-09-05T14:29:49.658088989Z" level=info msg="Daemon has completed initialization" Sep 5 14:29:49.658498 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2521024027-merged.mount: Deactivated successfully. Sep 5 14:29:49.674010 dockerd[1775]: time="2024-09-05T14:29:49.673954623Z" level=info msg="API listen on /run/docker.sock" Sep 5 14:29:49.674096 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 14:29:50.628355 containerd[1548]: time="2024-09-05T14:29:50.628319918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 5 14:29:51.226426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202177397.mount: Deactivated successfully. Sep 5 14:29:53.469572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 14:29:53.478468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 14:29:53.613094 containerd[1548]: time="2024-09-05T14:29:53.613064920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:53.657634 containerd[1548]: time="2024-09-05T14:29:53.657551059Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735" Sep 5 14:29:53.683527 containerd[1548]: time="2024-09-05T14:29:53.683501340Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:53.685479 containerd[1548]: time="2024-09-05T14:29:53.685436640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:53.686143 containerd[1548]: time="2024-09-05T14:29:53.686100568Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 3.057742407s" Sep 5 14:29:53.686143 containerd[1548]: time="2024-09-05T14:29:53.686119224Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\"" Sep 5 14:29:53.697763 containerd[1548]: time="2024-09-05T14:29:53.697742390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 5 14:29:53.699325 systemd[1]: Started kubelet.service - 
kubelet: The Kubernetes Node Agent. Sep 5 14:29:53.701770 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 14:29:53.727355 kubelet[2042]: E0905 14:29:53.727234 2042 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 14:29:53.729824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 14:29:53.729908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 14:29:55.997676 containerd[1548]: time="2024-09-05T14:29:55.997649751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:55.997880 containerd[1548]: time="2024-09-05T14:29:55.997838060Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709" Sep 5 14:29:55.998309 containerd[1548]: time="2024-09-05T14:29:55.998298659Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:55.999831 containerd[1548]: time="2024-09-05T14:29:55.999816869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:56.000462 containerd[1548]: time="2024-09-05T14:29:56.000450850Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 2.302684212s" Sep 5 14:29:56.000486 containerd[1548]: time="2024-09-05T14:29:56.000467091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\"" Sep 5 14:29:56.012305 containerd[1548]: time="2024-09-05T14:29:56.012279079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 5 14:29:57.408690 containerd[1548]: time="2024-09-05T14:29:57.408635141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:57.408902 containerd[1548]: time="2024-09-05T14:29:57.408876404Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777" Sep 5 14:29:57.409259 containerd[1548]: time="2024-09-05T14:29:57.409220057Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:57.410769 containerd[1548]: time="2024-09-05T14:29:57.410734302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:57.411437 containerd[1548]: time="2024-09-05T14:29:57.411396143Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.399088145s" Sep 5 14:29:57.411437 containerd[1548]: time="2024-09-05T14:29:57.411411279Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\"" Sep 5 14:29:57.422508 containerd[1548]: time="2024-09-05T14:29:57.422461522Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 5 14:29:58.343454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083151922.mount: Deactivated successfully. Sep 5 14:29:58.525604 containerd[1548]: time="2024-09-05T14:29:58.525579572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:58.525818 containerd[1548]: time="2024-09-05T14:29:58.525691224Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449" Sep 5 14:29:58.526040 containerd[1548]: time="2024-09-05T14:29:58.526028187Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:58.527003 containerd[1548]: time="2024-09-05T14:29:58.526990163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:58.527399 containerd[1548]: time="2024-09-05T14:29:58.527385422Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 1.104903058s" Sep 5 14:29:58.527439 containerd[1548]: time="2024-09-05T14:29:58.527403392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\"" Sep 5 14:29:58.538660 containerd[1548]: time="2024-09-05T14:29:58.538638881Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 5 14:29:59.020487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324437685.mount: Deactivated successfully. 
Sep 5 14:29:59.021804 containerd[1548]: time="2024-09-05T14:29:59.021785542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:59.021992 containerd[1548]: time="2024-09-05T14:29:59.021972984Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 5 14:29:59.022283 containerd[1548]: time="2024-09-05T14:29:59.022272623Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:59.023673 containerd[1548]: time="2024-09-05T14:29:59.023633652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:29:59.024121 containerd[1548]: time="2024-09-05T14:29:59.024086924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 485.427501ms" Sep 5 14:29:59.024121 containerd[1548]: time="2024-09-05T14:29:59.024116562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 5 14:29:59.036251 containerd[1548]: time="2024-09-05T14:29:59.036231227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 5 14:29:59.576189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876767697.mount: Deactivated successfully. 
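Editor's note: the PullImage / ImageCreate / "Pulled image ... in ..." triplets above are containerd's CRI plugin fetching the v1.28.13 control-plane images into the k8s.io namespace over /run/containerd/containerd.sock (the exact CRI caller is not shown in this excerpt). A rough equivalent using the containerd Go client is sketched below; it assumes the containerd 1.7 Go module (matching the v1.7.20 runtime reported later in this log) and is an illustration, not what the provisioning tooling itself runs:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the log shows containerd serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack one of the images seen in the log above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "into the k8s.io namespace")
}
```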
Sep 5 14:30:02.748406 containerd[1548]: time="2024-09-05T14:30:02.748348845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:02.748636 containerd[1548]: time="2024-09-05T14:30:02.748575961Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 5 14:30:02.748984 containerd[1548]: time="2024-09-05T14:30:02.748945103Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:02.751007 containerd[1548]: time="2024-09-05T14:30:02.750965125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:02.751571 containerd[1548]: time="2024-09-05T14:30:02.751523444Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.715273445s" Sep 5 14:30:02.751571 containerd[1548]: time="2024-09-05T14:30:02.751538407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 5 14:30:02.762481 containerd[1548]: time="2024-09-05T14:30:02.762419925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 5 14:30:03.337103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911015924.mount: Deactivated successfully. Sep 5 14:30:03.980193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 14:30:03.995519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 5 14:30:04.202824 containerd[1548]: time="2024-09-05T14:30:04.202796499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:04.202995 containerd[1548]: time="2024-09-05T14:30:04.202968637Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Sep 5 14:30:04.203318 containerd[1548]: time="2024-09-05T14:30:04.203304869Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:04.204434 containerd[1548]: time="2024-09-05T14:30:04.204423402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:04.205192 containerd[1548]: time="2024-09-05T14:30:04.205181650Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.442742001s" Sep 5 14:30:04.205215 containerd[1548]: time="2024-09-05T14:30:04.205196402Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Sep 5 14:30:04.205672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 14:30:04.207960 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 14:30:04.231241 kubelet[2207]: E0905 14:30:04.231137 2207 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 14:30:04.232463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 14:30:04.232539 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 14:30:06.034510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 14:30:06.054639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 14:30:06.067004 systemd[1]: Reloading requested from client PID 2373 ('systemctl') (unit session-9.scope)... Sep 5 14:30:06.067011 systemd[1]: Reloading... Sep 5 14:30:06.138366 zram_generator::config[2410]: No configuration found. Sep 5 14:30:06.200900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 14:30:06.261204 systemd[1]: Reloading finished in 193 ms. Sep 5 14:30:06.311770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 14:30:06.313049 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 14:30:06.314069 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 5 14:30:06.314168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 14:30:06.315044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 14:30:06.516931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 14:30:06.519238 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 14:30:06.542343 kubelet[2477]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 14:30:06.542343 kubelet[2477]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 14:30:06.542343 kubelet[2477]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 14:30:06.544515 kubelet[2477]: I0905 14:30:06.544463 2477 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 14:30:06.650394 kubelet[2477]: I0905 14:30:06.650335 2477 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 5 14:30:06.650394 kubelet[2477]: I0905 14:30:06.650346 2477 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 14:30:06.650442 kubelet[2477]: I0905 14:30:06.650437 2477 server.go:895] "Client rotation is on, will bootstrap in background" Sep 5 14:30:06.660308 kubelet[2477]: I0905 14:30:06.660274 2477 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 14:30:06.661877 kubelet[2477]: E0905 14:30:06.661854 2477 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.90.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.675408 kubelet[2477]: I0905 14:30:06.675395 2477 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 14:30:06.676468 kubelet[2477]: I0905 14:30:06.676429 2477 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 14:30:06.676560 kubelet[2477]: I0905 14:30:06.676524 2477 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 5 14:30:06.676906 kubelet[2477]: I0905 14:30:06.676871 2477 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 14:30:06.676906 kubelet[2477]: I0905 14:30:06.676879 2477 container_manager_linux.go:301] "Creating device plugin manager" Sep 5 14:30:06.677816 kubelet[2477]: I0905 14:30:06.677781 2477 state_mem.go:36] "Initialized new in-memory state store" Sep 5 14:30:06.679860 kubelet[2477]: I0905 14:30:06.679823 2477 kubelet.go:393] "Attempting to sync node with API server" Sep 5 14:30:06.679860 kubelet[2477]: I0905 14:30:06.679833 2477 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 14:30:06.679860 kubelet[2477]: I0905 14:30:06.679847 2477 kubelet.go:309] "Adding apiserver pod source" Sep 5 14:30:06.679860 kubelet[2477]: I0905 14:30:06.679855 2477 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 14:30:06.680980 kubelet[2477]: I0905 14:30:06.680956 2477 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 5 14:30:06.681733 kubelet[2477]: W0905 14:30:06.681712 2477 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.90.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-f4c57b7dbd&limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.681777 kubelet[2477]: E0905 14:30:06.681738 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.90.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-f4c57b7dbd&limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.682184 kubelet[2477]: W0905 14:30:06.682175 2477 probe.go:268] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 14:30:06.682521 kubelet[2477]: I0905 14:30:06.682513 2477 server.go:1232] "Started kubelet" Sep 5 14:30:06.682606 kubelet[2477]: I0905 14:30:06.682579 2477 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 14:30:06.682721 kubelet[2477]: W0905 14:30:06.682659 2477 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.90.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.682753 kubelet[2477]: E0905 14:30:06.682724 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.90.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.682856 kubelet[2477]: E0905 14:30:06.682843 2477 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 5 14:30:06.682856 kubelet[2477]: E0905 14:30:06.682874 2477 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 14:30:06.683598 kubelet[2477]: I0905 14:30:06.683572 2477 server.go:462] "Adding debug handlers to kubelet server" Sep 5 14:30:06.686291 kubelet[2477]: I0905 14:30:06.686276 2477 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 5 14:30:06.686534 kubelet[2477]: E0905 14:30:06.686445 2477 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4054.1.0-a-f4c57b7dbd.17f25f7f16b0f34b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4054.1.0-a-f4c57b7dbd", UID:"ci-4054.1.0-a-f4c57b7dbd", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4054.1.0-a-f4c57b7dbd"}, FirstTimestamp:time.Date(2024, time.September, 5, 14, 30, 6, 682501963, time.Local), LastTimestamp:time.Date(2024, time.September, 5, 14, 30, 6, 682501963, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4054.1.0-a-f4c57b7dbd"}': 'Post "https://147.75.90.7:6443/api/v1/namespaces/default/events": dial tcp 147.75.90.7:6443: connect: connection refused'(may retry after sleeping) Sep 5 14:30:06.686614 kubelet[2477]: I0905 14:30:06.686551 2477 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 14:30:06.687421 kubelet[2477]: I0905 14:30:06.687413 2477 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 14:30:06.687541 kubelet[2477]: I0905 
14:30:06.687529 2477 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 5 14:30:06.687576 kubelet[2477]: I0905 14:30:06.687552 2477 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 5 14:30:06.687605 kubelet[2477]: I0905 14:30:06.687598 2477 reconciler_new.go:29] "Reconciler: start to sync state" Sep 5 14:30:06.687671 kubelet[2477]: E0905 14:30:06.687663 2477 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-f4c57b7dbd?timeout=10s\": dial tcp 147.75.90.7:6443: connect: connection refused" interval="200ms" Sep 5 14:30:06.687715 kubelet[2477]: W0905 14:30:06.687696 2477 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.90.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.687734 kubelet[2477]: E0905 14:30:06.687724 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.696098 kubelet[2477]: I0905 14:30:06.696085 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 14:30:06.696678 kubelet[2477]: I0905 14:30:06.696623 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 14:30:06.696678 kubelet[2477]: I0905 14:30:06.696649 2477 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 14:30:06.696678 kubelet[2477]: I0905 14:30:06.696676 2477 kubelet.go:2303] "Starting kubelet main sync loop" Sep 5 14:30:06.696753 kubelet[2477]: E0905 14:30:06.696699 2477 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 14:30:06.697091 kubelet[2477]: W0905 14:30:06.697072 2477 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.90.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.697123 kubelet[2477]: E0905 14:30:06.697100 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:06.797654 kubelet[2477]: E0905 14:30:06.797549 2477 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 14:30:06.847890 kubelet[2477]: I0905 14:30:06.847799 2477 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:06.848640 kubelet[2477]: E0905 14:30:06.848552 2477 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.90.7:6443/api/v1/nodes\": dial tcp 147.75.90.7:6443: connect: connection refused" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:06.849109 kubelet[2477]: I0905 14:30:06.849027 2477 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 14:30:06.849109 kubelet[2477]: I0905 14:30:06.849069 2477 
cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 14:30:06.849109 kubelet[2477]: I0905 14:30:06.849109 2477 state_mem.go:36] "Initialized new in-memory state store" Sep 5 14:30:06.851373 kubelet[2477]: I0905 14:30:06.851326 2477 policy_none.go:49] "None policy: Start" Sep 5 14:30:06.851914 kubelet[2477]: I0905 14:30:06.851867 2477 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 5 14:30:06.851914 kubelet[2477]: I0905 14:30:06.851898 2477 state_mem.go:35] "Initializing new in-memory state store" Sep 5 14:30:06.857406 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 14:30:06.871050 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 14:30:06.872785 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 14:30:06.886913 kubelet[2477]: I0905 14:30:06.886873 2477 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 14:30:06.887053 kubelet[2477]: I0905 14:30:06.887042 2477 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 14:30:06.887457 kubelet[2477]: E0905 14:30:06.887414 2477 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:06.888173 kubelet[2477]: E0905 14:30:06.888081 2477 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-f4c57b7dbd?timeout=10s\": dial tcp 147.75.90.7:6443: connect: connection refused" interval="400ms" Sep 5 14:30:06.998814 kubelet[2477]: I0905 14:30:06.998581 2477 topology_manager.go:215] "Topology Admit Handler" podUID="daa97420ffa5640eb66c3fec4cd9a7d5" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.003676 kubelet[2477]: I0905 14:30:07.003591 2477 topology_manager.go:215] "Topology Admit Handler" podUID="15bd4b1ec2257f093a54bc779c9f519d" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.004686 kubelet[2477]: I0905 14:30:07.004643 2477 topology_manager.go:215] "Topology Admit Handler" podUID="c73ce240b0735a2a014ddc28a4f7a62c" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.008384 systemd[1]: Created slice kubepods-burstable-poddaa97420ffa5640eb66c3fec4cd9a7d5.slice - libcontainer container kubepods-burstable-poddaa97420ffa5640eb66c3fec4cd9a7d5.slice. Sep 5 14:30:07.034722 systemd[1]: Created slice kubepods-burstable-pod15bd4b1ec2257f093a54bc779c9f519d.slice - libcontainer container kubepods-burstable-pod15bd4b1ec2257f093a54bc779c9f519d.slice. Sep 5 14:30:07.050861 systemd[1]: Created slice kubepods-burstable-podc73ce240b0735a2a014ddc28a4f7a62c.slice - libcontainer container kubepods-burstable-podc73ce240b0735a2a014ddc28a4f7a62c.slice. 
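Editor's note: the three "Topology Admit Handler" entries and the kubepods-burstable slices above are the kubelet admitting static pods read from disk rather than from the API server (which is still refusing connections at this point). "Adding static pod path" earlier in the log names /etc/kubernetes/manifests, and the host-path volumes that follow (k8s-certs, ca-certs, kubeconfig, flexvolume-dir) belong to those manifests. A hedged sketch of reading such manifests with the Kubernetes Go types is below; the directory comes from the log, while the use of sigs.k8s.io/yaml and k8s.io/api is an assumption made purely for illustration:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Static pod directory named by "Adding static pod path" in the kubelet log.
	manifests, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, path := range manifests {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("skip", path, ":", err)
			continue
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			fmt.Println("skip", path, ":", err)
			continue
		}
		// Each manifest is run locally and later mirrored to the API server,
		// e.g. as kube-apiserver-ci-4054.1.0-a-f4c57b7dbd in this log.
		fmt.Printf("%s: %s/%s, %d volume(s)\n",
			filepath.Base(path), pod.Namespace, pod.Name, len(pod.Spec.Volumes))
	}
}
```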
Sep 5 14:30:07.051276 kubelet[2477]: I0905 14:30:07.051255 2477 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.051656 kubelet[2477]: E0905 14:30:07.051608 2477 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.90.7:6443/api/v1/nodes\": dial tcp 147.75.90.7:6443: connect: connection refused" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.089962 kubelet[2477]: I0905 14:30:07.089861 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa97420ffa5640eb66c3fec4cd9a7d5-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"daa97420ffa5640eb66c3fec4cd9a7d5\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.089962 kubelet[2477]: I0905 14:30:07.089964 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa97420ffa5640eb66c3fec4cd9a7d5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"daa97420ffa5640eb66c3fec4cd9a7d5\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.090277 kubelet[2477]: I0905 14:30:07.090054 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.090277 kubelet[2477]: I0905 14:30:07.090118 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.090277 kubelet[2477]: I0905 14:30:07.090178 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.090277 kubelet[2477]: I0905 14:30:07.090234 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa97420ffa5640eb66c3fec4cd9a7d5-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"daa97420ffa5640eb66c3fec4cd9a7d5\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.090649 kubelet[2477]: I0905 14:30:07.090313 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.090649 kubelet[2477]: I0905 14:30:07.090416 2477 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.090649 kubelet[2477]: I0905 14:30:07.090473 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c73ce240b0735a2a014ddc28a4f7a62c-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"c73ce240b0735a2a014ddc28a4f7a62c\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.289222 kubelet[2477]: E0905 14:30:07.288992 2477 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054.1.0-a-f4c57b7dbd?timeout=10s\": dial tcp 147.75.90.7:6443: connect: connection refused" interval="800ms" Sep 5 14:30:07.334175 containerd[1548]: time="2024-09-05T14:30:07.334028717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-f4c57b7dbd,Uid:daa97420ffa5640eb66c3fec4cd9a7d5,Namespace:kube-system,Attempt:0,}" Sep 5 14:30:07.348658 containerd[1548]: time="2024-09-05T14:30:07.348605514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd,Uid:15bd4b1ec2257f093a54bc779c9f519d,Namespace:kube-system,Attempt:0,}" Sep 5 14:30:07.354178 containerd[1548]: time="2024-09-05T14:30:07.354135766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-f4c57b7dbd,Uid:c73ce240b0735a2a014ddc28a4f7a62c,Namespace:kube-system,Attempt:0,}" Sep 5 14:30:07.453685 kubelet[2477]: I0905 14:30:07.453640 2477 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.453858 kubelet[2477]: E0905 14:30:07.453822 2477 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.90.7:6443/api/v1/nodes\": dial tcp 147.75.90.7:6443: connect: connection refused" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:07.837388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215685986.mount: Deactivated successfully. 
Sep 5 14:30:07.838554 containerd[1548]: time="2024-09-05T14:30:07.838505962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 14:30:07.838750 containerd[1548]: time="2024-09-05T14:30:07.838732239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 14:30:07.839390 containerd[1548]: time="2024-09-05T14:30:07.839334341Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 14:30:07.839889 containerd[1548]: time="2024-09-05T14:30:07.839846473Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 14:30:07.840100 containerd[1548]: time="2024-09-05T14:30:07.840081826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 14:30:07.840679 containerd[1548]: time="2024-09-05T14:30:07.840655860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 14:30:07.840717 containerd[1548]: time="2024-09-05T14:30:07.840695688Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 14:30:07.842491 containerd[1548]: time="2024-09-05T14:30:07.842446797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.25394ms" Sep 5 14:30:07.843453 containerd[1548]: time="2024-09-05T14:30:07.843434040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 14:30:07.844677 containerd[1548]: time="2024-09-05T14:30:07.844615998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.448837ms" Sep 5 14:30:07.845740 containerd[1548]: time="2024-09-05T14:30:07.845689566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.046444ms" Sep 5 14:30:07.931433 kubelet[2477]: W0905 14:30:07.931370 2477 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.90.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-f4c57b7dbd&limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 
14:30:07.931433 kubelet[2477]: E0905 14:30:07.931414 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.90.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054.1.0-a-f4c57b7dbd&limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:07.950182 containerd[1548]: time="2024-09-05T14:30:07.950094102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:07.950182 containerd[1548]: time="2024-09-05T14:30:07.950157420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:07.950182 containerd[1548]: time="2024-09-05T14:30:07.950166441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950214325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950198452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950235190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950245370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950242429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950270728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950279549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950302331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:07.950336 containerd[1548]: time="2024-09-05T14:30:07.950333141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:07.972562 systemd[1]: Started cri-containerd-16be3fea589f7061def982ace6994524469b28a77328d2b5ff68feda2646a9bf.scope - libcontainer container 16be3fea589f7061def982ace6994524469b28a77328d2b5ff68feda2646a9bf. Sep 5 14:30:07.973393 systemd[1]: Started cri-containerd-3c5dc6c3f69345e9fa6ba70d4478deb17b86b0b959b545b868766d812207f416.scope - libcontainer container 3c5dc6c3f69345e9fa6ba70d4478deb17b86b0b959b545b868766d812207f416. Sep 5 14:30:07.974241 systemd[1]: Started cri-containerd-3dcf46b0333d69a4d0c7048a260881f9c5cac363e6f2b191d6eb1bd6d6503e80.scope - libcontainer container 3dcf46b0333d69a4d0c7048a260881f9c5cac363e6f2b191d6eb1bd6d6503e80. 
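Each "returns sandbox id" entry below corresponds to a CRI RunPodSandbox call served by containerd, backed by the cri-containerd-<id>.scope units systemd just started. A read-only sketch that lists those sandboxes over the CRI socket; the socket path is the containerd default and an assumption, since the log does not print it:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		// Prints e.g. "16be3fea589f kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd SANDBOX_READY".
		fmt.Printf("%s %s/%s %v\n", sb.Id[:12], sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}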
Sep 5 14:30:07.986966 kubelet[2477]: W0905 14:30:07.986904 2477 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.90.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:07.986966 kubelet[2477]: E0905 14:30:07.986943 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.7:6443: connect: connection refused Sep 5 14:30:07.999322 containerd[1548]: time="2024-09-05T14:30:07.999290182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054.1.0-a-f4c57b7dbd,Uid:daa97420ffa5640eb66c3fec4cd9a7d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"16be3fea589f7061def982ace6994524469b28a77328d2b5ff68feda2646a9bf\"" Sep 5 14:30:08.000695 containerd[1548]: time="2024-09-05T14:30:08.000673905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054.1.0-a-f4c57b7dbd,Uid:c73ce240b0735a2a014ddc28a4f7a62c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c5dc6c3f69345e9fa6ba70d4478deb17b86b0b959b545b868766d812207f416\"" Sep 5 14:30:08.000767 containerd[1548]: time="2024-09-05T14:30:08.000726266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd,Uid:15bd4b1ec2257f093a54bc779c9f519d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dcf46b0333d69a4d0c7048a260881f9c5cac363e6f2b191d6eb1bd6d6503e80\"" Sep 5 14:30:08.001871 containerd[1548]: time="2024-09-05T14:30:08.001857833Z" level=info msg="CreateContainer within sandbox \"16be3fea589f7061def982ace6994524469b28a77328d2b5ff68feda2646a9bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 14:30:08.001921 containerd[1548]: time="2024-09-05T14:30:08.001859290Z" level=info msg="CreateContainer within sandbox \"3c5dc6c3f69345e9fa6ba70d4478deb17b86b0b959b545b868766d812207f416\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 14:30:08.002004 containerd[1548]: time="2024-09-05T14:30:08.001991077Z" level=info msg="CreateContainer within sandbox \"3dcf46b0333d69a4d0c7048a260881f9c5cac363e6f2b191d6eb1bd6d6503e80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 14:30:08.007961 containerd[1548]: time="2024-09-05T14:30:08.007919733Z" level=info msg="CreateContainer within sandbox \"3c5dc6c3f69345e9fa6ba70d4478deb17b86b0b959b545b868766d812207f416\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"04e859d6964ade6c472de08c3a85d9e94b5757c66132402322e31072625815fc\"" Sep 5 14:30:08.008202 containerd[1548]: time="2024-09-05T14:30:08.008163059Z" level=info msg="StartContainer for \"04e859d6964ade6c472de08c3a85d9e94b5757c66132402322e31072625815fc\"" Sep 5 14:30:08.011685 containerd[1548]: time="2024-09-05T14:30:08.011629552Z" level=info msg="CreateContainer within sandbox \"3dcf46b0333d69a4d0c7048a260881f9c5cac363e6f2b191d6eb1bd6d6503e80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1491a4cadb4d19601148d8eab36a1aa2aaffeea65269f4cc8e1b996a009d028e\"" Sep 5 14:30:08.011976 containerd[1548]: time="2024-09-05T14:30:08.011919056Z" level=info msg="StartContainer for \"1491a4cadb4d19601148d8eab36a1aa2aaffeea65269f4cc8e1b996a009d028e\"" Sep 5 14:30:08.012040 
containerd[1548]: time="2024-09-05T14:30:08.012026470Z" level=info msg="CreateContainer within sandbox \"16be3fea589f7061def982ace6994524469b28a77328d2b5ff68feda2646a9bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c2e77fb8dcab62cc74b988bebfb0716fc4bdbb8c3f64f44ee284b361d84bc39\"" Sep 5 14:30:08.012192 containerd[1548]: time="2024-09-05T14:30:08.012182634Z" level=info msg="StartContainer for \"2c2e77fb8dcab62cc74b988bebfb0716fc4bdbb8c3f64f44ee284b361d84bc39\"" Sep 5 14:30:08.030612 systemd[1]: Started cri-containerd-04e859d6964ade6c472de08c3a85d9e94b5757c66132402322e31072625815fc.scope - libcontainer container 04e859d6964ade6c472de08c3a85d9e94b5757c66132402322e31072625815fc. Sep 5 14:30:08.032706 systemd[1]: Started cri-containerd-1491a4cadb4d19601148d8eab36a1aa2aaffeea65269f4cc8e1b996a009d028e.scope - libcontainer container 1491a4cadb4d19601148d8eab36a1aa2aaffeea65269f4cc8e1b996a009d028e. Sep 5 14:30:08.033260 systemd[1]: Started cri-containerd-2c2e77fb8dcab62cc74b988bebfb0716fc4bdbb8c3f64f44ee284b361d84bc39.scope - libcontainer container 2c2e77fb8dcab62cc74b988bebfb0716fc4bdbb8c3f64f44ee284b361d84bc39. Sep 5 14:30:08.053662 containerd[1548]: time="2024-09-05T14:30:08.053636495Z" level=info msg="StartContainer for \"04e859d6964ade6c472de08c3a85d9e94b5757c66132402322e31072625815fc\" returns successfully" Sep 5 14:30:08.055093 containerd[1548]: time="2024-09-05T14:30:08.055075677Z" level=info msg="StartContainer for \"1491a4cadb4d19601148d8eab36a1aa2aaffeea65269f4cc8e1b996a009d028e\" returns successfully" Sep 5 14:30:08.055886 containerd[1548]: time="2024-09-05T14:30:08.055871129Z" level=info msg="StartContainer for \"2c2e77fb8dcab62cc74b988bebfb0716fc4bdbb8c3f64f44ee284b361d84bc39\" returns successfully" Sep 5 14:30:08.255301 kubelet[2477]: I0905 14:30:08.255241 2477 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:08.867090 kubelet[2477]: E0905 14:30:08.866604 2477 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4054.1.0-a-f4c57b7dbd\" not found" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:08.977693 kubelet[2477]: I0905 14:30:08.977597 2477 kubelet_node_status.go:73] "Successfully registered node" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:08.989401 kubelet[2477]: E0905 14:30:08.989340 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.090174 kubelet[2477]: E0905 14:30:09.090107 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.190846 kubelet[2477]: E0905 14:30:09.190633 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.290874 kubelet[2477]: E0905 14:30:09.290770 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.391008 kubelet[2477]: E0905 14:30:09.390910 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.492268 kubelet[2477]: E0905 14:30:09.492056 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.593186 kubelet[2477]: E0905 14:30:09.593082 2477 kubelet_node_status.go:458] "Error getting the current node 
from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.694182 kubelet[2477]: E0905 14:30:09.694085 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.795067 kubelet[2477]: E0905 14:30:09.794852 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.895098 kubelet[2477]: E0905 14:30:09.895006 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:09.995861 kubelet[2477]: E0905 14:30:09.995797 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:10.096888 kubelet[2477]: E0905 14:30:10.096676 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:10.197861 kubelet[2477]: E0905 14:30:10.197763 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:10.298897 kubelet[2477]: E0905 14:30:10.298788 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:10.399251 kubelet[2477]: E0905 14:30:10.399143 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:10.499396 kubelet[2477]: E0905 14:30:10.499343 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:10.600597 kubelet[2477]: E0905 14:30:10.600512 2477 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4054.1.0-a-f4c57b7dbd\" not found" Sep 5 14:30:11.682705 kubelet[2477]: I0905 14:30:11.682655 2477 apiserver.go:52] "Watching apiserver" Sep 5 14:30:11.683565 systemd[1]: Reloading requested from client PID 2795 ('systemctl') (unit session-9.scope)... Sep 5 14:30:11.683595 systemd[1]: Reloading... Sep 5 14:30:11.688166 kubelet[2477]: I0905 14:30:11.688114 2477 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 5 14:30:11.744299 zram_generator::config[2832]: No configuration found. Sep 5 14:30:11.806892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 14:30:11.875817 systemd[1]: Reloading finished in 191 ms. Sep 5 14:30:11.903614 kubelet[2477]: I0905 14:30:11.903599 2477 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 14:30:11.903655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 14:30:11.913780 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 14:30:11.913913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 14:30:11.935649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 14:30:12.133682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
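The long run of "node \"ci-4054.1.0-a-f4c57b7dbd\" not found" errors above comes from the kubelet's node lister (its informer cache) not yet containing the just-registered Node object. A client-go sketch that fetches the Node directly from the API server and treats not-found as "not registered yet"; the kubeconfig path is an assumption for illustration, the node name comes from the log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; not shown in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"ci-4054.1.0-a-f4c57b7dbd", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// Same condition the node status and eviction loops report above.
			time.Sleep(2 * time.Second)
			continue
		}
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
		return
	}
}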
Sep 5 14:30:12.135941 (kubelet)[2893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 14:30:12.162005 kubelet[2893]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 14:30:12.162005 kubelet[2893]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 14:30:12.162005 kubelet[2893]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 14:30:12.162262 kubelet[2893]: I0905 14:30:12.162035 2893 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 14:30:12.164677 kubelet[2893]: I0905 14:30:12.164637 2893 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 5 14:30:12.164677 kubelet[2893]: I0905 14:30:12.164650 2893 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 14:30:12.164809 kubelet[2893]: I0905 14:30:12.164771 2893 server.go:895] "Client rotation is on, will bootstrap in background" Sep 5 14:30:12.165737 kubelet[2893]: I0905 14:30:12.165701 2893 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 14:30:12.166297 kubelet[2893]: I0905 14:30:12.166280 2893 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 14:30:12.174896 kubelet[2893]: I0905 14:30:12.174857 2893 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 14:30:12.175005 kubelet[2893]: I0905 14:30:12.174962 2893 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 14:30:12.175084 kubelet[2893]: I0905 14:30:12.175051 2893 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 5 14:30:12.175084 kubelet[2893]: I0905 14:30:12.175062 2893 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 14:30:12.175084 kubelet[2893]: I0905 14:30:12.175068 2893 container_manager_linux.go:301] "Creating device plugin manager" Sep 5 14:30:12.175194 kubelet[2893]: I0905 14:30:12.175088 2893 state_mem.go:36] "Initialized new in-memory state store" Sep 5 14:30:12.175194 kubelet[2893]: I0905 14:30:12.175134 2893 kubelet.go:393] "Attempting to sync node with API server" Sep 5 14:30:12.175194 kubelet[2893]: I0905 14:30:12.175141 2893 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 14:30:12.175194 kubelet[2893]: I0905 14:30:12.175154 2893 kubelet.go:309] "Adding apiserver pod source" Sep 5 14:30:12.175194 kubelet[2893]: I0905 14:30:12.175162 2893 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 14:30:12.175612 kubelet[2893]: I0905 14:30:12.175562 2893 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 5 14:30:12.175865 kubelet[2893]: I0905 14:30:12.175835 2893 server.go:1232] "Started kubelet" Sep 5 14:30:12.175906 kubelet[2893]: I0905 14:30:12.175897 2893 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 14:30:12.175906 kubelet[2893]: I0905 14:30:12.175901 2893 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 5 14:30:12.176018 kubelet[2893]: I0905 14:30:12.176009 2893 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 14:30:12.176213 kubelet[2893]: E0905 14:30:12.176204 2893 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 5 14:30:12.176242 kubelet[2893]: E0905 14:30:12.176218 2893 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 14:30:12.176474 kubelet[2893]: I0905 14:30:12.176464 2893 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 14:30:12.176594 kubelet[2893]: I0905 14:30:12.176502 2893 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 5 14:30:12.176594 kubelet[2893]: I0905 14:30:12.176529 2893 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 5 14:30:12.176664 kubelet[2893]: I0905 14:30:12.176620 2893 reconciler_new.go:29] "Reconciler: start to sync state" Sep 5 14:30:12.176737 kubelet[2893]: I0905 14:30:12.176727 2893 server.go:462] "Adding debug handlers to kubelet server" Sep 5 14:30:12.181994 kubelet[2893]: I0905 14:30:12.181978 2893 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 14:30:12.182577 kubelet[2893]: I0905 14:30:12.182566 2893 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 14:30:12.182577 kubelet[2893]: I0905 14:30:12.182579 2893 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 14:30:12.182654 kubelet[2893]: I0905 14:30:12.182590 2893 kubelet.go:2303] "Starting kubelet main sync loop" Sep 5 14:30:12.182654 kubelet[2893]: E0905 14:30:12.182617 2893 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 14:30:12.198189 kubelet[2893]: I0905 14:30:12.198139 2893 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 14:30:12.198189 kubelet[2893]: I0905 14:30:12.198151 2893 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 14:30:12.198189 kubelet[2893]: I0905 14:30:12.198161 2893 state_mem.go:36] "Initialized new in-memory state store" Sep 5 14:30:12.198281 kubelet[2893]: I0905 14:30:12.198252 2893 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 14:30:12.198281 kubelet[2893]: I0905 14:30:12.198265 2893 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 14:30:12.198281 kubelet[2893]: I0905 14:30:12.198268 2893 policy_none.go:49] "None policy: Start" Sep 5 14:30:12.198715 kubelet[2893]: I0905 14:30:12.198674 2893 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 5 14:30:12.198715 kubelet[2893]: I0905 14:30:12.198694 2893 state_mem.go:35] "Initializing new in-memory state store" Sep 5 14:30:12.198872 kubelet[2893]: I0905 14:30:12.198841 2893 state_mem.go:75] "Updated machine memory state" Sep 5 14:30:12.200694 kubelet[2893]: I0905 14:30:12.200676 2893 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 14:30:12.200852 kubelet[2893]: I0905 14:30:12.200828 2893 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 14:30:12.278714 kubelet[2893]: I0905 14:30:12.278699 2893 kubelet_node_status.go:70] "Attempting to register node" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.282153 kubelet[2893]: I0905 14:30:12.282142 2893 kubelet_node_status.go:108] "Node was previously registered" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.282209 kubelet[2893]: I0905 14:30:12.282188 2893 kubelet_node_status.go:73] "Successfully registered node" node="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.282754 kubelet[2893]: I0905 
14:30:12.282744 2893 topology_manager.go:215] "Topology Admit Handler" podUID="daa97420ffa5640eb66c3fec4cd9a7d5" podNamespace="kube-system" podName="kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.282809 kubelet[2893]: I0905 14:30:12.282801 2893 topology_manager.go:215] "Topology Admit Handler" podUID="15bd4b1ec2257f093a54bc779c9f519d" podNamespace="kube-system" podName="kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.282835 kubelet[2893]: I0905 14:30:12.282827 2893 topology_manager.go:215] "Topology Admit Handler" podUID="c73ce240b0735a2a014ddc28a4f7a62c" podNamespace="kube-system" podName="kube-scheduler-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.287730 kubelet[2893]: W0905 14:30:12.287715 2893 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 14:30:12.287803 kubelet[2893]: W0905 14:30:12.287753 2893 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 14:30:12.288267 kubelet[2893]: W0905 14:30:12.288259 2893 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 14:30:12.478510 kubelet[2893]: I0905 14:30:12.478407 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa97420ffa5640eb66c3fec4cd9a7d5-k8s-certs\") pod \"kube-apiserver-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"daa97420ffa5640eb66c3fec4cd9a7d5\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478510 kubelet[2893]: I0905 14:30:12.478434 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa97420ffa5640eb66c3fec4cd9a7d5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"daa97420ffa5640eb66c3fec4cd9a7d5\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478510 kubelet[2893]: I0905 14:30:12.478448 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-ca-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478510 kubelet[2893]: I0905 14:30:12.478459 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-flexvolume-dir\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478510 kubelet[2893]: I0905 14:30:12.478471 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-k8s-certs\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478681 kubelet[2893]: I0905 14:30:12.478483 2893 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478681 kubelet[2893]: I0905 14:30:12.478512 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c73ce240b0735a2a014ddc28a4f7a62c-kubeconfig\") pod \"kube-scheduler-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"c73ce240b0735a2a014ddc28a4f7a62c\") " pod="kube-system/kube-scheduler-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478681 kubelet[2893]: I0905 14:30:12.478539 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa97420ffa5640eb66c3fec4cd9a7d5-ca-certs\") pod \"kube-apiserver-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"daa97420ffa5640eb66c3fec4cd9a7d5\") " pod="kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:12.478681 kubelet[2893]: I0905 14:30:12.478571 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15bd4b1ec2257f093a54bc779c9f519d-kubeconfig\") pod \"kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd\" (UID: \"15bd4b1ec2257f093a54bc779c9f519d\") " pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:13.176228 kubelet[2893]: I0905 14:30:13.176178 2893 apiserver.go:52] "Watching apiserver" Sep 5 14:30:13.195884 kubelet[2893]: I0905 14:30:13.195865 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054.1.0-a-f4c57b7dbd" podStartSLOduration=1.195833181 podCreationTimestamp="2024-09-05 14:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 14:30:13.195753463 +0000 UTC m=+1.057912101" watchObservedRunningTime="2024-09-05 14:30:13.195833181 +0000 UTC m=+1.057991822" Sep 5 14:30:13.199331 kubelet[2893]: I0905 14:30:13.199316 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054.1.0-a-f4c57b7dbd" podStartSLOduration=1.199295911 podCreationTimestamp="2024-09-05 14:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 14:30:13.199182462 +0000 UTC m=+1.061341100" watchObservedRunningTime="2024-09-05 14:30:13.199295911 +0000 UTC m=+1.061454547" Sep 5 14:30:13.205454 kubelet[2893]: I0905 14:30:13.205434 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054.1.0-a-f4c57b7dbd" podStartSLOduration=1.205408332 podCreationTimestamp="2024-09-05 14:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 14:30:13.201943265 +0000 UTC m=+1.064101904" watchObservedRunningTime="2024-09-05 14:30:13.205408332 +0000 UTC m=+1.067566970" Sep 5 14:30:13.278615 kubelet[2893]: I0905 14:30:13.277534 2893 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 5 14:30:16.029857 
sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 5 14:30:16.030772 sshd[1762]: pam_unix(sshd:session): session closed for user core Sep 5 14:30:16.032123 systemd[1]: sshd@6-147.75.90.7:22-139.178.89.65:58592.service: Deactivated successfully. Sep 5 14:30:16.032991 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 14:30:16.033086 systemd[1]: session-9.scope: Consumed 3.539s CPU time, 148.2M memory peak, 0B memory swap peak. Sep 5 14:30:16.033672 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit. Sep 5 14:30:16.034155 systemd-logind[1530]: Removed session 9. Sep 5 14:30:24.209721 kubelet[2893]: I0905 14:30:24.209692 2893 topology_manager.go:215] "Topology Admit Handler" podUID="99fa5c7a-b849-4304-bcab-4d51ab965641" podNamespace="kube-system" podName="kube-proxy-czp5l" Sep 5 14:30:24.213608 systemd[1]: Created slice kubepods-besteffort-pod99fa5c7a_b849_4304_bcab_4d51ab965641.slice - libcontainer container kubepods-besteffort-pod99fa5c7a_b849_4304_bcab_4d51ab965641.slice. Sep 5 14:30:24.268903 kubelet[2893]: I0905 14:30:24.268853 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99fa5c7a-b849-4304-bcab-4d51ab965641-lib-modules\") pod \"kube-proxy-czp5l\" (UID: \"99fa5c7a-b849-4304-bcab-4d51ab965641\") " pod="kube-system/kube-proxy-czp5l" Sep 5 14:30:24.268903 kubelet[2893]: I0905 14:30:24.268886 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99fa5c7a-b849-4304-bcab-4d51ab965641-xtables-lock\") pod \"kube-proxy-czp5l\" (UID: \"99fa5c7a-b849-4304-bcab-4d51ab965641\") " pod="kube-system/kube-proxy-czp5l" Sep 5 14:30:24.268903 kubelet[2893]: I0905 14:30:24.268904 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99fa5c7a-b849-4304-bcab-4d51ab965641-kube-proxy\") pod \"kube-proxy-czp5l\" (UID: \"99fa5c7a-b849-4304-bcab-4d51ab965641\") " pod="kube-system/kube-proxy-czp5l" Sep 5 14:30:24.269065 kubelet[2893]: I0905 14:30:24.268926 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tf2q\" (UniqueName: \"kubernetes.io/projected/99fa5c7a-b849-4304-bcab-4d51ab965641-kube-api-access-6tf2q\") pod \"kube-proxy-czp5l\" (UID: \"99fa5c7a-b849-4304-bcab-4d51ab965641\") " pod="kube-system/kube-proxy-czp5l" Sep 5 14:30:24.281498 kubelet[2893]: I0905 14:30:24.281474 2893 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 14:30:24.281863 containerd[1548]: time="2024-09-05T14:30:24.281802912Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 14:30:24.282203 kubelet[2893]: I0905 14:30:24.281974 2893 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 14:30:24.530056 containerd[1548]: time="2024-09-05T14:30:24.529914964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czp5l,Uid:99fa5c7a-b849-4304-bcab-4d51ab965641,Namespace:kube-system,Attempt:0,}" Sep 5 14:30:24.541197 containerd[1548]: time="2024-09-05T14:30:24.541085637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:24.541197 containerd[1548]: time="2024-09-05T14:30:24.541149521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:24.541197 containerd[1548]: time="2024-09-05T14:30:24.541158753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:24.541197 containerd[1548]: time="2024-09-05T14:30:24.541196189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:24.559469 systemd[1]: Started cri-containerd-beffba0bd5c3daf9c35f09bcf2808f7c547cdb4a774d3eb87131d07972d29be4.scope - libcontainer container beffba0bd5c3daf9c35f09bcf2808f7c547cdb4a774d3eb87131d07972d29be4. Sep 5 14:30:24.573143 containerd[1548]: time="2024-09-05T14:30:24.573081670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czp5l,Uid:99fa5c7a-b849-4304-bcab-4d51ab965641,Namespace:kube-system,Attempt:0,} returns sandbox id \"beffba0bd5c3daf9c35f09bcf2808f7c547cdb4a774d3eb87131d07972d29be4\"" Sep 5 14:30:24.575519 containerd[1548]: time="2024-09-05T14:30:24.575469119Z" level=info msg="CreateContainer within sandbox \"beffba0bd5c3daf9c35f09bcf2808f7c547cdb4a774d3eb87131d07972d29be4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 14:30:24.583829 containerd[1548]: time="2024-09-05T14:30:24.583778749Z" level=info msg="CreateContainer within sandbox \"beffba0bd5c3daf9c35f09bcf2808f7c547cdb4a774d3eb87131d07972d29be4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"af109a50f675cf87833d871f68ba66848dee1e92c45c379c5f1019eeac0c0fd1\"" Sep 5 14:30:24.584168 containerd[1548]: time="2024-09-05T14:30:24.584156296Z" level=info msg="StartContainer for \"af109a50f675cf87833d871f68ba66848dee1e92c45c379c5f1019eeac0c0fd1\"" Sep 5 14:30:24.606537 systemd[1]: Started cri-containerd-af109a50f675cf87833d871f68ba66848dee1e92c45c379c5f1019eeac0c0fd1.scope - libcontainer container af109a50f675cf87833d871f68ba66848dee1e92c45c379c5f1019eeac0c0fd1. Sep 5 14:30:24.624457 containerd[1548]: time="2024-09-05T14:30:24.624394697Z" level=info msg="StartContainer for \"af109a50f675cf87833d871f68ba66848dee1e92c45c379c5f1019eeac0c0fd1\" returns successfully" Sep 5 14:30:25.105946 kubelet[2893]: I0905 14:30:25.105890 2893 topology_manager.go:215] "Topology Admit Handler" podUID="30f34ffa-ccc0-4cc1-a93a-44514b8466e3" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-7sjpr" Sep 5 14:30:25.115023 systemd[1]: Created slice kubepods-besteffort-pod30f34ffa_ccc0_4cc1_a93a_44514b8466e3.slice - libcontainer container kubepods-besteffort-pod30f34ffa_ccc0_4cc1_a93a_44514b8466e3.slice. 
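The "Updating runtime config through cri with podcidr" / "Updating Pod CIDR" entries above record the kubelet handing the node's pod CIDR (192.168.0.0/24) to the container runtime through the CRI UpdateRuntimeConfig RPC. A sketch of that call for reference only, since the kubelet already performs it itself; the socket path is again the assumed containerd default:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// CIDR value copied from the kubelet log entries above.
	_, err = client.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("runtime pod CIDR updated")
}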
Sep 5 14:30:25.175051 kubelet[2893]: I0905 14:30:25.174980 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30f34ffa-ccc0-4cc1-a93a-44514b8466e3-var-lib-calico\") pod \"tigera-operator-5d56685c77-7sjpr\" (UID: \"30f34ffa-ccc0-4cc1-a93a-44514b8466e3\") " pod="tigera-operator/tigera-operator-5d56685c77-7sjpr" Sep 5 14:30:25.175364 kubelet[2893]: I0905 14:30:25.175174 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sr8d\" (UniqueName: \"kubernetes.io/projected/30f34ffa-ccc0-4cc1-a93a-44514b8466e3-kube-api-access-8sr8d\") pod \"tigera-operator-5d56685c77-7sjpr\" (UID: \"30f34ffa-ccc0-4cc1-a93a-44514b8466e3\") " pod="tigera-operator/tigera-operator-5d56685c77-7sjpr" Sep 5 14:30:25.398753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760830217.mount: Deactivated successfully. Sep 5 14:30:25.419661 containerd[1548]: time="2024-09-05T14:30:25.419541872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-7sjpr,Uid:30f34ffa-ccc0-4cc1-a93a-44514b8466e3,Namespace:tigera-operator,Attempt:0,}" Sep 5 14:30:25.430772 containerd[1548]: time="2024-09-05T14:30:25.430685398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:25.431027 containerd[1548]: time="2024-09-05T14:30:25.431010184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:25.431027 containerd[1548]: time="2024-09-05T14:30:25.431021118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:25.431085 containerd[1548]: time="2024-09-05T14:30:25.431063692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:25.456554 systemd[1]: Started cri-containerd-f11cb1167ac4b27ef2751be1b52ea121093b0a83ce9e63c76cd620b36e44d23d.scope - libcontainer container f11cb1167ac4b27ef2751be1b52ea121093b0a83ce9e63c76cd620b36e44d23d. Sep 5 14:30:25.488133 containerd[1548]: time="2024-09-05T14:30:25.488067129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-7sjpr,Uid:30f34ffa-ccc0-4cc1-a93a-44514b8466e3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f11cb1167ac4b27ef2751be1b52ea121093b0a83ce9e63c76cd620b36e44d23d\"" Sep 5 14:30:25.489071 containerd[1548]: time="2024-09-05T14:30:25.489050773Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 5 14:30:25.624646 kubelet[2893]: I0905 14:30:25.624600 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-czp5l" podStartSLOduration=1.62457991 podCreationTimestamp="2024-09-05 14:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 14:30:25.232564197 +0000 UTC m=+13.094722904" watchObservedRunningTime="2024-09-05 14:30:25.62457991 +0000 UTC m=+13.486738547" Sep 5 14:30:26.181349 update_engine[1535]: I0905 14:30:26.181227 1535 update_attempter.cc:509] Updating boot flags... 
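For kube-proxy-czp5l the logged podStartSLOduration=1.62457991 is exactly watchObservedRunningTime minus podCreationTimestamp, and for pods that had to pull an image (the tigera-operator pod further down) the figure appears to additionally exclude the firstStartedPulling-to-lastFinishedPulling window. A small sketch that reproduces the kube-proxy number from the timestamps in the log (monotonic "m=+..." suffixes dropped, error handling elided):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Wall-clock timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-09-05 14:30:24 +0000 UTC")
	watched, _ := time.Parse(layout, "2024-09-05 14:30:25.62457991 +0000 UTC")

	// Prints 1.62457991s, matching the logged podStartSLOduration.
	fmt.Println("kube-proxy startup SLO duration:", watched.Sub(created))
}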
Sep 5 14:30:26.215297 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3318) Sep 5 14:30:26.241323 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3318) Sep 5 14:30:26.267294 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 37 scanned by (udev-worker) (3318) Sep 5 14:30:26.811989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955482469.mount: Deactivated successfully. Sep 5 14:30:27.012343 containerd[1548]: time="2024-09-05T14:30:27.012320380Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:27.012569 containerd[1548]: time="2024-09-05T14:30:27.012514754Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136513" Sep 5 14:30:27.012842 containerd[1548]: time="2024-09-05T14:30:27.012827241Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:27.013973 containerd[1548]: time="2024-09-05T14:30:27.013961081Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:27.014784 containerd[1548]: time="2024-09-05T14:30:27.014764859Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.525690486s" Sep 5 14:30:27.014784 containerd[1548]: time="2024-09-05T14:30:27.014780746Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 5 14:30:27.015614 containerd[1548]: time="2024-09-05T14:30:27.015601975Z" level=info msg="CreateContainer within sandbox \"f11cb1167ac4b27ef2751be1b52ea121093b0a83ce9e63c76cd620b36e44d23d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 14:30:27.019744 containerd[1548]: time="2024-09-05T14:30:27.019692460Z" level=info msg="CreateContainer within sandbox \"f11cb1167ac4b27ef2751be1b52ea121093b0a83ce9e63c76cd620b36e44d23d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8c5cb628f5c7250617b4c25eb807a026c9559dba4208b3597f5bc9dda6d6e880\"" Sep 5 14:30:27.019942 containerd[1548]: time="2024-09-05T14:30:27.019907023Z" level=info msg="StartContainer for \"8c5cb628f5c7250617b4c25eb807a026c9559dba4208b3597f5bc9dda6d6e880\"" Sep 5 14:30:27.042800 systemd[1]: Started cri-containerd-8c5cb628f5c7250617b4c25eb807a026c9559dba4208b3597f5bc9dda6d6e880.scope - libcontainer container 8c5cb628f5c7250617b4c25eb807a026c9559dba4208b3597f5bc9dda6d6e880. 
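The operator image pull just above reports 22136513 bytes read and a pull time of 1.525690486s, which works out to roughly 14 MB/s from quay.io. A trivial check of that arithmetic:

package main

import "fmt"

func main() {
	// Values copied from the containerd entries for quay.io/tigera/operator:v1.34.3.
	const bytesRead = 22136513      // "active requests=0, bytes read=22136513"
	const pullSeconds = 1.525690486 // "... in 1.525690486s"

	rate := float64(bytesRead) / pullSeconds
	fmt.Printf("effective pull rate: %.1f MB/s\n", rate/1e6)
}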
Sep 5 14:30:27.095073 containerd[1548]: time="2024-09-05T14:30:27.094960120Z" level=info msg="StartContainer for \"8c5cb628f5c7250617b4c25eb807a026c9559dba4208b3597f5bc9dda6d6e880\" returns successfully" Sep 5 14:30:27.239081 kubelet[2893]: I0905 14:30:27.238985 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-7sjpr" podStartSLOduration=0.712696896 podCreationTimestamp="2024-09-05 14:30:25 +0000 UTC" firstStartedPulling="2024-09-05 14:30:25.488752975 +0000 UTC m=+13.350911621" lastFinishedPulling="2024-09-05 14:30:27.014946763 +0000 UTC m=+14.877105401" observedRunningTime="2024-09-05 14:30:27.238709901 +0000 UTC m=+15.100868612" watchObservedRunningTime="2024-09-05 14:30:27.238890676 +0000 UTC m=+15.101049364" Sep 5 14:30:29.919880 kubelet[2893]: I0905 14:30:29.919833 2893 topology_manager.go:215] "Topology Admit Handler" podUID="0d4545ae-536b-4870-aaca-ae5eb4b59382" podNamespace="calico-system" podName="calico-typha-675fd9f996-nw8j4" Sep 5 14:30:29.928659 systemd[1]: Created slice kubepods-besteffort-pod0d4545ae_536b_4870_aaca_ae5eb4b59382.slice - libcontainer container kubepods-besteffort-pod0d4545ae_536b_4870_aaca_ae5eb4b59382.slice. Sep 5 14:30:29.940958 kubelet[2893]: I0905 14:30:29.940932 2893 topology_manager.go:215] "Topology Admit Handler" podUID="64d4147d-f3e0-47f6-8d30-48f060c6bb14" podNamespace="calico-system" podName="calico-node-9jdf9" Sep 5 14:30:29.944722 systemd[1]: Created slice kubepods-besteffort-pod64d4147d_f3e0_47f6_8d30_48f060c6bb14.slice - libcontainer container kubepods-besteffort-pod64d4147d_f3e0_47f6_8d30_48f060c6bb14.slice. Sep 5 14:30:30.011626 kubelet[2893]: I0905 14:30:30.011555 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64d4147d-f3e0-47f6-8d30-48f060c6bb14-node-certs\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.011987 kubelet[2893]: I0905 14:30:30.011783 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d4545ae-536b-4870-aaca-ae5eb4b59382-typha-certs\") pod \"calico-typha-675fd9f996-nw8j4\" (UID: \"0d4545ae-536b-4870-aaca-ae5eb4b59382\") " pod="calico-system/calico-typha-675fd9f996-nw8j4" Sep 5 14:30:30.011987 kubelet[2893]: I0905 14:30:30.011950 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtxqr\" (UniqueName: \"kubernetes.io/projected/0d4545ae-536b-4870-aaca-ae5eb4b59382-kube-api-access-xtxqr\") pod \"calico-typha-675fd9f996-nw8j4\" (UID: \"0d4545ae-536b-4870-aaca-ae5eb4b59382\") " pod="calico-system/calico-typha-675fd9f996-nw8j4" Sep 5 14:30:30.012364 kubelet[2893]: I0905 14:30:30.012085 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-lib-modules\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.012364 kubelet[2893]: I0905 14:30:30.012191 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-xtables-lock\") pod \"calico-node-9jdf9\" (UID: 
\"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.012682 kubelet[2893]: I0905 14:30:30.012379 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d4147d-f3e0-47f6-8d30-48f060c6bb14-tigera-ca-bundle\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.012682 kubelet[2893]: I0905 14:30:30.012498 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-var-lib-calico\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.012682 kubelet[2893]: I0905 14:30:30.012599 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-cni-net-dir\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.013139 kubelet[2893]: I0905 14:30:30.012709 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-cni-bin-dir\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.013139 kubelet[2893]: I0905 14:30:30.012805 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-cni-log-dir\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.013139 kubelet[2893]: I0905 14:30:30.012909 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4545ae-536b-4870-aaca-ae5eb4b59382-tigera-ca-bundle\") pod \"calico-typha-675fd9f996-nw8j4\" (UID: \"0d4545ae-536b-4870-aaca-ae5eb4b59382\") " pod="calico-system/calico-typha-675fd9f996-nw8j4" Sep 5 14:30:30.013139 kubelet[2893]: I0905 14:30:30.013009 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4dsf\" (UniqueName: \"kubernetes.io/projected/64d4147d-f3e0-47f6-8d30-48f060c6bb14-kube-api-access-c4dsf\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.013139 kubelet[2893]: I0905 14:30:30.013126 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-policysync\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.013801 kubelet[2893]: I0905 14:30:30.013195 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-var-run-calico\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " 
pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.013801 kubelet[2893]: I0905 14:30:30.013372 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64d4147d-f3e0-47f6-8d30-48f060c6bb14-flexvol-driver-host\") pod \"calico-node-9jdf9\" (UID: \"64d4147d-f3e0-47f6-8d30-48f060c6bb14\") " pod="calico-system/calico-node-9jdf9" Sep 5 14:30:30.070473 kubelet[2893]: I0905 14:30:30.070405 2893 topology_manager.go:215] "Topology Admit Handler" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" podNamespace="calico-system" podName="csi-node-driver-qv5zz" Sep 5 14:30:30.071441 kubelet[2893]: E0905 14:30:30.071356 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qv5zz" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" Sep 5 14:30:30.114496 kubelet[2893]: I0905 14:30:30.114468 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06f180ad-ca06-48f1-b9ab-6c62014854a5-socket-dir\") pod \"csi-node-driver-qv5zz\" (UID: \"06f180ad-ca06-48f1-b9ab-6c62014854a5\") " pod="calico-system/csi-node-driver-qv5zz" Sep 5 14:30:30.114636 kubelet[2893]: I0905 14:30:30.114577 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06f180ad-ca06-48f1-b9ab-6c62014854a5-kubelet-dir\") pod \"csi-node-driver-qv5zz\" (UID: \"06f180ad-ca06-48f1-b9ab-6c62014854a5\") " pod="calico-system/csi-node-driver-qv5zz" Sep 5 14:30:30.114713 kubelet[2893]: I0905 14:30:30.114695 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06f180ad-ca06-48f1-b9ab-6c62014854a5-registration-dir\") pod \"csi-node-driver-qv5zz\" (UID: \"06f180ad-ca06-48f1-b9ab-6c62014854a5\") " pod="calico-system/csi-node-driver-qv5zz" Sep 5 14:30:30.115029 kubelet[2893]: E0905 14:30:30.115011 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 14:30:30.115089 kubelet[2893]: W0905 14:30:30.115030 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 14:30:30.115089 kubelet[2893]: E0905 14:30:30.115060 2893 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 14:30:30.115271 kubelet[2893]: E0905 14:30:30.115260 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 14:30:30.115323 kubelet[2893]: W0905 14:30:30.115273 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 14:30:30.115323 kubelet[2893]: E0905 14:30:30.115313 2893 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 5 14:30:30.118120 kubelet[2893]: I0905 14:30:30.118056 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4jx\" (UniqueName: \"kubernetes.io/projected/06f180ad-ca06-48f1-b9ab-6c62014854a5-kube-api-access-5j4jx\") pod \"csi-node-driver-qv5zz\" (UID: \"06f180ad-ca06-48f1-b9ab-6c62014854a5\") " pod="calico-system/csi-node-driver-qv5zz"
Sep 5 14:30:30.121922 kubelet[2893]: I0905 14:30:30.121890 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06f180ad-ca06-48f1-b9ab-6c62014854a5-varrun\") pod \"csi-node-driver-qv5zz\" (UID: \"06f180ad-ca06-48f1-b9ab-6c62014854a5\") " pod="calico-system/csi-node-driver-qv5zz"
Sep 5 14:30:30.132256 kubelet[2893]: E0905 14:30:30.132236 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 14:30:30.132256 kubelet[2893]: W0905 14:30:30.132252 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 14:30:30.132384 kubelet[2893]: E0905 14:30:30.132276 2893 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 14:30:30.223557 kubelet[2893]: E0905 14:30:30.223509 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 14:30:30.223557 kubelet[2893]: W0905 14:30:30.223527 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 14:30:30.223557 kubelet[2893]: E0905 14:30:30.223548 2893 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
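The recurring kubelet errors above come from the FlexVolume prober exec'ing a driver binary that is not installed (nodeagent~uds/uds) and then trying to JSON-decode its empty stdout. A minimal standalone Go sketch (not kubelet source; the bare "uds" name is illustrative) reproduces both error strings:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// "uds" stands in for the missing driver binary; it is looked up on $PATH,
	// which is where the "executable file not found in $PATH" message comes from.
	out, err := exec.Command("uds", "init").Output()
	fmt.Printf("driver call failed: %v, output: %q\n", err, out)

	// The prober then tries to decode the (empty) stdout as a JSON status,
	// which yields encoding/json's "unexpected end of JSON input".
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}
```

The flexvol-driver init container started later in the log (from the pod2daemon-flexvol image) is what normally installs that binary under the kubelet plugin directory, after which this probe error stops appearing.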
Sep 5 14:30:30.229197 kubelet[2893]: E0905 14:30:30.229186 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 14:30:30.229197 kubelet[2893]: W0905 14:30:30.229197 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 14:30:30.229276 kubelet[2893]: E0905 14:30:30.229210 2893 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 14:30:30.233628 containerd[1548]: time="2024-09-05T14:30:30.233584189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675fd9f996-nw8j4,Uid:0d4545ae-536b-4870-aaca-ae5eb4b59382,Namespace:calico-system,Attempt:0,}"
Sep 5 14:30:30.235485 kubelet[2893]: E0905 14:30:30.235474 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 14:30:30.235485 kubelet[2893]: W0905 14:30:30.235481 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 14:30:30.235967 kubelet[2893]: E0905 14:30:30.235493 2893 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 14:30:30.243397 containerd[1548]: time="2024-09-05T14:30:30.243350031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 14:30:30.243585 containerd[1548]: time="2024-09-05T14:30:30.243563562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 14:30:30.243615 containerd[1548]: time="2024-09-05T14:30:30.243580060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 14:30:30.243665 containerd[1548]: time="2024-09-05T14:30:30.243648825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 14:30:30.246911 containerd[1548]: time="2024-09-05T14:30:30.246883246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jdf9,Uid:64d4147d-f3e0-47f6-8d30-48f060c6bb14,Namespace:calico-system,Attempt:0,}"
Sep 5 14:30:30.255900 containerd[1548]: time="2024-09-05T14:30:30.255851059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 14:30:30.255900 containerd[1548]: time="2024-09-05T14:30:30.255883011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 14:30:30.255999 containerd[1548]: time="2024-09-05T14:30:30.255906513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 14:30:30.256195 containerd[1548]: time="2024-09-05T14:30:30.256147694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
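The two RunPodSandbox lines above print the CRI PodSandboxMetadata the kubelet passes to containerd for the calico-typha and calico-node pods. A minimal sketch of that request shape, assuming the k8s.io/cri-api module is available (a real call would also need a gRPC connection to containerd's CRI socket):

```go
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Field values copied from the calico-node RunPodSandbox line above.
	req := &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-9jdf9",
				Uid:       "64d4147d-f3e0-47f6-8d30-48f060c6bb14",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	}
	fmt.Printf("%+v\n", req.Config.Metadata)
}
```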
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:30.257445 systemd[1]: Started cri-containerd-f1d3cb4d99721b67161a9a15269acf2eea42a7c7ea99268cc624bdc4d9584fdf.scope - libcontainer container f1d3cb4d99721b67161a9a15269acf2eea42a7c7ea99268cc624bdc4d9584fdf. Sep 5 14:30:30.261978 systemd[1]: Started cri-containerd-c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b.scope - libcontainer container c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b. Sep 5 14:30:30.271900 containerd[1548]: time="2024-09-05T14:30:30.271846134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jdf9,Uid:64d4147d-f3e0-47f6-8d30-48f060c6bb14,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b\"" Sep 5 14:30:30.272679 containerd[1548]: time="2024-09-05T14:30:30.272668764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 5 14:30:30.281690 containerd[1548]: time="2024-09-05T14:30:30.281669742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675fd9f996-nw8j4,Uid:0d4545ae-536b-4870-aaca-ae5eb4b59382,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1d3cb4d99721b67161a9a15269acf2eea42a7c7ea99268cc624bdc4d9584fdf\"" Sep 5 14:30:31.183845 kubelet[2893]: E0905 14:30:31.183743 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qv5zz" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" Sep 5 14:30:31.893393 containerd[1548]: time="2024-09-05T14:30:31.893352000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:31.893733 containerd[1548]: time="2024-09-05T14:30:31.893482481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 5 14:30:31.894025 containerd[1548]: time="2024-09-05T14:30:31.894009554Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:31.895193 containerd[1548]: time="2024-09-05T14:30:31.895181122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:31.895754 containerd[1548]: time="2024-09-05T14:30:31.895736315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.623032228s" Sep 5 14:30:31.895831 containerd[1548]: time="2024-09-05T14:30:31.895782113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 5 14:30:31.896205 containerd[1548]: time="2024-09-05T14:30:31.896192556Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 5 14:30:31.896822 containerd[1548]: time="2024-09-05T14:30:31.896810594Z" level=info msg="CreateContainer within sandbox \"c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 14:30:31.910890 containerd[1548]: time="2024-09-05T14:30:31.910867381Z" level=info msg="CreateContainer within sandbox \"c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e\"" Sep 5 14:30:31.911213 containerd[1548]: time="2024-09-05T14:30:31.911202466Z" level=info msg="StartContainer for \"4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e\"" Sep 5 14:30:31.930419 systemd[1]: Started cri-containerd-4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e.scope - libcontainer container 4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e. Sep 5 14:30:31.943022 containerd[1548]: time="2024-09-05T14:30:31.942996666Z" level=info msg="StartContainer for \"4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e\" returns successfully" Sep 5 14:30:31.947963 systemd[1]: cri-containerd-4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e.scope: Deactivated successfully. Sep 5 14:30:32.122248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e-rootfs.mount: Deactivated successfully. Sep 5 14:30:32.197166 containerd[1548]: time="2024-09-05T14:30:32.197066057Z" level=info msg="shim disconnected" id=4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e namespace=k8s.io Sep 5 14:30:32.197166 containerd[1548]: time="2024-09-05T14:30:32.197099931Z" level=warning msg="cleaning up after shim disconnected" id=4b733f05fbd0943aeaf45ab76bc184182bf95c11151e36c4882fd9302b153f8e namespace=k8s.io Sep 5 14:30:32.197166 containerd[1548]: time="2024-09-05T14:30:32.197105882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 14:30:33.184259 kubelet[2893]: E0905 14:30:33.184201 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qv5zz" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" Sep 5 14:30:33.991140 containerd[1548]: time="2024-09-05T14:30:33.991084210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:33.991357 containerd[1548]: time="2024-09-05T14:30:33.991291134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 5 14:30:33.991663 containerd[1548]: time="2024-09-05T14:30:33.991621965Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:33.992527 containerd[1548]: time="2024-09-05T14:30:33.992487616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:33.993177 containerd[1548]: time="2024-09-05T14:30:33.993135415Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.096924766s" Sep 5 14:30:33.993177 containerd[1548]: time="2024-09-05T14:30:33.993149907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 5 14:30:33.993486 containerd[1548]: time="2024-09-05T14:30:33.993472940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 5 14:30:33.996630 containerd[1548]: time="2024-09-05T14:30:33.996584671Z" level=info msg="CreateContainer within sandbox \"f1d3cb4d99721b67161a9a15269acf2eea42a7c7ea99268cc624bdc4d9584fdf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 14:30:34.002819 containerd[1548]: time="2024-09-05T14:30:34.002775565Z" level=info msg="CreateContainer within sandbox \"f1d3cb4d99721b67161a9a15269acf2eea42a7c7ea99268cc624bdc4d9584fdf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1ce4d7fa2a41cbaba82996e8a53af49756707f9838568228de656047cd963e55\"" Sep 5 14:30:34.003063 containerd[1548]: time="2024-09-05T14:30:34.003031085Z" level=info msg="StartContainer for \"1ce4d7fa2a41cbaba82996e8a53af49756707f9838568228de656047cd963e55\"" Sep 5 14:30:34.020860 systemd[1]: Started cri-containerd-1ce4d7fa2a41cbaba82996e8a53af49756707f9838568228de656047cd963e55.scope - libcontainer container 1ce4d7fa2a41cbaba82996e8a53af49756707f9838568228de656047cd963e55. Sep 5 14:30:34.043866 containerd[1548]: time="2024-09-05T14:30:34.043843824Z" level=info msg="StartContainer for \"1ce4d7fa2a41cbaba82996e8a53af49756707f9838568228de656047cd963e55\" returns successfully" Sep 5 14:30:34.267692 kubelet[2893]: I0905 14:30:34.267500 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-675fd9f996-nw8j4" podStartSLOduration=1.556250315 podCreationTimestamp="2024-09-05 14:30:29 +0000 UTC" firstStartedPulling="2024-09-05 14:30:30.282191068 +0000 UTC m=+18.144349706" lastFinishedPulling="2024-09-05 14:30:33.993353321 +0000 UTC m=+21.855511961" observedRunningTime="2024-09-05 14:30:34.267009666 +0000 UTC m=+22.129168420" watchObservedRunningTime="2024-09-05 14:30:34.26741257 +0000 UTC m=+22.129571260" Sep 5 14:30:35.184039 kubelet[2893]: E0905 14:30:35.183977 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qv5zz" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" Sep 5 14:30:35.250794 kubelet[2893]: I0905 14:30:35.250714 2893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 14:30:37.183646 kubelet[2893]: E0905 14:30:37.183585 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qv5zz" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" Sep 5 14:30:38.003015 containerd[1548]: time="2024-09-05T14:30:38.002960141Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:38.003229 containerd[1548]: time="2024-09-05T14:30:38.003182667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 5 14:30:38.003483 containerd[1548]: time="2024-09-05T14:30:38.003431702Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:38.004578 containerd[1548]: time="2024-09-05T14:30:38.004536590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:38.004960 containerd[1548]: time="2024-09-05T14:30:38.004917671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.011427889s" Sep 5 14:30:38.004960 containerd[1548]: time="2024-09-05T14:30:38.004930977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 5 14:30:38.005843 containerd[1548]: time="2024-09-05T14:30:38.005830294Z" level=info msg="CreateContainer within sandbox \"c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 14:30:38.010359 containerd[1548]: time="2024-09-05T14:30:38.010317114Z" level=info msg="CreateContainer within sandbox \"c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e\"" Sep 5 14:30:38.010512 containerd[1548]: time="2024-09-05T14:30:38.010498329Z" level=info msg="StartContainer for \"cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e\"" Sep 5 14:30:38.042686 systemd[1]: Started cri-containerd-cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e.scope - libcontainer container cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e. Sep 5 14:30:38.058278 containerd[1548]: time="2024-09-05T14:30:38.058254108Z" level=info msg="StartContainer for \"cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e\" returns successfully" Sep 5 14:30:38.582674 systemd[1]: cri-containerd-cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e.scope: Deactivated successfully. Sep 5 14:30:38.592234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e-rootfs.mount: Deactivated successfully. 
Sep 5 14:30:38.613596 kubelet[2893]: I0905 14:30:38.613579 2893 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 5 14:30:38.624877 kubelet[2893]: I0905 14:30:38.624812 2893 topology_manager.go:215] "Topology Admit Handler" podUID="7663138d-98ce-474d-8150-2dfcb2cc2b36" podNamespace="kube-system" podName="coredns-5dd5756b68-q7gtn" Sep 5 14:30:38.625564 kubelet[2893]: I0905 14:30:38.625546 2893 topology_manager.go:215] "Topology Admit Handler" podUID="f2341112-c29f-4ee3-9ea4-9958e7ec9922" podNamespace="kube-system" podName="coredns-5dd5756b68-c9mcx" Sep 5 14:30:38.625941 kubelet[2893]: I0905 14:30:38.625925 2893 topology_manager.go:215] "Topology Admit Handler" podUID="f22fbe09-b2ac-45e8-af2d-e6cf0981b234" podNamespace="calico-system" podName="calico-kube-controllers-57555fbd67-8z6w4" Sep 5 14:30:38.630448 systemd[1]: Created slice kubepods-burstable-pod7663138d_98ce_474d_8150_2dfcb2cc2b36.slice - libcontainer container kubepods-burstable-pod7663138d_98ce_474d_8150_2dfcb2cc2b36.slice. Sep 5 14:30:38.636062 systemd[1]: Created slice kubepods-burstable-podf2341112_c29f_4ee3_9ea4_9958e7ec9922.slice - libcontainer container kubepods-burstable-podf2341112_c29f_4ee3_9ea4_9958e7ec9922.slice. Sep 5 14:30:38.641662 systemd[1]: Created slice kubepods-besteffort-podf22fbe09_b2ac_45e8_af2d_e6cf0981b234.slice - libcontainer container kubepods-besteffort-podf22fbe09_b2ac_45e8_af2d_e6cf0981b234.slice. Sep 5 14:30:38.680147 kubelet[2893]: I0905 14:30:38.680083 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdj9p\" (UniqueName: \"kubernetes.io/projected/f22fbe09-b2ac-45e8-af2d-e6cf0981b234-kube-api-access-qdj9p\") pod \"calico-kube-controllers-57555fbd67-8z6w4\" (UID: \"f22fbe09-b2ac-45e8-af2d-e6cf0981b234\") " pod="calico-system/calico-kube-controllers-57555fbd67-8z6w4" Sep 5 14:30:38.680425 kubelet[2893]: I0905 14:30:38.680189 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f22fbe09-b2ac-45e8-af2d-e6cf0981b234-tigera-ca-bundle\") pod \"calico-kube-controllers-57555fbd67-8z6w4\" (UID: \"f22fbe09-b2ac-45e8-af2d-e6cf0981b234\") " pod="calico-system/calico-kube-controllers-57555fbd67-8z6w4" Sep 5 14:30:38.680600 kubelet[2893]: I0905 14:30:38.680447 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7663138d-98ce-474d-8150-2dfcb2cc2b36-config-volume\") pod \"coredns-5dd5756b68-q7gtn\" (UID: \"7663138d-98ce-474d-8150-2dfcb2cc2b36\") " pod="kube-system/coredns-5dd5756b68-q7gtn" Sep 5 14:30:38.680600 kubelet[2893]: I0905 14:30:38.680583 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljzn\" (UniqueName: \"kubernetes.io/projected/7663138d-98ce-474d-8150-2dfcb2cc2b36-kube-api-access-6ljzn\") pod \"coredns-5dd5756b68-q7gtn\" (UID: \"7663138d-98ce-474d-8150-2dfcb2cc2b36\") " pod="kube-system/coredns-5dd5756b68-q7gtn" Sep 5 14:30:38.680952 kubelet[2893]: I0905 14:30:38.680798 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2341112-c29f-4ee3-9ea4-9958e7ec9922-config-volume\") pod \"coredns-5dd5756b68-c9mcx\" (UID: \"f2341112-c29f-4ee3-9ea4-9958e7ec9922\") " pod="kube-system/coredns-5dd5756b68-c9mcx" Sep 5 14:30:38.681092 
kubelet[2893]: I0905 14:30:38.680998 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzghr\" (UniqueName: \"kubernetes.io/projected/f2341112-c29f-4ee3-9ea4-9958e7ec9922-kube-api-access-xzghr\") pod \"coredns-5dd5756b68-c9mcx\" (UID: \"f2341112-c29f-4ee3-9ea4-9958e7ec9922\") " pod="kube-system/coredns-5dd5756b68-c9mcx" Sep 5 14:30:38.935381 containerd[1548]: time="2024-09-05T14:30:38.935111473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7gtn,Uid:7663138d-98ce-474d-8150-2dfcb2cc2b36,Namespace:kube-system,Attempt:0,}" Sep 5 14:30:38.940513 containerd[1548]: time="2024-09-05T14:30:38.940482586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c9mcx,Uid:f2341112-c29f-4ee3-9ea4-9958e7ec9922,Namespace:kube-system,Attempt:0,}" Sep 5 14:30:38.944132 containerd[1548]: time="2024-09-05T14:30:38.944099560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57555fbd67-8z6w4,Uid:f22fbe09-b2ac-45e8-af2d-e6cf0981b234,Namespace:calico-system,Attempt:0,}" Sep 5 14:30:39.193434 systemd[1]: Created slice kubepods-besteffort-pod06f180ad_ca06_48f1_b9ab_6c62014854a5.slice - libcontainer container kubepods-besteffort-pod06f180ad_ca06_48f1_b9ab_6c62014854a5.slice. Sep 5 14:30:39.195558 containerd[1548]: time="2024-09-05T14:30:39.195526192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qv5zz,Uid:06f180ad-ca06-48f1-b9ab-6c62014854a5,Namespace:calico-system,Attempt:0,}" Sep 5 14:30:39.252436 containerd[1548]: time="2024-09-05T14:30:39.252398195Z" level=info msg="shim disconnected" id=cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e namespace=k8s.io Sep 5 14:30:39.252436 containerd[1548]: time="2024-09-05T14:30:39.252431825Z" level=warning msg="cleaning up after shim disconnected" id=cb747e2a3ac84bfb88e1eca62b1211c8c5b589f5ee8ef971113387ce4cd4370e namespace=k8s.io Sep 5 14:30:39.252436 containerd[1548]: time="2024-09-05T14:30:39.252436971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 14:30:39.288983 containerd[1548]: time="2024-09-05T14:30:39.288936881Z" level=error msg="Failed to destroy network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289097 containerd[1548]: time="2024-09-05T14:30:39.289009069Z" level=error msg="Failed to destroy network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289187 containerd[1548]: time="2024-09-05T14:30:39.289173777Z" level=error msg="encountered an error cleaning up failed sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289226 containerd[1548]: time="2024-09-05T14:30:39.289191762Z" level=error msg="encountered an error cleaning up failed sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289226 containerd[1548]: time="2024-09-05T14:30:39.289203089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57555fbd67-8z6w4,Uid:f22fbe09-b2ac-45e8-af2d-e6cf0981b234,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289328 containerd[1548]: time="2024-09-05T14:30:39.289226070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qv5zz,Uid:06f180ad-ca06-48f1-b9ab-6c62014854a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289393 kubelet[2893]: E0905 14:30:39.289381 2893 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289454 kubelet[2893]: E0905 14:30:39.289422 2893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qv5zz" Sep 5 14:30:39.289454 kubelet[2893]: E0905 14:30:39.289436 2893 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qv5zz" Sep 5 14:30:39.289522 kubelet[2893]: E0905 14:30:39.289381 2893 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.289522 kubelet[2893]: E0905 14:30:39.289468 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qv5zz_calico-system(06f180ad-ca06-48f1-b9ab-6c62014854a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-qv5zz_calico-system(06f180ad-ca06-48f1-b9ab-6c62014854a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qv5zz" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" Sep 5 14:30:39.289522 kubelet[2893]: E0905 14:30:39.289484 2893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57555fbd67-8z6w4" Sep 5 14:30:39.289640 kubelet[2893]: E0905 14:30:39.289504 2893 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57555fbd67-8z6w4" Sep 5 14:30:39.289640 kubelet[2893]: E0905 14:30:39.289539 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57555fbd67-8z6w4_calico-system(f22fbe09-b2ac-45e8-af2d-e6cf0981b234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57555fbd67-8z6w4_calico-system(f22fbe09-b2ac-45e8-af2d-e6cf0981b234)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57555fbd67-8z6w4" podUID="f22fbe09-b2ac-45e8-af2d-e6cf0981b234" Sep 5 14:30:39.290152 containerd[1548]: time="2024-09-05T14:30:39.290132540Z" level=error msg="Failed to destroy network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290313 containerd[1548]: time="2024-09-05T14:30:39.290297904Z" level=error msg="encountered an error cleaning up failed sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290357 containerd[1548]: time="2024-09-05T14:30:39.290323660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7gtn,Uid:7663138d-98ce-474d-8150-2dfcb2cc2b36,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290399 containerd[1548]: time="2024-09-05T14:30:39.290382876Z" level=error msg="Failed to destroy network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290459 kubelet[2893]: E0905 14:30:39.290447 2893 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290502 kubelet[2893]: E0905 14:30:39.290477 2893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-q7gtn" Sep 5 14:30:39.290502 kubelet[2893]: E0905 14:30:39.290490 2893 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-q7gtn" Sep 5 14:30:39.290567 kubelet[2893]: E0905 14:30:39.290515 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-q7gtn_kube-system(7663138d-98ce-474d-8150-2dfcb2cc2b36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-q7gtn_kube-system(7663138d-98ce-474d-8150-2dfcb2cc2b36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-q7gtn" podUID="7663138d-98ce-474d-8150-2dfcb2cc2b36" Sep 5 14:30:39.290659 containerd[1548]: time="2024-09-05T14:30:39.290546517Z" level=error msg="encountered an error cleaning up failed sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290659 containerd[1548]: time="2024-09-05T14:30:39.290567999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c9mcx,Uid:f2341112-c29f-4ee3-9ea4-9958e7ec9922,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290721 kubelet[2893]: E0905 14:30:39.290671 2893 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:39.290721 kubelet[2893]: E0905 14:30:39.290689 2893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-c9mcx" Sep 5 14:30:39.290721 kubelet[2893]: E0905 14:30:39.290700 2893 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-c9mcx" Sep 5 14:30:39.290706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253-shm.mount: Deactivated successfully. Sep 5 14:30:39.290884 kubelet[2893]: E0905 14:30:39.290723 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-c9mcx_kube-system(f2341112-c29f-4ee3-9ea4-9958e7ec9922)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-c9mcx_kube-system(f2341112-c29f-4ee3-9ea4-9958e7ec9922)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c9mcx" podUID="f2341112-c29f-4ee3-9ea4-9958e7ec9922" Sep 5 14:30:39.290795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8-shm.mount: Deactivated successfully. Sep 5 14:30:39.292529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd-shm.mount: Deactivated successfully. Sep 5 14:30:39.292594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce-shm.mount: Deactivated successfully. 
Sep 5 14:30:40.264911 kubelet[2893]: I0905 14:30:40.264843 2893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:30:40.266589 containerd[1548]: time="2024-09-05T14:30:40.266516825Z" level=info msg="StopPodSandbox for \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\"" Sep 5 14:30:40.267374 containerd[1548]: time="2024-09-05T14:30:40.266945868Z" level=info msg="Ensure that sandbox 6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253 in task-service has been cleanup successfully" Sep 5 14:30:40.267559 kubelet[2893]: I0905 14:30:40.267245 2893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:30:40.268587 containerd[1548]: time="2024-09-05T14:30:40.268489338Z" level=info msg="StopPodSandbox for \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\"" Sep 5 14:30:40.268996 containerd[1548]: time="2024-09-05T14:30:40.268918038Z" level=info msg="Ensure that sandbox 4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd in task-service has been cleanup successfully" Sep 5 14:30:40.270022 kubelet[2893]: I0905 14:30:40.269956 2893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:30:40.271155 containerd[1548]: time="2024-09-05T14:30:40.271067678Z" level=info msg="StopPodSandbox for \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\"" Sep 5 14:30:40.271661 containerd[1548]: time="2024-09-05T14:30:40.271582977Z" level=info msg="Ensure that sandbox 916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce in task-service has been cleanup successfully" Sep 5 14:30:40.272883 kubelet[2893]: I0905 14:30:40.272826 2893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:30:40.274062 containerd[1548]: time="2024-09-05T14:30:40.273986156Z" level=info msg="StopPodSandbox for \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\"" Sep 5 14:30:40.274547 containerd[1548]: time="2024-09-05T14:30:40.274467785Z" level=info msg="Ensure that sandbox 5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8 in task-service has been cleanup successfully" Sep 5 14:30:40.280618 containerd[1548]: time="2024-09-05T14:30:40.280572591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 5 14:30:40.292086 containerd[1548]: time="2024-09-05T14:30:40.292055608Z" level=error msg="StopPodSandbox for \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\" failed" error="failed to destroy network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:40.292169 containerd[1548]: time="2024-09-05T14:30:40.292097216Z" level=error msg="StopPodSandbox for \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\" failed" error="failed to destroy network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 5 14:30:40.292243 kubelet[2893]: E0905 14:30:40.292233 2893 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:30:40.292303 kubelet[2893]: E0905 14:30:40.292280 2893 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd"} Sep 5 14:30:40.292303 kubelet[2893]: E0905 14:30:40.292287 2893 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:30:40.292392 kubelet[2893]: E0905 14:30:40.292309 2893 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253"} Sep 5 14:30:40.292392 kubelet[2893]: E0905 14:30:40.292322 2893 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2341112-c29f-4ee3-9ea4-9958e7ec9922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 14:30:40.292392 kubelet[2893]: E0905 14:30:40.292340 2893 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06f180ad-ca06-48f1-b9ab-6c62014854a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 14:30:40.292392 kubelet[2893]: E0905 14:30:40.292351 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2341112-c29f-4ee3-9ea4-9958e7ec9922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c9mcx" podUID="f2341112-c29f-4ee3-9ea4-9958e7ec9922" Sep 5 14:30:40.292562 kubelet[2893]: E0905 14:30:40.292364 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06f180ad-ca06-48f1-b9ab-6c62014854a5\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qv5zz" podUID="06f180ad-ca06-48f1-b9ab-6c62014854a5" Sep 5 14:30:40.293442 containerd[1548]: time="2024-09-05T14:30:40.293404924Z" level=error msg="StopPodSandbox for \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\" failed" error="failed to destroy network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:40.293606 kubelet[2893]: E0905 14:30:40.293538 2893 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:30:40.293606 kubelet[2893]: E0905 14:30:40.293590 2893 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce"} Sep 5 14:30:40.293696 kubelet[2893]: E0905 14:30:40.293617 2893 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7663138d-98ce-474d-8150-2dfcb2cc2b36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 14:30:40.293696 kubelet[2893]: E0905 14:30:40.293656 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7663138d-98ce-474d-8150-2dfcb2cc2b36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-q7gtn" podUID="7663138d-98ce-474d-8150-2dfcb2cc2b36" Sep 5 14:30:40.294117 containerd[1548]: time="2024-09-05T14:30:40.294102730Z" level=error msg="StopPodSandbox for \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\" failed" error="failed to destroy network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 14:30:40.294192 kubelet[2893]: E0905 14:30:40.294186 2893 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:30:40.294217 kubelet[2893]: E0905 14:30:40.294198 2893 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8"} Sep 5 14:30:40.294236 kubelet[2893]: E0905 14:30:40.294217 2893 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f22fbe09-b2ac-45e8-af2d-e6cf0981b234\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 14:30:40.294264 kubelet[2893]: E0905 14:30:40.294241 2893 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f22fbe09-b2ac-45e8-af2d-e6cf0981b234\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57555fbd67-8z6w4" podUID="f22fbe09-b2ac-45e8-af2d-e6cf0981b234" Sep 5 14:30:44.599099 kubelet[2893]: I0905 14:30:44.599049 2893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 14:30:44.750740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035625297.mount: Deactivated successfully. 
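Annotation. The StopPodSandbox failures at 14:30:40 are the kubelet retrying teardown of the sandboxes that could not be created a second earlier; they hit the same /var/lib/calico/nodename error and keep being retried until calico-node is up. Separately, the pod_startup_latency_tracker entry for calico-typha-675fd9f996-nw8j4 earlier reports podStartSLOduration=1.556250315. That figure appears to be the time from pod creation to observed running minus the image-pull window; the sketch below (slo_check.go) reproduces it to within a few nanoseconds from the timestamps in that entry. The formula is inferred from the logged numbers, not taken from kubelet source:

    // slo_check.go: recomputes the calico-typha podStartSLOduration from the
    // timestamps logged by pod_startup_latency_tracker. The accounting used
    // here (creation-to-running time minus pull time) is an inference.
    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching Go's time.Time string form, as it appears in the log.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(v string) time.Time {
        t, err := time.Parse(layout, v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-09-05 14:30:29 +0000 UTC")            // podCreationTimestamp
        firstPull := mustParse("2024-09-05 14:30:30.282191068 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2024-09-05 14:30:33.993353321 +0000 UTC")  // lastFinishedPulling
        watched := mustParse("2024-09-05 14:30:34.26741257 +0000 UTC")    // watchObservedRunningTime

        slo := watched.Sub(created) - lastPull.Sub(firstPull)
        fmt.Println(slo) // ~1.556250317s, within a few ns of the logged 1.556250315
    }
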
Sep 5 14:30:44.766996 containerd[1548]: time="2024-09-05T14:30:44.766946923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:44.767166 containerd[1548]: time="2024-09-05T14:30:44.767078965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 5 14:30:44.767510 containerd[1548]: time="2024-09-05T14:30:44.767497606Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:44.768348 containerd[1548]: time="2024-09-05T14:30:44.768333327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:44.768731 containerd[1548]: time="2024-09-05T14:30:44.768716133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.48810077s" Sep 5 14:30:44.768776 containerd[1548]: time="2024-09-05T14:30:44.768731304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 5 14:30:44.772046 containerd[1548]: time="2024-09-05T14:30:44.772026469Z" level=info msg="CreateContainer within sandbox \"c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 14:30:44.777204 containerd[1548]: time="2024-09-05T14:30:44.777163574Z" level=info msg="CreateContainer within sandbox \"c8b0d4b1a3af1d767e0d24a9b0022fc1e953b638669e0dd5491a17d43c79836b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"77a8d3b6f72dd499184b5cdf820860d2044dbfc94692d08a6d28e8e481252d51\"" Sep 5 14:30:44.777419 containerd[1548]: time="2024-09-05T14:30:44.777407062Z" level=info msg="StartContainer for \"77a8d3b6f72dd499184b5cdf820860d2044dbfc94692d08a6d28e8e481252d51\"" Sep 5 14:30:44.802572 systemd[1]: Started cri-containerd-77a8d3b6f72dd499184b5cdf820860d2044dbfc94692d08a6d28e8e481252d51.scope - libcontainer container 77a8d3b6f72dd499184b5cdf820860d2044dbfc94692d08a6d28e8e481252d51. Sep 5 14:30:44.819432 containerd[1548]: time="2024-09-05T14:30:44.819398782Z" level=info msg="StartContainer for \"77a8d3b6f72dd499184b5cdf820860d2044dbfc94692d08a6d28e8e481252d51\" returns successfully" Sep 5 14:30:44.909402 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 14:30:44.909455 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
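Annotation. With calico-node started and the WireGuard module loaded, the entries that follow show the vxlan.calico interface coming up and Calico IPAM claiming the affine block 192.168.16.64/26 for this host, then assigning 192.168.16.65 to coredns-5dd5756b68-c9mcx. The sketch below (ipam_block_check.go) is only a containment check on those two logged values, not part of Calico's IPAM code:

    // ipam_block_check.go: confirms that the pod address assigned in the
    // entries below (192.168.16.65) lies inside the host's affine block
    // 192.168.16.64/26. Purely an arithmetic check on values from the log.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.16.64/26") // covers 192.168.16.64-192.168.16.127
        podIP := netip.MustParseAddr("192.168.16.65")
        fmt.Printf("%s in %s: %v\n", podIP, block, block.Contains(podIP)) // true
    }
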
Sep 5 14:30:45.305000 kubelet[2893]: I0905 14:30:45.304975 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9jdf9" podStartSLOduration=1.808592698 podCreationTimestamp="2024-09-05 14:30:29 +0000 UTC" firstStartedPulling="2024-09-05 14:30:30.272525884 +0000 UTC m=+18.134684525" lastFinishedPulling="2024-09-05 14:30:44.768878747 +0000 UTC m=+32.631037385" observedRunningTime="2024-09-05 14:30:45.304690416 +0000 UTC m=+33.166849055" watchObservedRunningTime="2024-09-05 14:30:45.304945558 +0000 UTC m=+33.167104196" Sep 5 14:30:46.089302 kernel: bpftool[4358]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 14:30:46.241789 systemd-networkd[1338]: vxlan.calico: Link UP Sep 5 14:30:46.241792 systemd-networkd[1338]: vxlan.calico: Gained carrier Sep 5 14:30:47.289363 kubelet[2893]: I0905 14:30:47.289264 2893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 14:30:47.816747 systemd-networkd[1338]: vxlan.calico: Gained IPv6LL Sep 5 14:30:51.184474 containerd[1548]: time="2024-09-05T14:30:51.184339543Z" level=info msg="StopPodSandbox for \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\"" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.223 [INFO][4536] k8s.go 608: Cleaning up netns ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.223 [INFO][4536] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" iface="eth0" netns="/var/run/netns/cni-a5cd3b9c-ec15-bbc0-8f66-dc1ee06b9df2" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.223 [INFO][4536] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" iface="eth0" netns="/var/run/netns/cni-a5cd3b9c-ec15-bbc0-8f66-dc1ee06b9df2" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.223 [INFO][4536] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" iface="eth0" netns="/var/run/netns/cni-a5cd3b9c-ec15-bbc0-8f66-dc1ee06b9df2" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.223 [INFO][4536] k8s.go 615: Releasing IP address(es) ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.223 [INFO][4536] utils.go 188: Calico CNI releasing IP address ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.233 [INFO][4551] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.233 [INFO][4551] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.233 [INFO][4551] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.237 [WARNING][4551] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.237 [INFO][4551] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.238 [INFO][4551] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:30:51.240780 containerd[1548]: 2024-09-05 14:30:51.239 [INFO][4536] k8s.go 621: Teardown processing complete. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:30:51.241062 containerd[1548]: time="2024-09-05T14:30:51.240852740Z" level=info msg="TearDown network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\" successfully" Sep 5 14:30:51.241062 containerd[1548]: time="2024-09-05T14:30:51.240874441Z" level=info msg="StopPodSandbox for \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\" returns successfully" Sep 5 14:30:51.241347 containerd[1548]: time="2024-09-05T14:30:51.241290668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c9mcx,Uid:f2341112-c29f-4ee3-9ea4-9958e7ec9922,Namespace:kube-system,Attempt:1,}" Sep 5 14:30:51.242226 systemd[1]: run-netns-cni\x2da5cd3b9c\x2dec15\x2dbbc0\x2d8f66\x2ddc1ee06b9df2.mount: Deactivated successfully. Sep 5 14:30:51.295240 systemd-networkd[1338]: cali39d77e5b83d: Link UP Sep 5 14:30:51.295413 systemd-networkd[1338]: cali39d77e5b83d: Gained carrier Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.260 [INFO][4568] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0 coredns-5dd5756b68- kube-system f2341112-c29f-4ee3-9ea4-9958e7ec9922 695 0 2024-09-05 14:30:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-f4c57b7dbd coredns-5dd5756b68-c9mcx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali39d77e5b83d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.260 [INFO][4568] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.273 [INFO][4589] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" HandleID="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 
14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.279 [INFO][4589] ipam_plugin.go 270: Auto assigning IP ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" HandleID="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000519c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-f4c57b7dbd", "pod":"coredns-5dd5756b68-c9mcx", "timestamp":"2024-09-05 14:30:51.273891091 +0000 UTC"}, Hostname:"ci-4054.1.0-a-f4c57b7dbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.279 [INFO][4589] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.279 [INFO][4589] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.279 [INFO][4589] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-f4c57b7dbd' Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.280 [INFO][4589] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.283 [INFO][4589] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.285 [INFO][4589] ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.286 [INFO][4589] ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.287 [INFO][4589] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.287 [INFO][4589] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.288 [INFO][4589] ipam.go 1685: Creating new handle: k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137 Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.290 [INFO][4589] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.293 [INFO][4589] ipam.go 1216: Successfully claimed IPs: [192.168.16.65/26] block=192.168.16.64/26 handle="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.293 [INFO][4589] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.65/26] handle="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 
14:30:51.293 [INFO][4589] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:30:51.301679 containerd[1548]: 2024-09-05 14:30:51.293 [INFO][4589] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.16.65/26] IPv6=[] ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" HandleID="k8s-pod-network.44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.302294 containerd[1548]: 2024-09-05 14:30:51.294 [INFO][4568] k8s.go 386: Populated endpoint ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f2341112-c29f-4ee3-9ea4-9958e7ec9922", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"", Pod:"coredns-5dd5756b68-c9mcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39d77e5b83d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:51.302294 containerd[1548]: 2024-09-05 14:30:51.294 [INFO][4568] k8s.go 387: Calico CNI using IPs: [192.168.16.65/32] ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.302294 containerd[1548]: 2024-09-05 14:30:51.294 [INFO][4568] dataplane_linux.go 68: Setting the host side veth name to cali39d77e5b83d ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.302294 containerd[1548]: 2024-09-05 14:30:51.295 [INFO][4568] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" 
WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.302294 containerd[1548]: 2024-09-05 14:30:51.295 [INFO][4568] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f2341112-c29f-4ee3-9ea4-9958e7ec9922", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137", Pod:"coredns-5dd5756b68-c9mcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39d77e5b83d", MAC:"02:1a:60:6b:ea:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:51.302294 containerd[1548]: 2024-09-05 14:30:51.300 [INFO][4568] k8s.go 500: Wrote updated endpoint to datastore ContainerID="44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137" Namespace="kube-system" Pod="coredns-5dd5756b68-c9mcx" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:30:51.311756 containerd[1548]: time="2024-09-05T14:30:51.311676934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:51.311953 containerd[1548]: time="2024-09-05T14:30:51.311734715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:51.311991 containerd[1548]: time="2024-09-05T14:30:51.311952553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:51.312024 containerd[1548]: time="2024-09-05T14:30:51.312010620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:51.332797 systemd[1]: Started cri-containerd-44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137.scope - libcontainer container 44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137. Sep 5 14:30:51.411778 containerd[1548]: time="2024-09-05T14:30:51.411750977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c9mcx,Uid:f2341112-c29f-4ee3-9ea4-9958e7ec9922,Namespace:kube-system,Attempt:1,} returns sandbox id \"44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137\"" Sep 5 14:30:51.413411 containerd[1548]: time="2024-09-05T14:30:51.413392633Z" level=info msg="CreateContainer within sandbox \"44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 14:30:51.417945 containerd[1548]: time="2024-09-05T14:30:51.417902046Z" level=info msg="CreateContainer within sandbox \"44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca752e2cc45fa5fffea56d342278b422c7b9daa26fe31520ef6c334b1b479420\"" Sep 5 14:30:51.418082 containerd[1548]: time="2024-09-05T14:30:51.418070838Z" level=info msg="StartContainer for \"ca752e2cc45fa5fffea56d342278b422c7b9daa26fe31520ef6c334b1b479420\"" Sep 5 14:30:51.446875 systemd[1]: Started cri-containerd-ca752e2cc45fa5fffea56d342278b422c7b9daa26fe31520ef6c334b1b479420.scope - libcontainer container ca752e2cc45fa5fffea56d342278b422c7b9daa26fe31520ef6c334b1b479420. Sep 5 14:30:51.500368 containerd[1548]: time="2024-09-05T14:30:51.500328022Z" level=info msg="StartContainer for \"ca752e2cc45fa5fffea56d342278b422c7b9daa26fe31520ef6c334b1b479420\" returns successfully" Sep 5 14:30:52.343575 kubelet[2893]: I0905 14:30:52.343508 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c9mcx" podStartSLOduration=27.343409611 podCreationTimestamp="2024-09-05 14:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 14:30:52.343129821 +0000 UTC m=+40.205288530" watchObservedRunningTime="2024-09-05 14:30:52.343409611 +0000 UTC m=+40.205568305" Sep 5 14:30:53.128621 systemd-networkd[1338]: cali39d77e5b83d: Gained IPv6LL Sep 5 14:30:53.184102 containerd[1548]: time="2024-09-05T14:30:53.183998778Z" level=info msg="StopPodSandbox for \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\"" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.219 [INFO][4745] k8s.go 608: Cleaning up netns ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.219 [INFO][4745] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" iface="eth0" netns="/var/run/netns/cni-d1b54eab-8449-ad75-a7e4-9cfe97a1a648" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.219 [INFO][4745] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" iface="eth0" netns="/var/run/netns/cni-d1b54eab-8449-ad75-a7e4-9cfe97a1a648" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.220 [INFO][4745] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" iface="eth0" netns="/var/run/netns/cni-d1b54eab-8449-ad75-a7e4-9cfe97a1a648" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.220 [INFO][4745] k8s.go 615: Releasing IP address(es) ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.220 [INFO][4745] utils.go 188: Calico CNI releasing IP address ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.229 [INFO][4758] ipam_plugin.go 417: Releasing address using handleID ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.229 [INFO][4758] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.229 [INFO][4758] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.233 [WARNING][4758] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.233 [INFO][4758] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.234 [INFO][4758] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:30:53.235705 containerd[1548]: 2024-09-05 14:30:53.234 [INFO][4745] k8s.go 621: Teardown processing complete. ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:30:53.236196 containerd[1548]: time="2024-09-05T14:30:53.235755934Z" level=info msg="TearDown network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\" successfully" Sep 5 14:30:53.236196 containerd[1548]: time="2024-09-05T14:30:53.235778698Z" level=info msg="StopPodSandbox for \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\" returns successfully" Sep 5 14:30:53.236196 containerd[1548]: time="2024-09-05T14:30:53.236150384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57555fbd67-8z6w4,Uid:f22fbe09-b2ac-45e8-af2d-e6cf0981b234,Namespace:calico-system,Attempt:1,}" Sep 5 14:30:53.237308 systemd[1]: run-netns-cni\x2dd1b54eab\x2d8449\x2dad75\x2da7e4\x2d9cfe97a1a648.mount: Deactivated successfully. 
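The IPAM trace above shows Calico confirming affinity for the block 192.168.16.64/26 on ci-4054.1.0-a-f4c57b7dbd and claiming 192.168.16.65 for the coredns-5dd5756b68-c9mcx endpoint; later entries in this capture claim .66, .67 and .68 from the same block. A minimal sanity check of that block arithmetic with Python's standard ipaddress module (this is only the subnet math, not Calico's allocator):

import ipaddress

# The host-affine block Calico logs for this node, and the addresses it claims.
block = ipaddress.ip_network("192.168.16.64/26")
claimed = [ipaddress.ip_address(a) for a in
           ("192.168.16.65", "192.168.16.66", "192.168.16.67", "192.168.16.68")]

print(block.num_addresses)                 # 64 addresses in a /26 block
print(all(ip in block for ip in claimed))  # True: every claimed IP falls inside the block

Each pod still receives its address as a /32 (the endpoint entries carry IPNetworks like "192.168.16.65/32"); the /26 is the block whose host affinity the log confirms before handing out individual addresses.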
Sep 5 14:30:53.287971 systemd-networkd[1338]: calibf9e171a432: Link UP Sep 5 14:30:53.288112 systemd-networkd[1338]: calibf9e171a432: Gained carrier Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.256 [INFO][4774] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0 calico-kube-controllers-57555fbd67- calico-system f22fbe09-b2ac-45e8-af2d-e6cf0981b234 715 0 2024-09-05 14:30:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57555fbd67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054.1.0-a-f4c57b7dbd calico-kube-controllers-57555fbd67-8z6w4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibf9e171a432 [] []}} ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.256 [INFO][4774] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.268 [INFO][4796] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" HandleID="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.273 [INFO][4796] ipam_plugin.go 270: Auto assigning IP ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" HandleID="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-f4c57b7dbd", "pod":"calico-kube-controllers-57555fbd67-8z6w4", "timestamp":"2024-09-05 14:30:53.268892806 +0000 UTC"}, Hostname:"ci-4054.1.0-a-f4c57b7dbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.273 [INFO][4796] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.273 [INFO][4796] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.273 [INFO][4796] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-f4c57b7dbd' Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.274 [INFO][4796] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.277 [INFO][4796] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.279 [INFO][4796] ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.280 [INFO][4796] ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.281 [INFO][4796] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.281 [INFO][4796] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.282 [INFO][4796] ipam.go 1685: Creating new handle: k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5 Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.283 [INFO][4796] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.286 [INFO][4796] ipam.go 1216: Successfully claimed IPs: [192.168.16.66/26] block=192.168.16.64/26 handle="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.286 [INFO][4796] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.66/26] handle="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.286 [INFO][4796] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 5 14:30:53.293489 containerd[1548]: 2024-09-05 14:30:53.286 [INFO][4796] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.16.66/26] IPv6=[] ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" HandleID="k8s-pod-network.f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.294047 containerd[1548]: 2024-09-05 14:30:53.287 [INFO][4774] k8s.go 386: Populated endpoint ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0", GenerateName:"calico-kube-controllers-57555fbd67-", Namespace:"calico-system", SelfLink:"", UID:"f22fbe09-b2ac-45e8-af2d-e6cf0981b234", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57555fbd67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"", Pod:"calico-kube-controllers-57555fbd67-8z6w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf9e171a432", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:53.294047 containerd[1548]: 2024-09-05 14:30:53.287 [INFO][4774] k8s.go 387: Calico CNI using IPs: [192.168.16.66/32] ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.294047 containerd[1548]: 2024-09-05 14:30:53.287 [INFO][4774] dataplane_linux.go 68: Setting the host side veth name to calibf9e171a432 ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.294047 containerd[1548]: 2024-09-05 14:30:53.288 [INFO][4774] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.294047 containerd[1548]: 2024-09-05 14:30:53.288 [INFO][4774] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0", GenerateName:"calico-kube-controllers-57555fbd67-", Namespace:"calico-system", SelfLink:"", UID:"f22fbe09-b2ac-45e8-af2d-e6cf0981b234", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57555fbd67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5", Pod:"calico-kube-controllers-57555fbd67-8z6w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf9e171a432", MAC:"8a:c7:54:c7:64:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:53.294047 containerd[1548]: 2024-09-05 14:30:53.292 [INFO][4774] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5" Namespace="calico-system" Pod="calico-kube-controllers-57555fbd67-8z6w4" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:30:53.302671 containerd[1548]: time="2024-09-05T14:30:53.302341792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:53.302671 containerd[1548]: time="2024-09-05T14:30:53.302575678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:53.302671 containerd[1548]: time="2024-09-05T14:30:53.302583367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:53.302671 containerd[1548]: time="2024-09-05T14:30:53.302626588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:53.325615 systemd[1]: Started cri-containerd-f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5.scope - libcontainer container f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5. 
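The Calico lines interleaved with containerd's prefix above carry structured key="value" fields (ContainerID, HandleID, Workload, host). If those need to be pulled out of a capture like this one, a small regex over the quoting style visible here is enough; the pattern below is an assumption about this capture's formatting, not an official parser:

import re

# One of the release entries from this capture, trimmed to the Calico fields.
line = ('ipam_plugin.go 417: Releasing address using handleID '
        'ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" '
        'HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" '
        'Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0"')

fields = dict(re.findall(r'(\w+)="([^"]*)"', line))
print(fields["Workload"])  # sanitized workload-endpoint name (dashes in the original names are doubled)
print(fields["HandleID"])  # k8s-pod-network.<sandbox id>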
Sep 5 14:30:53.348926 containerd[1548]: time="2024-09-05T14:30:53.348875086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57555fbd67-8z6w4,Uid:f22fbe09-b2ac-45e8-af2d-e6cf0981b234,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5\"" Sep 5 14:30:53.349599 containerd[1548]: time="2024-09-05T14:30:53.349587459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 5 14:30:54.183920 containerd[1548]: time="2024-09-05T14:30:54.183889195Z" level=info msg="StopPodSandbox for \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\"" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.208 [INFO][4886] k8s.go 608: Cleaning up netns ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.208 [INFO][4886] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" iface="eth0" netns="/var/run/netns/cni-e9e96bf5-d721-ae08-6b5a-4e3243c5ec9c" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.208 [INFO][4886] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" iface="eth0" netns="/var/run/netns/cni-e9e96bf5-d721-ae08-6b5a-4e3243c5ec9c" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.208 [INFO][4886] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" iface="eth0" netns="/var/run/netns/cni-e9e96bf5-d721-ae08-6b5a-4e3243c5ec9c" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.208 [INFO][4886] k8s.go 615: Releasing IP address(es) ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.208 [INFO][4886] utils.go 188: Calico CNI releasing IP address ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.222 [INFO][4901] ipam_plugin.go 417: Releasing address using handleID ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.223 [INFO][4901] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.223 [INFO][4901] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.228 [WARNING][4901] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.228 [INFO][4901] ipam_plugin.go 445: Releasing address using workloadID ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.229 [INFO][4901] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:30:54.231295 containerd[1548]: 2024-09-05 14:30:54.230 [INFO][4886] k8s.go 621: Teardown processing complete. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:30:54.231888 containerd[1548]: time="2024-09-05T14:30:54.231402286Z" level=info msg="TearDown network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\" successfully" Sep 5 14:30:54.231888 containerd[1548]: time="2024-09-05T14:30:54.231426472Z" level=info msg="StopPodSandbox for \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\" returns successfully" Sep 5 14:30:54.231954 containerd[1548]: time="2024-09-05T14:30:54.231934242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7gtn,Uid:7663138d-98ce-474d-8150-2dfcb2cc2b36,Namespace:kube-system,Attempt:1,}" Sep 5 14:30:54.233258 systemd[1]: run-netns-cni\x2de9e96bf5\x2dd721\x2dae08\x2d6b5a\x2d4e3243c5ec9c.mount: Deactivated successfully. Sep 5 14:30:54.289486 systemd-networkd[1338]: calia5cb7eb552c: Link UP Sep 5 14:30:54.289610 systemd-networkd[1338]: calia5cb7eb552c: Gained carrier Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.251 [INFO][4916] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0 coredns-5dd5756b68- kube-system 7663138d-98ce-474d-8150-2dfcb2cc2b36 723 0 2024-09-05 14:30:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054.1.0-a-f4c57b7dbd coredns-5dd5756b68-q7gtn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5cb7eb552c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.251 [INFO][4916] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.265 [INFO][4937] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" HandleID="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 
14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.270 [INFO][4937] ipam_plugin.go 270: Auto assigning IP ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" HandleID="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054.1.0-a-f4c57b7dbd", "pod":"coredns-5dd5756b68-q7gtn", "timestamp":"2024-09-05 14:30:54.265342135 +0000 UTC"}, Hostname:"ci-4054.1.0-a-f4c57b7dbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.270 [INFO][4937] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.270 [INFO][4937] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.270 [INFO][4937] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-f4c57b7dbd' Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.271 [INFO][4937] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.273 [INFO][4937] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.274 [INFO][4937] ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.275 [INFO][4937] ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.277 [INFO][4937] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.277 [INFO][4937] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.277 [INFO][4937] ipam.go 1685: Creating new handle: k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5 Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.279 [INFO][4937] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.283 [INFO][4937] ipam.go 1216: Successfully claimed IPs: [192.168.16.67/26] block=192.168.16.64/26 handle="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.283 [INFO][4937] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.67/26] handle="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 
14:30:54.283 [INFO][4937] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:30:54.296017 containerd[1548]: 2024-09-05 14:30:54.283 [INFO][4937] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.16.67/26] IPv6=[] ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" HandleID="k8s-pod-network.094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.296430 containerd[1548]: 2024-09-05 14:30:54.288 [INFO][4916] k8s.go 386: Populated endpoint ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7663138d-98ce-474d-8150-2dfcb2cc2b36", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"", Pod:"coredns-5dd5756b68-q7gtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5cb7eb552c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:54.296430 containerd[1548]: 2024-09-05 14:30:54.288 [INFO][4916] k8s.go 387: Calico CNI using IPs: [192.168.16.67/32] ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.296430 containerd[1548]: 2024-09-05 14:30:54.288 [INFO][4916] dataplane_linux.go 68: Setting the host side veth name to calia5cb7eb552c ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.296430 containerd[1548]: 2024-09-05 14:30:54.289 [INFO][4916] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" 
WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.296430 containerd[1548]: 2024-09-05 14:30:54.289 [INFO][4916] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7663138d-98ce-474d-8150-2dfcb2cc2b36", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5", Pod:"coredns-5dd5756b68-q7gtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5cb7eb552c", MAC:"6a:97:37:2e:c5:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:54.296430 containerd[1548]: 2024-09-05 14:30:54.294 [INFO][4916] k8s.go 500: Wrote updated endpoint to datastore ContainerID="094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5" Namespace="kube-system" Pod="coredns-5dd5756b68-q7gtn" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:30:54.305508 containerd[1548]: time="2024-09-05T14:30:54.305266793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:54.305562 containerd[1548]: time="2024-09-05T14:30:54.305496463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:54.305562 containerd[1548]: time="2024-09-05T14:30:54.305525647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:54.305595 containerd[1548]: time="2024-09-05T14:30:54.305578495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:54.333425 systemd[1]: Started cri-containerd-094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5.scope - libcontainer container 094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5. Sep 5 14:30:54.358870 containerd[1548]: time="2024-09-05T14:30:54.358818276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7gtn,Uid:7663138d-98ce-474d-8150-2dfcb2cc2b36,Namespace:kube-system,Attempt:1,} returns sandbox id \"094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5\"" Sep 5 14:30:54.360318 containerd[1548]: time="2024-09-05T14:30:54.360301245Z" level=info msg="CreateContainer within sandbox \"094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 14:30:54.365026 containerd[1548]: time="2024-09-05T14:30:54.365012081Z" level=info msg="CreateContainer within sandbox \"094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3708e4dc6f401a061f6b95107925a0991b37f4f0d9204c90e5c96aeaa2033b7d\"" Sep 5 14:30:54.365261 containerd[1548]: time="2024-09-05T14:30:54.365249371Z" level=info msg="StartContainer for \"3708e4dc6f401a061f6b95107925a0991b37f4f0d9204c90e5c96aeaa2033b7d\"" Sep 5 14:30:54.391450 systemd[1]: Started cri-containerd-3708e4dc6f401a061f6b95107925a0991b37f4f0d9204c90e5c96aeaa2033b7d.scope - libcontainer container 3708e4dc6f401a061f6b95107925a0991b37f4f0d9204c90e5c96aeaa2033b7d. Sep 5 14:30:54.402918 containerd[1548]: time="2024-09-05T14:30:54.402872203Z" level=info msg="StartContainer for \"3708e4dc6f401a061f6b95107925a0991b37f4f0d9204c90e5c96aeaa2033b7d\" returns successfully" Sep 5 14:30:54.728630 systemd-networkd[1338]: calibf9e171a432: Gained IPv6LL Sep 5 14:30:55.330118 kubelet[2893]: I0905 14:30:55.330101 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q7gtn" podStartSLOduration=30.330077941 podCreationTimestamp="2024-09-05 14:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-05 14:30:55.329743244 +0000 UTC m=+43.191901883" watchObservedRunningTime="2024-09-05 14:30:55.330077941 +0000 UTC m=+43.192236578" Sep 5 14:30:55.524300 containerd[1548]: time="2024-09-05T14:30:55.524246353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:55.524495 containerd[1548]: time="2024-09-05T14:30:55.524461033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 5 14:30:55.524742 containerd[1548]: time="2024-09-05T14:30:55.524703335Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:55.525948 containerd[1548]: time="2024-09-05T14:30:55.525903591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:55.526201 containerd[1548]: time="2024-09-05T14:30:55.526163924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id 
\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.176559303s" Sep 5 14:30:55.526201 containerd[1548]: time="2024-09-05T14:30:55.526179858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 5 14:30:55.529384 containerd[1548]: time="2024-09-05T14:30:55.529337920Z" level=info msg="CreateContainer within sandbox \"f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 14:30:55.533383 containerd[1548]: time="2024-09-05T14:30:55.533335036Z" level=info msg="CreateContainer within sandbox \"f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e8f4ffa019ddb065fe14987e7357b85d42a3dd9f80840364526648233ffb6770\"" Sep 5 14:30:55.533597 containerd[1548]: time="2024-09-05T14:30:55.533550734Z" level=info msg="StartContainer for \"e8f4ffa019ddb065fe14987e7357b85d42a3dd9f80840364526648233ffb6770\"" Sep 5 14:30:55.554457 systemd[1]: Started cri-containerd-e8f4ffa019ddb065fe14987e7357b85d42a3dd9f80840364526648233ffb6770.scope - libcontainer container e8f4ffa019ddb065fe14987e7357b85d42a3dd9f80840364526648233ffb6770. Sep 5 14:30:55.577338 containerd[1548]: time="2024-09-05T14:30:55.577316721Z" level=info msg="StartContainer for \"e8f4ffa019ddb065fe14987e7357b85d42a3dd9f80840364526648233ffb6770\" returns successfully" Sep 5 14:30:56.009598 systemd-networkd[1338]: calia5cb7eb552c: Gained IPv6LL Sep 5 14:30:56.185248 containerd[1548]: time="2024-09-05T14:30:56.185164451Z" level=info msg="StopPodSandbox for \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\"" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.256 [INFO][5131] k8s.go 608: Cleaning up netns ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.256 [INFO][5131] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" iface="eth0" netns="/var/run/netns/cni-283f4023-34f3-8a31-0370-932a87701152" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.257 [INFO][5131] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" iface="eth0" netns="/var/run/netns/cni-283f4023-34f3-8a31-0370-932a87701152" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.257 [INFO][5131] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" iface="eth0" netns="/var/run/netns/cni-283f4023-34f3-8a31-0370-932a87701152" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.257 [INFO][5131] k8s.go 615: Releasing IP address(es) ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.257 [INFO][5131] utils.go 188: Calico CNI releasing IP address ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.273 [INFO][5149] ipam_plugin.go 417: Releasing address using handleID ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.273 [INFO][5149] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.273 [INFO][5149] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.278 [WARNING][5149] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.278 [INFO][5149] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.279 [INFO][5149] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:30:56.281704 containerd[1548]: 2024-09-05 14:30:56.280 [INFO][5131] k8s.go 621: Teardown processing complete. ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:30:56.282094 containerd[1548]: time="2024-09-05T14:30:56.281767949Z" level=info msg="TearDown network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\" successfully" Sep 5 14:30:56.282094 containerd[1548]: time="2024-09-05T14:30:56.281801768Z" level=info msg="StopPodSandbox for \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\" returns successfully" Sep 5 14:30:56.282426 containerd[1548]: time="2024-09-05T14:30:56.282378129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qv5zz,Uid:06f180ad-ca06-48f1-b9ab-6c62014854a5,Namespace:calico-system,Attempt:1,}" Sep 5 14:30:56.305760 systemd[1]: run-netns-cni\x2d283f4023\x2d34f3\x2d8a31\x2d0370\x2d932a87701152.mount: Deactivated successfully. 
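The kubelet pod_startup_latency_tracker entries earlier in this capture report podStartSLOduration values for the two coredns pods, and in both cases the figure matches the watch-observed running time minus the pod's creation timestamp (no image pull is recorded for either pod, so nothing is subtracted for pulling). A quick check of that arithmetic from the printed timestamps alone, truncating kubelet's nanoseconds to the microseconds that datetime can parse:

from datetime import datetime, timezone

def ts(s):
    # keep "YYYY-mm-dd HH:MM:SS.ffffff" and drop the extra nanosecond digits
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created = datetime(2024, 9, 5, 14, 30, 25, tzinfo=timezone.utc)  # creation time of both coredns pods

# coredns-5dd5756b68-c9mcx: reported podStartSLOduration=27.343409611
print((ts("2024-09-05 14:30:52.343409611") - created).total_seconds())  # 27.343409

# coredns-5dd5756b68-q7gtn: reported podStartSLOduration=30.330077941
print((ts("2024-09-05 14:30:55.330077941") - created).total_seconds())  # 30.330077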
Sep 5 14:30:56.330356 kubelet[2893]: I0905 14:30:56.330328 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57555fbd67-8z6w4" podStartSLOduration=24.153433702 podCreationTimestamp="2024-09-05 14:30:30 +0000 UTC" firstStartedPulling="2024-09-05 14:30:53.349439662 +0000 UTC m=+41.211598302" lastFinishedPulling="2024-09-05 14:30:55.526297826 +0000 UTC m=+43.388456465" observedRunningTime="2024-09-05 14:30:56.330193861 +0000 UTC m=+44.192352500" watchObservedRunningTime="2024-09-05 14:30:56.330291865 +0000 UTC m=+44.192450503" Sep 5 14:30:56.332821 systemd-networkd[1338]: cali6645ca40f4c: Link UP Sep 5 14:30:56.332961 systemd-networkd[1338]: cali6645ca40f4c: Gained carrier Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.301 [INFO][5166] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0 csi-node-driver- calico-system 06f180ad-ca06-48f1-b9ab-6c62014854a5 750 0 2024-09-05 14:30:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054.1.0-a-f4c57b7dbd csi-node-driver-qv5zz eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali6645ca40f4c [] []}} ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.301 [INFO][5166] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.314 [INFO][5186] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" HandleID="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.319 [INFO][5186] ipam_plugin.go 270: Auto assigning IP ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" HandleID="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000219220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054.1.0-a-f4c57b7dbd", "pod":"csi-node-driver-qv5zz", "timestamp":"2024-09-05 14:30:56.31473513 +0000 UTC"}, Hostname:"ci-4054.1.0-a-f4c57b7dbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.319 [INFO][5186] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.319 [INFO][5186] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.319 [INFO][5186] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-f4c57b7dbd' Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.320 [INFO][5186] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.322 [INFO][5186] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.324 [INFO][5186] ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.325 [INFO][5186] ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.326 [INFO][5186] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.326 [INFO][5186] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.327 [INFO][5186] ipam.go 1685: Creating new handle: k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.328 [INFO][5186] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.331 [INFO][5186] ipam.go 1216: Successfully claimed IPs: [192.168.16.68/26] block=192.168.16.64/26 handle="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.331 [INFO][5186] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.68/26] handle="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.331 [INFO][5186] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 5 14:30:56.337380 containerd[1548]: 2024-09-05 14:30:56.331 [INFO][5186] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.16.68/26] IPv6=[] ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" HandleID="k8s-pod-network.bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.337766 containerd[1548]: 2024-09-05 14:30:56.331 [INFO][5166] k8s.go 386: Populated endpoint ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06f180ad-ca06-48f1-b9ab-6c62014854a5", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"", Pod:"csi-node-driver-qv5zz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6645ca40f4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:56.337766 containerd[1548]: 2024-09-05 14:30:56.332 [INFO][5166] k8s.go 387: Calico CNI using IPs: [192.168.16.68/32] ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.337766 containerd[1548]: 2024-09-05 14:30:56.332 [INFO][5166] dataplane_linux.go 68: Setting the host side veth name to cali6645ca40f4c ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.337766 containerd[1548]: 2024-09-05 14:30:56.332 [INFO][5166] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.337766 containerd[1548]: 2024-09-05 14:30:56.333 [INFO][5166] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" 
WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06f180ad-ca06-48f1-b9ab-6c62014854a5", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b", Pod:"csi-node-driver-qv5zz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6645ca40f4c", MAC:"3e:fd:98:f3:dd:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:30:56.337766 containerd[1548]: 2024-09-05 14:30:56.336 [INFO][5166] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b" Namespace="calico-system" Pod="csi-node-driver-qv5zz" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:30:56.346486 containerd[1548]: time="2024-09-05T14:30:56.346412093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:30:56.346486 containerd[1548]: time="2024-09-05T14:30:56.346442070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:30:56.346486 containerd[1548]: time="2024-09-05T14:30:56.346448957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:56.346595 containerd[1548]: time="2024-09-05T14:30:56.346520331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:30:56.370571 systemd[1]: Started cri-containerd-bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b.scope - libcontainer container bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b. 
Sep 5 14:30:56.382425 containerd[1548]: time="2024-09-05T14:30:56.382372286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qv5zz,Uid:06f180ad-ca06-48f1-b9ab-6c62014854a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b\"" Sep 5 14:30:56.383247 containerd[1548]: time="2024-09-05T14:30:56.383197851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 5 14:30:57.329226 kubelet[2893]: I0905 14:30:57.329209 2893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 14:30:57.672594 systemd-networkd[1338]: cali6645ca40f4c: Gained IPv6LL Sep 5 14:30:58.247803 containerd[1548]: time="2024-09-05T14:30:58.247776391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:58.248048 containerd[1548]: time="2024-09-05T14:30:58.248023744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 5 14:30:58.248383 containerd[1548]: time="2024-09-05T14:30:58.248371102Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:58.249243 containerd[1548]: time="2024-09-05T14:30:58.249232729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:30:58.249666 containerd[1548]: time="2024-09-05T14:30:58.249651975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.866430905s" Sep 5 14:30:58.249691 containerd[1548]: time="2024-09-05T14:30:58.249669309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 5 14:30:58.250578 containerd[1548]: time="2024-09-05T14:30:58.250537790Z" level=info msg="CreateContainer within sandbox \"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 14:30:58.255820 containerd[1548]: time="2024-09-05T14:30:58.255799967Z" level=info msg="CreateContainer within sandbox \"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b9a96d3416b9965e01385ce601c88c4269977350a3a5da0352b9eec23937b3e5\"" Sep 5 14:30:58.256054 containerd[1548]: time="2024-09-05T14:30:58.256039645Z" level=info msg="StartContainer for \"b9a96d3416b9965e01385ce601c88c4269977350a3a5da0352b9eec23937b3e5\"" Sep 5 14:30:58.279560 systemd[1]: Started cri-containerd-b9a96d3416b9965e01385ce601c88c4269977350a3a5da0352b9eec23937b3e5.scope - libcontainer container b9a96d3416b9965e01385ce601c88c4269977350a3a5da0352b9eec23937b3e5. 
Sep 5 14:30:58.292218 containerd[1548]: time="2024-09-05T14:30:58.292196904Z" level=info msg="StartContainer for \"b9a96d3416b9965e01385ce601c88c4269977350a3a5da0352b9eec23937b3e5\" returns successfully" Sep 5 14:30:58.292843 containerd[1548]: time="2024-09-05T14:30:58.292805418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 5 14:31:00.005826 containerd[1548]: time="2024-09-05T14:31:00.005772673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:31:00.006040 containerd[1548]: time="2024-09-05T14:31:00.005996157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 5 14:31:00.006387 containerd[1548]: time="2024-09-05T14:31:00.006306897Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:31:00.007291 containerd[1548]: time="2024-09-05T14:31:00.007242446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:31:00.007739 containerd[1548]: time="2024-09-05T14:31:00.007695888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.714870571s" Sep 5 14:31:00.007739 containerd[1548]: time="2024-09-05T14:31:00.007714964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 5 14:31:00.008742 containerd[1548]: time="2024-09-05T14:31:00.008728528Z" level=info msg="CreateContainer within sandbox \"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 14:31:00.014136 containerd[1548]: time="2024-09-05T14:31:00.014115820Z" level=info msg="CreateContainer within sandbox \"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5f7cc325439597933088521b6cd0c0da88c00621d0a3a6fda1c70d8ce7d55689\"" Sep 5 14:31:00.014385 containerd[1548]: time="2024-09-05T14:31:00.014374340Z" level=info msg="StartContainer for \"5f7cc325439597933088521b6cd0c0da88c00621d0a3a6fda1c70d8ce7d55689\"" Sep 5 14:31:00.044593 systemd[1]: Started cri-containerd-5f7cc325439597933088521b6cd0c0da88c00621d0a3a6fda1c70d8ce7d55689.scope - libcontainer container 5f7cc325439597933088521b6cd0c0da88c00621d0a3a6fda1c70d8ce7d55689. 
Sep 5 14:31:00.075493 containerd[1548]: time="2024-09-05T14:31:00.075457908Z" level=info msg="StartContainer for \"5f7cc325439597933088521b6cd0c0da88c00621d0a3a6fda1c70d8ce7d55689\" returns successfully" Sep 5 14:31:00.234027 kubelet[2893]: I0905 14:31:00.233937 2893 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 14:31:00.234908 kubelet[2893]: I0905 14:31:00.234045 2893 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 14:31:00.364037 kubelet[2893]: I0905 14:31:00.363837 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-qv5zz" podStartSLOduration=26.738831038 podCreationTimestamp="2024-09-05 14:30:30 +0000 UTC" firstStartedPulling="2024-09-05 14:30:56.383038614 +0000 UTC m=+44.245197257" lastFinishedPulling="2024-09-05 14:31:00.007907766 +0000 UTC m=+47.870066405" observedRunningTime="2024-09-05 14:31:00.362160694 +0000 UTC m=+48.224319433" watchObservedRunningTime="2024-09-05 14:31:00.363700186 +0000 UTC m=+48.225858879" Sep 5 14:31:12.182056 containerd[1548]: time="2024-09-05T14:31:12.181840375Z" level=info msg="StopPodSandbox for \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\"" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.246 [WARNING][5445] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06f180ad-ca06-48f1-b9ab-6c62014854a5", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b", Pod:"csi-node-driver-qv5zz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6645ca40f4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.246 [INFO][5445] k8s.go 608: Cleaning up netns ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.246 [INFO][5445] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" iface="eth0" netns="" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.246 [INFO][5445] k8s.go 615: Releasing IP address(es) ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.246 [INFO][5445] utils.go 188: Calico CNI releasing IP address ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.266 [INFO][5464] ipam_plugin.go 417: Releasing address using handleID ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.267 [INFO][5464] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.267 [INFO][5464] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.271 [WARNING][5464] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.272 [INFO][5464] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.273 [INFO][5464] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.275230 containerd[1548]: 2024-09-05 14:31:12.274 [INFO][5445] k8s.go 621: Teardown processing complete. ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.275230 containerd[1548]: time="2024-09-05T14:31:12.275227671Z" level=info msg="TearDown network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\" successfully" Sep 5 14:31:12.275713 containerd[1548]: time="2024-09-05T14:31:12.275248329Z" level=info msg="StopPodSandbox for \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\" returns successfully" Sep 5 14:31:12.275752 containerd[1548]: time="2024-09-05T14:31:12.275733823Z" level=info msg="RemovePodSandbox for \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\"" Sep 5 14:31:12.275782 containerd[1548]: time="2024-09-05T14:31:12.275761960Z" level=info msg="Forcibly stopping sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\"" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.302 [WARNING][5492] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06f180ad-ca06-48f1-b9ab-6c62014854a5", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"bae1334f73a55556b23a2dd599a7542a64506ca85ce5776d32cc8ff7c5e9b35b", Pod:"csi-node-driver-qv5zz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali6645ca40f4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.302 [INFO][5492] k8s.go 608: Cleaning up netns ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.302 [INFO][5492] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" iface="eth0" netns="" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.302 [INFO][5492] k8s.go 615: Releasing IP address(es) ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.302 [INFO][5492] utils.go 188: Calico CNI releasing IP address ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.318 [INFO][5509] ipam_plugin.go 417: Releasing address using handleID ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.318 [INFO][5509] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.318 [INFO][5509] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.324 [WARNING][5509] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.324 [INFO][5509] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" HandleID="k8s-pod-network.6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-csi--node--driver--qv5zz-eth0" Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.325 [INFO][5509] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.327390 containerd[1548]: 2024-09-05 14:31:12.326 [INFO][5492] k8s.go 621: Teardown processing complete. ContainerID="6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253" Sep 5 14:31:12.327878 containerd[1548]: time="2024-09-05T14:31:12.327394143Z" level=info msg="TearDown network for sandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\" successfully" Sep 5 14:31:12.329106 containerd[1548]: time="2024-09-05T14:31:12.329092974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 14:31:12.329144 containerd[1548]: time="2024-09-05T14:31:12.329134041Z" level=info msg="RemovePodSandbox \"6b8c2a82e5d55a8ada1e65867e565913fa69860ed031cf891557a2b435797253\" returns successfully" Sep 5 14:31:12.329360 containerd[1548]: time="2024-09-05T14:31:12.329348962Z" level=info msg="StopPodSandbox for \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\"" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.347 [WARNING][5541] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7663138d-98ce-474d-8150-2dfcb2cc2b36", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5", Pod:"coredns-5dd5756b68-q7gtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5cb7eb552c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.347 [INFO][5541] k8s.go 608: Cleaning up netns ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.347 [INFO][5541] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" iface="eth0" netns="" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.347 [INFO][5541] k8s.go 615: Releasing IP address(es) ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.347 [INFO][5541] utils.go 188: Calico CNI releasing IP address ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.357 [INFO][5558] ipam_plugin.go 417: Releasing address using handleID ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.357 [INFO][5558] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.357 [INFO][5558] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.360 [WARNING][5558] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.361 [INFO][5558] ipam_plugin.go 445: Releasing address using workloadID ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.362 [INFO][5558] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.363416 containerd[1548]: 2024-09-05 14:31:12.362 [INFO][5541] k8s.go 621: Teardown processing complete. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.363416 containerd[1548]: time="2024-09-05T14:31:12.363405087Z" level=info msg="TearDown network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\" successfully" Sep 5 14:31:12.363416 containerd[1548]: time="2024-09-05T14:31:12.363420152Z" level=info msg="StopPodSandbox for \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\" returns successfully" Sep 5 14:31:12.363745 containerd[1548]: time="2024-09-05T14:31:12.363689963Z" level=info msg="RemovePodSandbox for \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\"" Sep 5 14:31:12.363745 containerd[1548]: time="2024-09-05T14:31:12.363708806Z" level=info msg="Forcibly stopping sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\"" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.381 [WARNING][5586] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7663138d-98ce-474d-8150-2dfcb2cc2b36", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"094b4ea2a5d2c0a5bf9e388651fc7dfa78964811a963f7bad63f48091309a2f5", Pod:"coredns-5dd5756b68-q7gtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5cb7eb552c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.381 [INFO][5586] k8s.go 608: Cleaning up netns ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.381 [INFO][5586] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" iface="eth0" netns="" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.381 [INFO][5586] k8s.go 615: Releasing IP address(es) ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.381 [INFO][5586] utils.go 188: Calico CNI releasing IP address ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.391 [INFO][5599] ipam_plugin.go 417: Releasing address using handleID ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.391 [INFO][5599] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.391 [INFO][5599] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.395 [WARNING][5599] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.395 [INFO][5599] ipam_plugin.go 445: Releasing address using workloadID ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" HandleID="k8s-pod-network.916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--q7gtn-eth0" Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.396 [INFO][5599] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.397606 containerd[1548]: 2024-09-05 14:31:12.396 [INFO][5586] k8s.go 621: Teardown processing complete. ContainerID="916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce" Sep 5 14:31:12.397606 containerd[1548]: time="2024-09-05T14:31:12.397593963Z" level=info msg="TearDown network for sandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\" successfully" Sep 5 14:31:12.398812 containerd[1548]: time="2024-09-05T14:31:12.398799713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 14:31:12.398844 containerd[1548]: time="2024-09-05T14:31:12.398826013Z" level=info msg="RemovePodSandbox \"916ffd558970a1e81557bd1e3549b87f91beeef5797eed95e3602c00955ed1ce\" returns successfully" Sep 5 14:31:12.399071 containerd[1548]: time="2024-09-05T14:31:12.399059777Z" level=info msg="StopPodSandbox for \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\"" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.417 [WARNING][5627] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0", GenerateName:"calico-kube-controllers-57555fbd67-", Namespace:"calico-system", SelfLink:"", UID:"f22fbe09-b2ac-45e8-af2d-e6cf0981b234", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57555fbd67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5", Pod:"calico-kube-controllers-57555fbd67-8z6w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf9e171a432", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.417 [INFO][5627] k8s.go 608: Cleaning up netns ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.417 [INFO][5627] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" iface="eth0" netns="" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.417 [INFO][5627] k8s.go 615: Releasing IP address(es) ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.417 [INFO][5627] utils.go 188: Calico CNI releasing IP address ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.428 [INFO][5641] ipam_plugin.go 417: Releasing address using handleID ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.428 [INFO][5641] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.428 [INFO][5641] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.432 [WARNING][5641] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.432 [INFO][5641] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.433 [INFO][5641] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.434775 containerd[1548]: 2024-09-05 14:31:12.434 [INFO][5627] k8s.go 621: Teardown processing complete. ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.434775 containerd[1548]: time="2024-09-05T14:31:12.434724764Z" level=info msg="TearDown network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\" successfully" Sep 5 14:31:12.434775 containerd[1548]: time="2024-09-05T14:31:12.434741733Z" level=info msg="StopPodSandbox for \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\" returns successfully" Sep 5 14:31:12.435324 containerd[1548]: time="2024-09-05T14:31:12.435014414Z" level=info msg="RemovePodSandbox for \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\"" Sep 5 14:31:12.435324 containerd[1548]: time="2024-09-05T14:31:12.435031887Z" level=info msg="Forcibly stopping sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\"" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.453 [WARNING][5666] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0", GenerateName:"calico-kube-controllers-57555fbd67-", Namespace:"calico-system", SelfLink:"", UID:"f22fbe09-b2ac-45e8-af2d-e6cf0981b234", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57555fbd67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"f7b434e2b722bb1914c1d0fd66be04e25851e6b623af37252b949f3796ae67d5", Pod:"calico-kube-controllers-57555fbd67-8z6w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf9e171a432", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.454 [INFO][5666] k8s.go 608: Cleaning up netns ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.454 [INFO][5666] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" iface="eth0" netns="" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.454 [INFO][5666] k8s.go 615: Releasing IP address(es) ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.454 [INFO][5666] utils.go 188: Calico CNI releasing IP address ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.464 [INFO][5682] ipam_plugin.go 417: Releasing address using handleID ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.464 [INFO][5682] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.464 [INFO][5682] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.468 [WARNING][5682] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.468 [INFO][5682] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" HandleID="k8s-pod-network.5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--kube--controllers--57555fbd67--8z6w4-eth0" Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.469 [INFO][5682] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.470577 containerd[1548]: 2024-09-05 14:31:12.469 [INFO][5666] k8s.go 621: Teardown processing complete. ContainerID="5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8" Sep 5 14:31:12.470888 containerd[1548]: time="2024-09-05T14:31:12.470601874Z" level=info msg="TearDown network for sandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\" successfully" Sep 5 14:31:12.474090 containerd[1548]: time="2024-09-05T14:31:12.474047961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 14:31:12.474090 containerd[1548]: time="2024-09-05T14:31:12.474075702Z" level=info msg="RemovePodSandbox \"5f173ea182c6541723d6d8a2ed2f6037b3a1e96682ee16f77cfb2d70304a29d8\" returns successfully" Sep 5 14:31:12.474382 containerd[1548]: time="2024-09-05T14:31:12.474370343Z" level=info msg="StopPodSandbox for \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\"" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.493 [WARNING][5711] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f2341112-c29f-4ee3-9ea4-9958e7ec9922", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137", Pod:"coredns-5dd5756b68-c9mcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39d77e5b83d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.493 [INFO][5711] k8s.go 608: Cleaning up netns ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.493 [INFO][5711] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" iface="eth0" netns="" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.493 [INFO][5711] k8s.go 615: Releasing IP address(es) ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.493 [INFO][5711] utils.go 188: Calico CNI releasing IP address ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.503 [INFO][5725] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.503 [INFO][5725] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.503 [INFO][5725] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.506 [WARNING][5725] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.507 [INFO][5725] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.508 [INFO][5725] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.509419 containerd[1548]: 2024-09-05 14:31:12.508 [INFO][5711] k8s.go 621: Teardown processing complete. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.509419 containerd[1548]: time="2024-09-05T14:31:12.509415415Z" level=info msg="TearDown network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\" successfully" Sep 5 14:31:12.509739 containerd[1548]: time="2024-09-05T14:31:12.509431222Z" level=info msg="StopPodSandbox for \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\" returns successfully" Sep 5 14:31:12.509739 containerd[1548]: time="2024-09-05T14:31:12.509701685Z" level=info msg="RemovePodSandbox for \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\"" Sep 5 14:31:12.509739 containerd[1548]: time="2024-09-05T14:31:12.509716196Z" level=info msg="Forcibly stopping sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\"" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.527 [WARNING][5754] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f2341112-c29f-4ee3-9ea4-9958e7ec9922", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"44784d0f89c848efe1d7e999bb96d97eea81cd37c92a751866edaa567c642137", Pod:"coredns-5dd5756b68-c9mcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39d77e5b83d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.527 [INFO][5754] k8s.go 608: Cleaning up netns ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.527 [INFO][5754] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" iface="eth0" netns="" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.527 [INFO][5754] k8s.go 615: Releasing IP address(es) ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.527 [INFO][5754] utils.go 188: Calico CNI releasing IP address ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.538 [INFO][5767] ipam_plugin.go 417: Releasing address using handleID ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.538 [INFO][5767] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.538 [INFO][5767] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.542 [WARNING][5767] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.543 [INFO][5767] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" HandleID="k8s-pod-network.4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-coredns--5dd5756b68--c9mcx-eth0" Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.544 [INFO][5767] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 5 14:31:12.545628 containerd[1548]: 2024-09-05 14:31:12.544 [INFO][5754] k8s.go 621: Teardown processing complete. ContainerID="4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd" Sep 5 14:31:12.545975 containerd[1548]: time="2024-09-05T14:31:12.545627493Z" level=info msg="TearDown network for sandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\" successfully" Sep 5 14:31:12.547092 containerd[1548]: time="2024-09-05T14:31:12.547051057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 14:31:12.547092 containerd[1548]: time="2024-09-05T14:31:12.547083995Z" level=info msg="RemovePodSandbox \"4d9de12b58c2b4ec3122b86f80ead5c66a69463eaa08fed9b7c725833d8dd8dd\" returns successfully" Sep 5 14:31:19.765150 kubelet[2893]: I0905 14:31:19.765115 2893 topology_manager.go:215] "Topology Admit Handler" podUID="9a12ad50-ef19-4461-9d3a-7a61a30bc016" podNamespace="calico-apiserver" podName="calico-apiserver-7bb94bdcff-vvwg8" Sep 5 14:31:19.768234 kubelet[2893]: I0905 14:31:19.768213 2893 topology_manager.go:215] "Topology Admit Handler" podUID="9cc75bf4-5811-404b-98a1-3a2f0eb72f87" podNamespace="calico-apiserver" podName="calico-apiserver-7bb94bdcff-glkh5" Sep 5 14:31:19.770959 systemd[1]: Created slice kubepods-besteffort-pod9a12ad50_ef19_4461_9d3a_7a61a30bc016.slice - libcontainer container kubepods-besteffort-pod9a12ad50_ef19_4461_9d3a_7a61a30bc016.slice. Sep 5 14:31:19.774847 systemd[1]: Created slice kubepods-besteffort-pod9cc75bf4_5811_404b_98a1_3a2f0eb72f87.slice - libcontainer container kubepods-besteffort-pod9cc75bf4_5811_404b_98a1_3a2f0eb72f87.slice. 
Sep 5 14:31:19.869329 kubelet[2893]: I0905 14:31:19.869211 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6t75\" (UniqueName: \"kubernetes.io/projected/9a12ad50-ef19-4461-9d3a-7a61a30bc016-kube-api-access-n6t75\") pod \"calico-apiserver-7bb94bdcff-vvwg8\" (UID: \"9a12ad50-ef19-4461-9d3a-7a61a30bc016\") " pod="calico-apiserver/calico-apiserver-7bb94bdcff-vvwg8" Sep 5 14:31:19.869590 kubelet[2893]: I0905 14:31:19.869374 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a12ad50-ef19-4461-9d3a-7a61a30bc016-calico-apiserver-certs\") pod \"calico-apiserver-7bb94bdcff-vvwg8\" (UID: \"9a12ad50-ef19-4461-9d3a-7a61a30bc016\") " pod="calico-apiserver/calico-apiserver-7bb94bdcff-vvwg8" Sep 5 14:31:19.970139 kubelet[2893]: I0905 14:31:19.970077 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgkx7\" (UniqueName: \"kubernetes.io/projected/9cc75bf4-5811-404b-98a1-3a2f0eb72f87-kube-api-access-zgkx7\") pod \"calico-apiserver-7bb94bdcff-glkh5\" (UID: \"9cc75bf4-5811-404b-98a1-3a2f0eb72f87\") " pod="calico-apiserver/calico-apiserver-7bb94bdcff-glkh5" Sep 5 14:31:19.970605 kubelet[2893]: I0905 14:31:19.970548 2893 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9cc75bf4-5811-404b-98a1-3a2f0eb72f87-calico-apiserver-certs\") pod \"calico-apiserver-7bb94bdcff-glkh5\" (UID: \"9cc75bf4-5811-404b-98a1-3a2f0eb72f87\") " pod="calico-apiserver/calico-apiserver-7bb94bdcff-glkh5" Sep 5 14:31:19.970868 kubelet[2893]: E0905 14:31:19.970800 2893 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 5 14:31:19.971107 kubelet[2893]: E0905 14:31:19.971065 2893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a12ad50-ef19-4461-9d3a-7a61a30bc016-calico-apiserver-certs podName:9a12ad50-ef19-4461-9d3a-7a61a30bc016 nodeName:}" failed. No retries permitted until 2024-09-05 14:31:20.470954204 +0000 UTC m=+68.333112928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9a12ad50-ef19-4461-9d3a-7a61a30bc016-calico-apiserver-certs") pod "calico-apiserver-7bb94bdcff-vvwg8" (UID: "9a12ad50-ef19-4461-9d3a-7a61a30bc016") : secret "calico-apiserver-certs" not found Sep 5 14:31:20.071444 kubelet[2893]: E0905 14:31:20.071165 2893 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 5 14:31:20.071444 kubelet[2893]: E0905 14:31:20.071344 2893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc75bf4-5811-404b-98a1-3a2f0eb72f87-calico-apiserver-certs podName:9cc75bf4-5811-404b-98a1-3a2f0eb72f87 nodeName:}" failed. No retries permitted until 2024-09-05 14:31:20.57127774 +0000 UTC m=+68.433436448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9cc75bf4-5811-404b-98a1-3a2f0eb72f87-calico-apiserver-certs") pod "calico-apiserver-7bb94bdcff-glkh5" (UID: "9cc75bf4-5811-404b-98a1-3a2f0eb72f87") : secret "calico-apiserver-certs" not found Sep 5 14:31:20.674473 containerd[1548]: time="2024-09-05T14:31:20.674342056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb94bdcff-vvwg8,Uid:9a12ad50-ef19-4461-9d3a-7a61a30bc016,Namespace:calico-apiserver,Attempt:0,}" Sep 5 14:31:20.678726 containerd[1548]: time="2024-09-05T14:31:20.678638873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb94bdcff-glkh5,Uid:9cc75bf4-5811-404b-98a1-3a2f0eb72f87,Namespace:calico-apiserver,Attempt:0,}" Sep 5 14:31:20.823794 systemd-networkd[1338]: caliaed0e551d28: Link UP Sep 5 14:31:20.823901 systemd-networkd[1338]: caliaed0e551d28: Gained carrier Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.790 [INFO][5823] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0 calico-apiserver-7bb94bdcff- calico-apiserver 9cc75bf4-5811-404b-98a1-3a2f0eb72f87 865 0 2024-09-05 14:31:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb94bdcff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-f4c57b7dbd calico-apiserver-7bb94bdcff-glkh5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaed0e551d28 [] []}} ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.790 [INFO][5823] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.804 [INFO][5868] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" HandleID="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.809 [INFO][5868] ipam_plugin.go 270: Auto assigning IP ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" HandleID="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-f4c57b7dbd", "pod":"calico-apiserver-7bb94bdcff-glkh5", "timestamp":"2024-09-05 14:31:20.804506408 +0000 UTC"}, Hostname:"ci-4054.1.0-a-f4c57b7dbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.809 [INFO][5868] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.809 [INFO][5868] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.809 [INFO][5868] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-f4c57b7dbd' Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.810 [INFO][5868] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.812 [INFO][5868] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.814 [INFO][5868] ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.815 [INFO][5868] ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.817 [INFO][5868] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.817 [INFO][5868] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.817 [INFO][5868] ipam.go 1685: Creating new handle: k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7 Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.819 [INFO][5868] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5868] ipam.go 1216: Successfully claimed IPs: [192.168.16.69/26] block=192.168.16.64/26 handle="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5868] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.69/26] handle="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5868] ipam_plugin.go 379: Released host-wide IPAM lock. 
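Annotation: stepping back to the MountVolume.SetUp errors a few entries above, the calico-apiserver-certs secret simply does not exist yet when the two pods are admitted, so kubelet records the failure and schedules the next attempt durationBeforeRetry=500ms later ("No retries permitted until 2024-09-05 14:31:20.470954204"); the m=+68.3 suffix is Go's monotonic-clock offset carried in the time value. A small sketch of that arithmetic, assuming the timestamp format kubelet prints:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	// "No retries permitted until ..." timestamp copied from the kubelet entry above.
    	retryAt, err := time.Parse(layout, "2024-09-05 14:31:20.470954204 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	durationBeforeRetry := 500 * time.Millisecond
    	// Recover the instant the failure was recorded, 500ms before the retry deadline.
    	fmt.Println("failure recorded at:", retryAt.Add(-durationBeforeRetry).Format(layout))
    }

The recovered instant (14:31:19.970954204) sits between the two E0905 entries above, as expected.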
Sep 5 14:31:20.827847 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5868] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.16.69/26] IPv6=[] ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" HandleID="k8s-pod-network.9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" Sep 5 14:31:20.828295 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5823] k8s.go 386: Populated endpoint ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0", GenerateName:"calico-apiserver-7bb94bdcff-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cc75bf4-5811-404b-98a1-3a2f0eb72f87", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb94bdcff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"", Pod:"calico-apiserver-7bb94bdcff-glkh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaed0e551d28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:20.828295 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5823] k8s.go 387: Calico CNI using IPs: [192.168.16.69/32] ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" Sep 5 14:31:20.828295 containerd[1548]: 2024-09-05 14:31:20.823 [INFO][5823] dataplane_linux.go 68: Setting the host side veth name to caliaed0e551d28 ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" Sep 5 14:31:20.828295 containerd[1548]: 2024-09-05 14:31:20.823 [INFO][5823] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" Sep 5 14:31:20.828295 containerd[1548]: 2024-09-05 14:31:20.823 [INFO][5823] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0", GenerateName:"calico-apiserver-7bb94bdcff-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cc75bf4-5811-404b-98a1-3a2f0eb72f87", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb94bdcff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7", Pod:"calico-apiserver-7bb94bdcff-glkh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaed0e551d28", MAC:"82:3c:14:4d:1e:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:20.828295 containerd[1548]: 2024-09-05 14:31:20.827 [INFO][5823] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-glkh5" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--glkh5-eth0" Sep 5 14:31:20.837379 containerd[1548]: time="2024-09-05T14:31:20.837330299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:31:20.837589 containerd[1548]: time="2024-09-05T14:31:20.837570346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:31:20.837633 containerd[1548]: time="2024-09-05T14:31:20.837590888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:31:20.837675 containerd[1548]: time="2024-09-05T14:31:20.837656778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:31:20.839083 systemd-networkd[1338]: cali8042dd6546e: Link UP Sep 5 14:31:20.839190 systemd-networkd[1338]: cali8042dd6546e: Gained carrier Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.790 [INFO][5821] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0 calico-apiserver-7bb94bdcff- calico-apiserver 9a12ad50-ef19-4461-9d3a-7a61a30bc016 862 0 2024-09-05 14:31:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb94bdcff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054.1.0-a-f4c57b7dbd calico-apiserver-7bb94bdcff-vvwg8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8042dd6546e [] []}} ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.790 [INFO][5821] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.804 [INFO][5869] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" HandleID="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.810 [INFO][5869] ipam_plugin.go 270: Auto assigning IP ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" HandleID="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000326360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054.1.0-a-f4c57b7dbd", "pod":"calico-apiserver-7bb94bdcff-vvwg8", "timestamp":"2024-09-05 14:31:20.804854568 +0000 UTC"}, Hostname:"ci-4054.1.0-a-f4c57b7dbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.810 [INFO][5869] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5869] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
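Annotation: the two CNI ADDs run concurrently ([5868] for glkh5, [5869] for vvwg8), but address assignment is serialized by the host-wide IPAM lock: [5869] logs "About to acquire host-wide IPAM lock" at 20.810 and only reports "Acquired" at 20.822, the same instant [5868] releases it. A minimal sketch of that shape, with an in-process mutex standing in for Calico's host-wide lock (the real lock is cross-process and file-backed):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // allocator hands out the next free offset in a block; the mutex stands in for
    // the host-wide IPAM lock seen in the log, which keeps two concurrent CNI ADDs
    // from claiming the same address.
    type allocator struct {
    	mu   sync.Mutex
    	next int
    }

    func (a *allocator) assign(pod string) string {
    	a.mu.Lock() // "Acquired host-wide IPAM lock."
    	defer a.mu.Unlock()
    	ip := fmt.Sprintf("192.168.16.%d", 64+a.next)
    	a.next++
    	return fmt.Sprintf("%s -> %s/26", pod, ip)
    }

    func main() {
    	a := &allocator{next: 5} // .69 is the next free offset in this block per the log
    	var wg sync.WaitGroup
    	for _, pod := range []string{"calico-apiserver-7bb94bdcff-glkh5", "calico-apiserver-7bb94bdcff-vvwg8"} {
    		wg.Add(1)
    		go func(p string) {
    			defer wg.Done()
    			fmt.Println(a.assign(p))
    		}(pod)
    	}
    	wg.Wait()
    }

Whichever goroutine wins the lock gets .69 and the other gets .70; the order is nondeterministic, but the addresses never collide, which is the property the log's lock messages are about.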
Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.822 [INFO][5869] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054.1.0-a-f4c57b7dbd' Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.823 [INFO][5869] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.825 [INFO][5869] ipam.go 372: Looking up existing affinities for host host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.829 [INFO][5869] ipam.go 489: Trying affinity for 192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.830 [INFO][5869] ipam.go 155: Attempting to load block cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.831 [INFO][5869] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.831 [INFO][5869] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.832 [INFO][5869] ipam.go 1685: Creating new handle: k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.834 [INFO][5869] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.837 [INFO][5869] ipam.go 1216: Successfully claimed IPs: [192.168.16.70/26] block=192.168.16.64/26 handle="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.837 [INFO][5869] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.70/26] handle="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" host="ci-4054.1.0-a-f4c57b7dbd" Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.837 [INFO][5869] ipam_plugin.go 379: Released host-wide IPAM lock. 
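Annotation: both new pods draw from the same affine block 192.168.16.64/26 that this node already owns (the coredns endpoint torn down earlier held 192.168.16.65 from it), so .69 and .70 are simply the next free addresses. A quick standard-library check of the block membership and size:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.16.64/26")
    	for _, s := range []string{"192.168.16.65", "192.168.16.69", "192.168.16.70"} {
    		addr := netip.MustParseAddr(s)
    		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(addr))
    	}
    	// A /26 holds 2^(32-26) = 64 addresses, .64 through .127.
    	fmt.Println("addresses in block:", 1<<(32-block.Bits()))
    }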
Sep 5 14:31:20.843799 containerd[1548]: 2024-09-05 14:31:20.837 [INFO][5869] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.16.70/26] IPv6=[] ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" HandleID="k8s-pod-network.eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Workload="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" Sep 5 14:31:20.844242 containerd[1548]: 2024-09-05 14:31:20.838 [INFO][5821] k8s.go 386: Populated endpoint ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0", GenerateName:"calico-apiserver-7bb94bdcff-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a12ad50-ef19-4461-9d3a-7a61a30bc016", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb94bdcff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"", Pod:"calico-apiserver-7bb94bdcff-vvwg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8042dd6546e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:20.844242 containerd[1548]: 2024-09-05 14:31:20.838 [INFO][5821] k8s.go 387: Calico CNI using IPs: [192.168.16.70/32] ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" Sep 5 14:31:20.844242 containerd[1548]: 2024-09-05 14:31:20.838 [INFO][5821] dataplane_linux.go 68: Setting the host side veth name to cali8042dd6546e ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" Sep 5 14:31:20.844242 containerd[1548]: 2024-09-05 14:31:20.839 [INFO][5821] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" Sep 5 14:31:20.844242 containerd[1548]: 2024-09-05 14:31:20.839 [INFO][5821] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0", GenerateName:"calico-apiserver-7bb94bdcff-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a12ad50-ef19-4461-9d3a-7a61a30bc016", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 5, 14, 31, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb94bdcff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054.1.0-a-f4c57b7dbd", ContainerID:"eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd", Pod:"calico-apiserver-7bb94bdcff-vvwg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8042dd6546e", MAC:"8e:d3:9a:b3:d7:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 5 14:31:20.844242 containerd[1548]: 2024-09-05 14:31:20.842 [INFO][5821] k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd" Namespace="calico-apiserver" Pod="calico-apiserver-7bb94bdcff-vvwg8" WorkloadEndpoint="ci--4054.1.0--a--f4c57b7dbd-k8s-calico--apiserver--7bb94bdcff--vvwg8-eth0" Sep 5 14:31:20.853146 containerd[1548]: time="2024-09-05T14:31:20.853107179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 14:31:20.853146 containerd[1548]: time="2024-09-05T14:31:20.853132664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 14:31:20.853146 containerd[1548]: time="2024-09-05T14:31:20.853139589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:31:20.853259 containerd[1548]: time="2024-09-05T14:31:20.853181951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 14:31:20.859455 systemd[1]: Started cri-containerd-9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7.scope - libcontainer container 9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7. Sep 5 14:31:20.861138 systemd[1]: Started cri-containerd-eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd.scope - libcontainer container eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd. 
Sep 5 14:31:20.881645 containerd[1548]: time="2024-09-05T14:31:20.881621975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb94bdcff-glkh5,Uid:9cc75bf4-5811-404b-98a1-3a2f0eb72f87,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7\"" Sep 5 14:31:20.882311 containerd[1548]: time="2024-09-05T14:31:20.882297132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 5 14:31:20.883008 containerd[1548]: time="2024-09-05T14:31:20.882990705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb94bdcff-vvwg8,Uid:9a12ad50-ef19-4461-9d3a-7a61a30bc016,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd\"" Sep 5 14:31:22.120582 systemd-networkd[1338]: caliaed0e551d28: Gained IPv6LL Sep 5 14:31:22.184528 systemd-networkd[1338]: cali8042dd6546e: Gained IPv6LL Sep 5 14:31:23.156131 containerd[1548]: time="2024-09-05T14:31:23.156076861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:31:23.156336 containerd[1548]: time="2024-09-05T14:31:23.156277155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 5 14:31:23.156650 containerd[1548]: time="2024-09-05T14:31:23.156609355Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:31:23.157809 containerd[1548]: time="2024-09-05T14:31:23.157769755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:31:23.158246 containerd[1548]: time="2024-09-05T14:31:23.158203865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.275888507s" Sep 5 14:31:23.158246 containerd[1548]: time="2024-09-05T14:31:23.158219587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 5 14:31:23.158568 containerd[1548]: time="2024-09-05T14:31:23.158521994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 5 14:31:23.159158 containerd[1548]: time="2024-09-05T14:31:23.159111078Z" level=info msg="CreateContainer within sandbox \"9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 14:31:23.163068 containerd[1548]: time="2024-09-05T14:31:23.163027264Z" level=info msg="CreateContainer within sandbox \"9a7de6c6a09a3656177713f02ffd73374b59e07fb347c38ae596ff3bb9974cc7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c35d8bbdff2ff1f4dd9ed7dee188e2b75a522426886eb419cee1e0e6aa14810\"" Sep 5 14:31:23.163227 containerd[1548]: time="2024-09-05T14:31:23.163216286Z" level=info msg="StartContainer for 
\"9c35d8bbdff2ff1f4dd9ed7dee188e2b75a522426886eb419cee1e0e6aa14810\"" Sep 5 14:31:23.189447 systemd[1]: Started cri-containerd-9c35d8bbdff2ff1f4dd9ed7dee188e2b75a522426886eb419cee1e0e6aa14810.scope - libcontainer container 9c35d8bbdff2ff1f4dd9ed7dee188e2b75a522426886eb419cee1e0e6aa14810. Sep 5 14:31:23.213010 containerd[1548]: time="2024-09-05T14:31:23.212984373Z" level=info msg="StartContainer for \"9c35d8bbdff2ff1f4dd9ed7dee188e2b75a522426886eb419cee1e0e6aa14810\" returns successfully" Sep 5 14:31:23.433696 kubelet[2893]: I0905 14:31:23.433618 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bb94bdcff-glkh5" podStartSLOduration=2.157327357 podCreationTimestamp="2024-09-05 14:31:19 +0000 UTC" firstStartedPulling="2024-09-05 14:31:20.882153122 +0000 UTC m=+68.744311760" lastFinishedPulling="2024-09-05 14:31:23.158417112 +0000 UTC m=+71.020575751" observedRunningTime="2024-09-05 14:31:23.433178795 +0000 UTC m=+71.295337434" watchObservedRunningTime="2024-09-05 14:31:23.433591348 +0000 UTC m=+71.295749984" Sep 5 14:31:23.545267 containerd[1548]: time="2024-09-05T14:31:23.545241812Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 14:31:23.545546 containerd[1548]: time="2024-09-05T14:31:23.545478644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 5 14:31:23.546857 containerd[1548]: time="2024-09-05T14:31:23.546840307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 388.299209ms" Sep 5 14:31:23.546889 containerd[1548]: time="2024-09-05T14:31:23.546860020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 5 14:31:23.547768 containerd[1548]: time="2024-09-05T14:31:23.547756348Z" level=info msg="CreateContainer within sandbox \"eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 14:31:23.552245 containerd[1548]: time="2024-09-05T14:31:23.552195085Z" level=info msg="CreateContainer within sandbox \"eb1b9a8f6d448f539e30751cf27a59bb5c530a7dd6dc14538947e3d1f2e92cfd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1918d4064787e5e97cdb041f7cac05218b730c0f7fbe4370d015c26b05661a0\"" Sep 5 14:31:23.552495 containerd[1548]: time="2024-09-05T14:31:23.552481004Z" level=info msg="StartContainer for \"f1918d4064787e5e97cdb041f7cac05218b730c0f7fbe4370d015c26b05661a0\"" Sep 5 14:31:23.574591 systemd[1]: Started cri-containerd-f1918d4064787e5e97cdb041f7cac05218b730c0f7fbe4370d015c26b05661a0.scope - libcontainer container f1918d4064787e5e97cdb041f7cac05218b730c0f7fbe4370d015c26b05661a0. 
Sep 5 14:31:23.597603 containerd[1548]: time="2024-09-05T14:31:23.597577173Z" level=info msg="StartContainer for \"f1918d4064787e5e97cdb041f7cac05218b730c0f7fbe4370d015c26b05661a0\" returns successfully" Sep 5 14:31:24.451077 kubelet[2893]: I0905 14:31:24.451028 2893 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bb94bdcff-vvwg8" podStartSLOduration=2.787431673 podCreationTimestamp="2024-09-05 14:31:19 +0000 UTC" firstStartedPulling="2024-09-05 14:31:20.883459418 +0000 UTC m=+68.745618056" lastFinishedPulling="2024-09-05 14:31:23.54699512 +0000 UTC m=+71.409153762" observedRunningTime="2024-09-05 14:31:24.450496785 +0000 UTC m=+72.312655469" watchObservedRunningTime="2024-09-05 14:31:24.450967379 +0000 UTC m=+72.313126036" Sep 5 14:32:33.210132 update_engine[1535]: I0905 14:32:33.210016 1535 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 5 14:32:33.210132 update_engine[1535]: I0905 14:32:33.210098 1535 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 5 14:32:33.211442 update_engine[1535]: I0905 14:32:33.210510 1535 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 5 14:32:33.211734 update_engine[1535]: I0905 14:32:33.211645 1535 omaha_request_params.cc:62] Current group set to beta Sep 5 14:32:33.212036 update_engine[1535]: I0905 14:32:33.211954 1535 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 5 14:32:33.212036 update_engine[1535]: I0905 14:32:33.211983 1535 update_attempter.cc:643] Scheduling an action processor start. Sep 5 14:32:33.212036 update_engine[1535]: I0905 14:32:33.212025 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 14:32:33.212470 update_engine[1535]: I0905 14:32:33.212127 1535 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 5 14:32:33.212470 update_engine[1535]: I0905 14:32:33.212378 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 14:32:33.212470 update_engine[1535]: I0905 14:32:33.212400 1535 omaha_request_action.cc:272] Request: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: Sep 5 14:32:33.212470 update_engine[1535]: I0905 14:32:33.212412 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 14:32:33.213586 locksmithd[1569]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 5 14:32:33.214964 update_engine[1535]: I0905 14:32:33.214921 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 14:32:33.215093 update_engine[1535]: I0905 14:32:33.215059 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 5 14:32:33.215850 update_engine[1535]: E0905 14:32:33.215812 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 14:32:33.215850 update_engine[1535]: I0905 14:32:33.215840 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 5 14:32:43.190760 update_engine[1535]: I0905 14:32:43.190537 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 14:32:43.191771 update_engine[1535]: I0905 14:32:43.191054 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 14:32:43.191771 update_engine[1535]: I0905 14:32:43.191586 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 14:32:43.192397 update_engine[1535]: E0905 14:32:43.192340 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 14:32:43.192581 update_engine[1535]: I0905 14:32:43.192461 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 5 14:32:53.182830 update_engine[1535]: I0905 14:32:53.182703 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 14:32:53.183844 update_engine[1535]: I0905 14:32:53.183226 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 14:32:53.183844 update_engine[1535]: I0905 14:32:53.183779 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 14:32:53.184677 update_engine[1535]: E0905 14:32:53.184587 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 14:32:53.184863 update_engine[1535]: I0905 14:32:53.184710 1535 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 5 14:33:03.183983 update_engine[1535]: I0905 14:33:03.183852 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 14:33:03.185145 update_engine[1535]: I0905 14:33:03.184429 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 14:33:03.185145 update_engine[1535]: I0905 14:33:03.184931 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 14:33:03.185823 update_engine[1535]: E0905 14:33:03.185734 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 14:33:03.186023 update_engine[1535]: I0905 14:33:03.185848 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 14:33:03.186023 update_engine[1535]: I0905 14:33:03.185865 1535 omaha_request_action.cc:617] Omaha request response: Sep 5 14:33:03.186211 update_engine[1535]: E0905 14:33:03.186026 1535 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 5 14:33:03.186211 update_engine[1535]: I0905 14:33:03.186067 1535 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 5 14:33:03.186211 update_engine[1535]: I0905 14:33:03.186079 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 14:33:03.186211 update_engine[1535]: I0905 14:33:03.186085 1535 update_attempter.cc:306] Processing Done. Sep 5 14:33:03.186211 update_engine[1535]: E0905 14:33:03.186112 1535 update_attempter.cc:619] Update failed. 
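Annotation: from 14:32:33 onward update_engine posts its Omaha request to a host literally named "disabled", which appears to be the usual way update checks are switched off on this distribution (e.g. SERVER=disabled in the update configuration), so every libcurl transfer fails with "Could not resolve host", is retried on a roughly 10-second timer (14:32:33, :43, :53), and after the last retry at 14:33:03 the whole OmahaRequestAction is marked failed and rescheduled ("Next update check in 43m26s"). A minimal sketch of that fixed-interval retry shape, with made-up names and a shortened interval so it can run as-is:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // postOmahaRequest is a stand-in for the real transfer; here it always fails,
    // just as resolving the host "disabled" does in the log.
    func postOmahaRequest() error {
    	return errors.New("could not resolve host: disabled")
    }

    func main() {
    	const maxRetries = 3
    	interval := 1 * time.Second // update_engine waits ~10s between attempts here

    	err := postOmahaRequest()
    	for attempt := 1; err != nil && attempt <= maxRetries; attempt++ {
    		fmt.Printf("No HTTP response, retry %d\n", attempt)
    		time.Sleep(interval)
    		err = postOmahaRequest()
    	}
    	if err != nil {
    		fmt.Println("Omaha request network transfer failed:", err)
    	}
    }

One initial attempt plus three retries and then a terminal failure reproduces the sequence of retry 1..3 followed by "Omaha request network transfer failed" seen above.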
Sep 5 14:33:03.186211 update_engine[1535]: I0905 14:33:03.186122 1535 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 5 14:33:03.186211 update_engine[1535]: I0905 14:33:03.186131 1535 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 5 14:33:03.186211 update_engine[1535]: I0905 14:33:03.186139 1535 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 5 14:33:03.187025 update_engine[1535]: I0905 14:33:03.186318 1535 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 14:33:03.187025 update_engine[1535]: I0905 14:33:03.186386 1535 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 14:33:03.187025 update_engine[1535]: I0905 14:33:03.186405 1535 omaha_request_action.cc:272] Request: Sep 5 14:33:03.187025 update_engine[1535]: Sep 5 14:33:03.187025 update_engine[1535]: Sep 5 14:33:03.187025 update_engine[1535]: Sep 5 14:33:03.187025 update_engine[1535]: Sep 5 14:33:03.187025 update_engine[1535]: Sep 5 14:33:03.187025 update_engine[1535]: Sep 5 14:33:03.187025 update_engine[1535]: I0905 14:33:03.186421 1535 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 14:33:03.187025 update_engine[1535]: I0905 14:33:03.186918 1535 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 14:33:03.187940 locksmithd[1569]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.187366 1535 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 14:33:03.188601 update_engine[1535]: E0905 14:33:03.188086 1535 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.188189 1535 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.188205 1535 omaha_request_action.cc:617] Omaha request response: Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.188217 1535 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.188225 1535 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.188232 1535 update_attempter.cc:306] Processing Done. Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.188242 1535 update_attempter.cc:310] Error event sent. Sep 5 14:33:03.188601 update_engine[1535]: I0905 14:33:03.188258 1535 update_check_scheduler.cc:74] Next update check in 43m26s Sep 5 14:33:03.189412 locksmithd[1569]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 5 14:44:28.018540 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Sep 5 14:44:28.029705 systemd-tmpfiles[8257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 14:44:28.029956 systemd-tmpfiles[8257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 14:44:28.030401 systemd-tmpfiles[8257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 14:44:28.030558 systemd-tmpfiles[8257]: ACLs are not supported, ignoring. 
Sep 5 14:44:28.030591 systemd-tmpfiles[8257]: ACLs are not supported, ignoring. Sep 5 14:44:28.036698 systemd-tmpfiles[8257]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 14:44:28.036702 systemd-tmpfiles[8257]: Skipping /boot Sep 5 14:44:28.039386 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Sep 5 14:44:28.039499 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Sep 5 14:44:53.908054 systemd[1]: Started sshd@7-147.75.90.7:22-139.178.89.65:44508.service - OpenSSH per-connection server daemon (139.178.89.65:44508). Sep 5 14:44:53.949724 sshd[8323]: Accepted publickey for core from 139.178.89.65 port 44508 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:44:53.953564 sshd[8323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:44:53.965819 systemd-logind[1530]: New session 10 of user core. Sep 5 14:44:53.989738 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 14:44:54.123356 sshd[8323]: pam_unix(sshd:session): session closed for user core Sep 5 14:44:54.126088 systemd[1]: sshd@7-147.75.90.7:22-139.178.89.65:44508.service: Deactivated successfully. Sep 5 14:44:54.127912 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 14:44:54.129329 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit. Sep 5 14:44:54.130405 systemd-logind[1530]: Removed session 10. Sep 5 14:44:59.146983 systemd[1]: Started sshd@8-147.75.90.7:22-139.178.89.65:42592.service - OpenSSH per-connection server daemon (139.178.89.65:42592). Sep 5 14:44:59.176313 sshd[8382]: Accepted publickey for core from 139.178.89.65 port 42592 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:44:59.177171 sshd[8382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:44:59.180265 systemd-logind[1530]: New session 11 of user core. Sep 5 14:44:59.191515 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 14:44:59.278223 sshd[8382]: pam_unix(sshd:session): session closed for user core Sep 5 14:44:59.279908 systemd[1]: sshd@8-147.75.90.7:22-139.178.89.65:42592.service: Deactivated successfully. Sep 5 14:44:59.280977 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 14:44:59.281841 systemd-logind[1530]: Session 11 logged out. Waiting for processes to exit. Sep 5 14:44:59.282512 systemd-logind[1530]: Removed session 11. Sep 5 14:45:04.325788 systemd[1]: Started sshd@9-147.75.90.7:22-139.178.89.65:42608.service - OpenSSH per-connection server daemon (139.178.89.65:42608). Sep 5 14:45:04.353839 sshd[8415]: Accepted publickey for core from 139.178.89.65 port 42608 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:04.354711 sshd[8415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:04.358023 systemd-logind[1530]: New session 12 of user core. Sep 5 14:45:04.373507 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 14:45:04.460443 sshd[8415]: pam_unix(sshd:session): session closed for user core Sep 5 14:45:04.474186 systemd[1]: sshd@9-147.75.90.7:22-139.178.89.65:42608.service: Deactivated successfully. Sep 5 14:45:04.475184 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 14:45:04.476089 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit. 
Sep 5 14:45:04.477000 systemd[1]: Started sshd@10-147.75.90.7:22-139.178.89.65:42612.service - OpenSSH per-connection server daemon (139.178.89.65:42612). Sep 5 14:45:04.477640 systemd-logind[1530]: Removed session 12. Sep 5 14:45:04.510656 sshd[8442]: Accepted publickey for core from 139.178.89.65 port 42612 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:04.511906 sshd[8442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:04.516050 systemd-logind[1530]: New session 13 of user core. Sep 5 14:45:04.530574 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 14:45:05.021942 sshd[8442]: pam_unix(sshd:session): session closed for user core Sep 5 14:45:05.032128 systemd[1]: sshd@10-147.75.90.7:22-139.178.89.65:42612.service: Deactivated successfully. Sep 5 14:45:05.033122 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 14:45:05.033921 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit. Sep 5 14:45:05.034592 systemd[1]: Started sshd@11-147.75.90.7:22-139.178.89.65:42616.service - OpenSSH per-connection server daemon (139.178.89.65:42616). Sep 5 14:45:05.035039 systemd-logind[1530]: Removed session 13. Sep 5 14:45:05.064460 sshd[8467]: Accepted publickey for core from 139.178.89.65 port 42616 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:05.065430 sshd[8467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:05.068665 systemd-logind[1530]: New session 14 of user core. Sep 5 14:45:05.089480 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 14:45:05.220649 sshd[8467]: pam_unix(sshd:session): session closed for user core Sep 5 14:45:05.222304 systemd[1]: sshd@11-147.75.90.7:22-139.178.89.65:42616.service: Deactivated successfully. Sep 5 14:45:05.223342 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 14:45:05.224156 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit. Sep 5 14:45:05.224944 systemd-logind[1530]: Removed session 14. Sep 5 14:45:10.243685 systemd[1]: Started sshd@12-147.75.90.7:22-139.178.89.65:38404.service - OpenSSH per-connection server daemon (139.178.89.65:38404). Sep 5 14:45:10.272753 sshd[8516]: Accepted publickey for core from 139.178.89.65 port 38404 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:10.273606 sshd[8516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:10.276920 systemd-logind[1530]: New session 15 of user core. Sep 5 14:45:10.293438 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 14:45:10.378757 sshd[8516]: pam_unix(sshd:session): session closed for user core Sep 5 14:45:10.380630 systemd[1]: sshd@12-147.75.90.7:22-139.178.89.65:38404.service: Deactivated successfully. Sep 5 14:45:10.381785 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 14:45:10.382703 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit. Sep 5 14:45:10.383438 systemd-logind[1530]: Removed session 15. Sep 5 14:45:15.399235 systemd[1]: Started sshd@13-147.75.90.7:22-139.178.89.65:38414.service - OpenSSH per-connection server daemon (139.178.89.65:38414). 
Sep 5 14:45:15.429369 sshd[8549]: Accepted publickey for core from 139.178.89.65 port 38414 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:15.432915 sshd[8549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:15.444469 systemd-logind[1530]: New session 16 of user core. Sep 5 14:45:15.465791 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 14:45:15.561018 sshd[8549]: pam_unix(sshd:session): session closed for user core Sep 5 14:45:15.562853 systemd[1]: sshd@13-147.75.90.7:22-139.178.89.65:38414.service: Deactivated successfully. Sep 5 14:45:15.563970 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 14:45:15.564848 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit. Sep 5 14:45:15.565455 systemd-logind[1530]: Removed session 16. Sep 5 14:45:20.584688 systemd[1]: Started sshd@14-147.75.90.7:22-139.178.89.65:56616.service - OpenSSH per-connection server daemon (139.178.89.65:56616). Sep 5 14:45:20.616445 sshd[8601]: Accepted publickey for core from 139.178.89.65 port 56616 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:20.617215 sshd[8601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:20.620062 systemd-logind[1530]: New session 17 of user core. Sep 5 14:45:20.638523 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 14:45:20.721647 sshd[8601]: pam_unix(sshd:session): session closed for user core Sep 5 14:45:20.723369 systemd[1]: sshd@14-147.75.90.7:22-139.178.89.65:56616.service: Deactivated successfully. Sep 5 14:45:20.724398 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 14:45:20.725227 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit. Sep 5 14:45:20.726013 systemd-logind[1530]: Removed session 17. Sep 5 14:45:25.747225 systemd[1]: Started sshd@15-147.75.90.7:22-139.178.89.65:56618.service - OpenSSH per-connection server daemon (139.178.89.65:56618). Sep 5 14:45:25.776548 sshd[8634]: Accepted publickey for core from 139.178.89.65 port 56618 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:25.777440 sshd[8634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:25.780754 systemd-logind[1530]: New session 18 of user core. Sep 5 14:45:25.781568 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 14:45:25.869867 sshd[8634]: pam_unix(sshd:session): session closed for user core Sep 5 14:45:25.888181 systemd[1]: sshd@15-147.75.90.7:22-139.178.89.65:56618.service: Deactivated successfully. Sep 5 14:45:25.889247 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 14:45:25.890157 systemd-logind[1530]: Session 18 logged out. Waiting for processes to exit. Sep 5 14:45:25.891150 systemd[1]: Started sshd@16-147.75.90.7:22-139.178.89.65:56622.service - OpenSSH per-connection server daemon (139.178.89.65:56622). Sep 5 14:45:25.891857 systemd-logind[1530]: Removed session 18. Sep 5 14:45:25.926869 sshd[8659]: Accepted publickey for core from 139.178.89.65 port 56622 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc Sep 5 14:45:25.928253 sshd[8659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 14:45:25.933472 systemd-logind[1530]: New session 19 of user core. Sep 5 14:45:25.948773 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 5 14:45:26.251851 sshd[8659]: pam_unix(sshd:session): session closed for user core
Sep 5 14:45:26.265972 systemd[1]: sshd@16-147.75.90.7:22-139.178.89.65:56622.service: Deactivated successfully.
Sep 5 14:45:26.267400 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 14:45:26.268591 systemd-logind[1530]: Session 19 logged out. Waiting for processes to exit.
Sep 5 14:45:26.269721 systemd[1]: Started sshd@17-147.75.90.7:22-139.178.89.65:56638.service - OpenSSH per-connection server daemon (139.178.89.65:56638).
Sep 5 14:45:26.270594 systemd-logind[1530]: Removed session 19.
Sep 5 14:45:26.307751 sshd[8683]: Accepted publickey for core from 139.178.89.65 port 56638 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc
Sep 5 14:45:26.308430 sshd[8683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 14:45:26.311187 systemd-logind[1530]: New session 20 of user core.
Sep 5 14:45:26.328587 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 14:45:27.211898 sshd[8683]: pam_unix(sshd:session): session closed for user core
Sep 5 14:45:27.235970 systemd[1]: sshd@17-147.75.90.7:22-139.178.89.65:56638.service: Deactivated successfully.
Sep 5 14:45:27.240579 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 14:45:27.244127 systemd-logind[1530]: Session 20 logged out. Waiting for processes to exit.
Sep 5 14:45:27.271138 systemd[1]: Started sshd@18-147.75.90.7:22-139.178.89.65:56654.service - OpenSSH per-connection server daemon (139.178.89.65:56654).
Sep 5 14:45:27.273949 systemd-logind[1530]: Removed session 20.
Sep 5 14:45:27.332023 sshd[8715]: Accepted publickey for core from 139.178.89.65 port 56654 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc
Sep 5 14:45:27.335076 sshd[8715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 14:45:27.343372 systemd-logind[1530]: New session 21 of user core.
Sep 5 14:45:27.356769 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 14:45:27.641513 sshd[8715]: pam_unix(sshd:session): session closed for user core
Sep 5 14:45:27.667123 systemd[1]: sshd@18-147.75.90.7:22-139.178.89.65:56654.service: Deactivated successfully.
Sep 5 14:45:27.671241 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 14:45:27.674863 systemd-logind[1530]: Session 21 logged out. Waiting for processes to exit.
Sep 5 14:45:27.689022 systemd[1]: Started sshd@19-147.75.90.7:22-139.178.89.65:56992.service - OpenSSH per-connection server daemon (139.178.89.65:56992).
Sep 5 14:45:27.691658 systemd-logind[1530]: Removed session 21.
Sep 5 14:45:27.741226 sshd[8744]: Accepted publickey for core from 139.178.89.65 port 56992 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc
Sep 5 14:45:27.742675 sshd[8744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 14:45:27.747632 systemd-logind[1530]: New session 22 of user core.
Sep 5 14:45:27.758539 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 14:45:27.888018 sshd[8744]: pam_unix(sshd:session): session closed for user core
Sep 5 14:45:27.889637 systemd[1]: sshd@19-147.75.90.7:22-139.178.89.65:56992.service: Deactivated successfully.
Sep 5 14:45:27.890675 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 14:45:27.891492 systemd-logind[1530]: Session 22 logged out. Waiting for processes to exit.
Sep 5 14:45:27.892142 systemd-logind[1530]: Removed session 22.
Sep 5 14:45:32.905153 systemd[1]: Started sshd@20-147.75.90.7:22-139.178.89.65:57002.service - OpenSSH per-connection server daemon (139.178.89.65:57002).
Sep 5 14:45:32.934445 sshd[8798]: Accepted publickey for core from 139.178.89.65 port 57002 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc
Sep 5 14:45:32.935225 sshd[8798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 14:45:32.938218 systemd-logind[1530]: New session 23 of user core.
Sep 5 14:45:32.947736 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 14:45:33.041226 sshd[8798]: pam_unix(sshd:session): session closed for user core
Sep 5 14:45:33.042877 systemd[1]: sshd@20-147.75.90.7:22-139.178.89.65:57002.service: Deactivated successfully.
Sep 5 14:45:33.043937 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 14:45:33.044643 systemd-logind[1530]: Session 23 logged out. Waiting for processes to exit.
Sep 5 14:45:33.045159 systemd-logind[1530]: Removed session 23.
Sep 5 14:45:38.058116 systemd[1]: Started sshd@21-147.75.90.7:22-139.178.89.65:42842.service - OpenSSH per-connection server daemon (139.178.89.65:42842).
Sep 5 14:45:38.087493 sshd[8824]: Accepted publickey for core from 139.178.89.65 port 42842 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc
Sep 5 14:45:38.088560 sshd[8824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 14:45:38.091957 systemd-logind[1530]: New session 24 of user core.
Sep 5 14:45:38.107437 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 14:45:38.192906 sshd[8824]: pam_unix(sshd:session): session closed for user core
Sep 5 14:45:38.194830 systemd[1]: sshd@21-147.75.90.7:22-139.178.89.65:42842.service: Deactivated successfully.
Sep 5 14:45:38.195670 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 14:45:38.196119 systemd-logind[1530]: Session 24 logged out. Waiting for processes to exit.
Sep 5 14:45:38.196810 systemd-logind[1530]: Removed session 24.
Sep 5 14:45:43.228549 systemd[1]: Started sshd@22-147.75.90.7:22-139.178.89.65:42854.service - OpenSSH per-connection server daemon (139.178.89.65:42854).
Sep 5 14:45:43.255839 sshd[8857]: Accepted publickey for core from 139.178.89.65 port 42854 ssh2: RSA SHA256:9al2mF+0RnLNW/16JOLAh/6mtQaj7lMEk2LW/k4wAIc
Sep 5 14:45:43.256814 sshd[8857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 14:45:43.260106 systemd-logind[1530]: New session 25 of user core.
Sep 5 14:45:43.274489 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 14:45:43.360422 sshd[8857]: pam_unix(sshd:session): session closed for user core
Sep 5 14:45:43.362124 systemd[1]: sshd@22-147.75.90.7:22-139.178.89.65:42854.service: Deactivated successfully.
Sep 5 14:45:43.363178 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 14:45:43.363935 systemd-logind[1530]: Session 25 logged out. Waiting for processes to exit.
Sep 5 14:45:43.364494 systemd-logind[1530]: Removed session 25.