Dec 13 01:56:46.019961 kernel: microcode: updated early: 0xf4 -> 0xfc, date = 2023-07-27
Dec 13 01:56:46.019974 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:56:46.019980 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:56:46.019986 kernel: BIOS-provided physical RAM map:
Dec 13 01:56:46.019990 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 01:56:46.019994 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 01:56:46.019998 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 01:56:46.020002 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 01:56:46.020006 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 01:56:46.020010 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Dec 13 01:56:46.020014 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Dec 13 01:56:46.020019 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Dec 13 01:56:46.020023 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Dec 13 01:56:46.020027 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 01:56:46.020032 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 01:56:46.020037 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 01:56:46.020043 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 01:56:46.020047 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 01:56:46.020052 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 01:56:46.020056 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 01:56:46.020061 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 01:56:46.020065 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 01:56:46.020070 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 01:56:46.020074 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 01:56:46.020079 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 01:56:46.020083 kernel: NX (Execute Disable) protection: active
Dec 13 01:56:46.020088 kernel: APIC: Static calls initialized
Dec 13 01:56:46.020093 kernel: SMBIOS 3.2.1 present.
Dec 13 01:56:46.020098 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 01:56:46.020103 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 01:56:46.020108 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 01:56:46.020112 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:56:46.020118 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:56:46.020122 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 01:56:46.020127 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Dec 13 01:56:46.020132 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 01:56:46.020136 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 01:56:46.020142 kernel: Using GB pages for direct mapping
Dec 13 01:56:46.020147 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:56:46.020152 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 01:56:46.020159 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM   01072009 AMI  00010013)
Dec 13 01:56:46.020164 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06                 01072009 AMI  00010013)
Dec 13 01:56:46.020169 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 01:56:46.020174 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 01:56:46.020180 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04                 01072009 AMI  00010013)
Dec 13 01:56:46.020185 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01                 01072009 AMI  00010013)
Dec 13 01:56:46.020190 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI  00010013)
Dec 13 01:56:46.020194 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 01:56:46.020199 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 01:56:46.020204 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt  00003000 INTL 20160527)
Dec 13 01:56:46.020209 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt   00003000 INTL 20160527)
Dec 13 01:56:46.020215 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt  00001000 INTL 20160527)
Dec 13 01:56:46.020220 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020225 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 01:56:46.020230 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL  xh_mossb 00000000 INTL 20160527)
Dec 13 01:56:46.020235 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020240 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020245 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 01:56:46.020249 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 01:56:46.020254 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020260 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020265 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 01:56:46.020270 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL  EDK2     00000002      01000013)
Dec 13 01:56:46.020275 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel  ADebTabl 00001000 INTL 20160527)
Dec 13 01:56:46.020280 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI  00000000)
Dec 13 01:56:46.020285 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL  SpsNm    00000002 INTL 20160527)
Dec 13 01:56:46.020290 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM          01072009 AMI  00010013)
Dec 13 01:56:46.020295 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI    AMI.EINJ 00000000 AMI. 00000000)
Dec 13 01:56:46.020301 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER  AMI.ERST 00000000 AMI. 00000000)
Dec 13 01:56:46.020306 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI    AMI.BERT 00000000 AMI. 00000000)
Dec 13 01:56:46.020311 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI    AMI.HEST 00000000 AMI. 00000000)
Dec 13 01:56:46.020315 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN   00000000 INTL 20181221)
Dec 13 01:56:46.020320 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 01:56:46.020325 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 01:56:46.020330 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 01:56:46.020335 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 01:56:46.020340 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 01:56:46.020346 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 01:56:46.020351 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 01:56:46.020356 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 01:56:46.020361 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 01:56:46.020366 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 01:56:46.020370 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 01:56:46.020375 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 01:56:46.020380 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 01:56:46.020385 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 01:56:46.020391 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 01:56:46.020396 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 01:56:46.020401 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 01:56:46.020406 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 01:56:46.020411 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 01:56:46.020416 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 01:56:46.020420 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 01:56:46.020425 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 01:56:46.020430 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 01:56:46.020435 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 01:56:46.020441 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 01:56:46.020446 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 01:56:46.020451 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 01:56:46.020456 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 01:56:46.020460 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 01:56:46.020465 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 01:56:46.020470 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 01:56:46.020478 kernel: No NUMA configuration found
Dec 13 01:56:46.020483 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 01:56:46.020489 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 01:56:46.020494 kernel: Zone ranges:
Dec 13 01:56:46.020499 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:56:46.020504 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:56:46.020509 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Dec 13 01:56:46.020514 kernel: Movable zone start for each node
Dec 13 01:56:46.020519 kernel: Early memory node ranges
Dec 13 01:56:46.020524 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 01:56:46.020529 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 01:56:46.020535 kernel:   node   0: [mem 0x0000000040400000-0x0000000081b25fff]
Dec 13 01:56:46.020540 kernel:   node   0: [mem 0x0000000081b28000-0x000000008afccfff]
Dec 13 01:56:46.020545 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 01:56:46.020550 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 01:56:46.020558 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 01:56:46.020564 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 01:56:46.020569 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:56:46.020575 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 01:56:46.020581 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 01:56:46.020586 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 01:56:46.020592 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 01:56:46.020597 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 01:56:46.020602 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 01:56:46.020608 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 01:56:46.020613 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 01:56:46.020618 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 01:56:46.020624 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 01:56:46.020630 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 01:56:46.020635 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 01:56:46.020640 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 01:56:46.020646 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 01:56:46.020651 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 01:56:46.020656 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 01:56:46.020661 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 01:56:46.020667 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 01:56:46.020672 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 01:56:46.020677 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 01:56:46.020683 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 01:56:46.020688 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 01:56:46.020694 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 01:56:46.020699 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 01:56:46.020704 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 01:56:46.020710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:56:46.020715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:56:46.020720 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:56:46.020726 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:56:46.020732 kernel: TSC deadline timer available
Dec 13 01:56:46.020737 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 01:56:46.020743 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 01:56:46.020748 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 01:56:46.020754 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:56:46.020759 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 01:56:46.020764 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 01:56:46.020770 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 01:56:46.020775 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 
Dec 13 01:56:46.020781 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:56:46.020787 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:56:46.020792 kernel: random: crng init done
Dec 13 01:56:46.020798 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 01:56:46.020803 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 01:56:46.020808 kernel: Fallback order for Node 0: 0 
Dec 13 01:56:46.020813 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 8232415
Dec 13 01:56:46.020819 kernel: Policy zone: Normal
Dec 13 01:56:46.020825 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:56:46.020830 kernel: software IO TLB: area num 16.
Dec 13 01:56:46.020836 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 732408K reserved, 0K cma-reserved)
Dec 13 01:56:46.020841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 01:56:46.020847 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:56:46.020852 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:56:46.020857 kernel: Dynamic Preempt: voluntary
Dec 13 01:56:46.020863 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:56:46.020868 kernel: rcu:         RCU event tracing is enabled.
Dec 13 01:56:46.020875 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 01:56:46.020880 kernel:         Trampoline variant of Tasks RCU enabled.
Dec 13 01:56:46.020885 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 01:56:46.020891 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 01:56:46.020896 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:56:46.020901 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 01:56:46.020907 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 01:56:46.020912 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:56:46.020917 kernel: Console: colour dummy device 80x25
Dec 13 01:56:46.020923 kernel: printk: console [tty0] enabled
Dec 13 01:56:46.020929 kernel: printk: console [ttyS1] enabled
Dec 13 01:56:46.020934 kernel: ACPI: Core revision 20230628
Dec 13 01:56:46.020939 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 01:56:46.020945 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:56:46.020950 kernel: DMAR: Host address width 39
Dec 13 01:56:46.020955 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 01:56:46.020961 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 01:56:46.020966 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 01:56:46.020972 kernel: DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 0
Dec 13 01:56:46.020977 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 01:56:46.020983 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 01:56:46.020988 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 01:56:46.020993 kernel: x2apic enabled
Dec 13 01:56:46.020999 kernel: APIC: Switched APIC routing to: cluster x2apic
Dec 13 01:56:46.021004 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 01:56:46.021010 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 01:56:46.021015 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 01:56:46.021021 kernel: process: using mwait in idle threads
Dec 13 01:56:46.021026 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:56:46.021032 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:56:46.021037 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:56:46.021042 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:56:46.021047 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:56:46.021053 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 01:56:46.021058 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:56:46.021063 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 01:56:46.021068 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 01:56:46.021074 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:56:46.021080 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:56:46.021085 kernel: TAA: Mitigation: TSX disabled
Dec 13 01:56:46.021090 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 01:56:46.021096 kernel: SRBDS: Mitigation: Microcode
Dec 13 01:56:46.021101 kernel: GDS: Mitigation: Microcode
Dec 13 01:56:46.021106 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:56:46.021111 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:56:46.021117 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:56:46.021122 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 01:56:46.021127 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 01:56:46.021132 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 01:56:46.021139 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Dec 13 01:56:46.021144 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Dec 13 01:56:46.021149 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 01:56:46.021154 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:56:46.021160 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:56:46.021165 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:56:46.021170 kernel: landlock: Up and running.
Dec 13 01:56:46.021175 kernel: SELinux:  Initializing.
Dec 13 01:56:46.021181 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.021186 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.021191 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 01:56:46.021197 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 01:56:46.021203 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 01:56:46.021208 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 01:56:46.021214 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 01:56:46.021219 kernel: ... version:                4
Dec 13 01:56:46.021224 kernel: ... bit width:              48
Dec 13 01:56:46.021230 kernel: ... generic registers:      4
Dec 13 01:56:46.021235 kernel: ... value mask:             0000ffffffffffff
Dec 13 01:56:46.021240 kernel: ... max period:             00007fffffffffff
Dec 13 01:56:46.021246 kernel: ... fixed-purpose events:   3
Dec 13 01:56:46.021252 kernel: ... event mask:             000000070000000f
Dec 13 01:56:46.021257 kernel: signal: max sigframe size: 2032
Dec 13 01:56:46.021262 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 01:56:46.021268 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:56:46.021273 kernel: rcu:         Max phase no-delay instances is 400.
Dec 13 01:56:46.021278 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 01:56:46.021284 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:56:46.021289 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:56:46.021295 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8  #9 #10 #11 #12 #13 #14 #15
Dec 13 01:56:46.021301 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:56:46.021306 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 01:56:46.021311 kernel: smpboot: Max logical packages: 1
Dec 13 01:56:46.021317 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 01:56:46.021322 kernel: devtmpfs: initialized
Dec 13 01:56:46.021327 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:56:46.021333 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Dec 13 01:56:46.021338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 01:56:46.021344 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:56:46.021350 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 01:56:46.021355 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:56:46.021361 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:56:46.021366 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:56:46.021371 kernel: audit: type=2000 audit(1734055000.039:1): state=initialized audit_enabled=0 res=1
Dec 13 01:56:46.021376 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:56:46.021382 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:56:46.021387 kernel: cpuidle: using governor menu
Dec 13 01:56:46.021393 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:56:46.021398 kernel: dca service started, version 1.12.1
Dec 13 01:56:46.021404 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 01:56:46.021409 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:56:46.021415 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 01:56:46.021420 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:56:46.021425 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:56:46.021431 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:56:46.021436 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:56:46.021442 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:56:46.021447 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:56:46.021453 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:56:46.021458 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:56:46.021463 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:56:46.021469 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 01:56:46.021476 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021481 kernel: ACPI: SSDT 0xFFFF9F9581EC0400 000400 (v02 PmRef  Cpu0Cst  00003001 INTL 20160527)
Dec 13 01:56:46.021505 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021526 kernel: ACPI: SSDT 0xFFFF9F9581EBF800 000683 (v02 PmRef  Cpu0Ist  00003000 INTL 20160527)
Dec 13 01:56:46.021531 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021536 kernel: ACPI: SSDT 0xFFFF9F9581569100 0000F4 (v02 PmRef  Cpu0Psd  00003000 INTL 20160527)
Dec 13 01:56:46.021541 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021547 kernel: ACPI: SSDT 0xFFFF9F9581EB8800 0005FC (v02 PmRef  ApIst    00003000 INTL 20160527)
Dec 13 01:56:46.021552 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021557 kernel: ACPI: SSDT 0xFFFF9F9581ECA000 000AB0 (v02 PmRef  ApPsd    00003000 INTL 20160527)
Dec 13 01:56:46.021562 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021568 kernel: ACPI: SSDT 0xFFFF9F9581EC7400 00030A (v02 PmRef  ApCst    00003000 INTL 20160527)
Dec 13 01:56:46.021574 kernel: ACPI: _OSC evaluated successfully for all CPUs
Dec 13 01:56:46.021579 kernel: ACPI: Interpreter enabled
Dec 13 01:56:46.021584 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:56:46.021589 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:56:46.021595 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 01:56:46.021600 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 01:56:46.021605 kernel: HEST: Table parsing has been initialized.
Dec 13 01:56:46.021611 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 01:56:46.021616 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:56:46.021622 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:56:46.021627 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 01:56:46.021633 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Dec 13 01:56:46.021638 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Dec 13 01:56:46.021644 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Dec 13 01:56:46.021649 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Dec 13 01:56:46.021654 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Dec 13 01:56:46.021660 kernel: ACPI: \_TZ_.FN00: New power resource
Dec 13 01:56:46.021665 kernel: ACPI: \_TZ_.FN01: New power resource
Dec 13 01:56:46.021671 kernel: ACPI: \_TZ_.FN02: New power resource
Dec 13 01:56:46.021677 kernel: ACPI: \_TZ_.FN03: New power resource
Dec 13 01:56:46.021682 kernel: ACPI: \_TZ_.FN04: New power resource
Dec 13 01:56:46.021687 kernel: ACPI: \PIN_: New power resource
Dec 13 01:56:46.021693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 01:56:46.021763 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:56:46.021816 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 01:56:46.021862 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 01:56:46.021871 kernel: PCI host bridge to bus 0000:00
Dec 13 01:56:46.021924 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 01:56:46.021966 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 01:56:46.022007 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:56:46.022048 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 01:56:46.022088 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 01:56:46.022128 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 01:56:46.022186 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 01:56:46.022241 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 01:56:46.022289 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.022340 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 01:56:46.022386 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 01:56:46.022435 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 01:56:46.022488 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 01:56:46.022539 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 01:56:46.022586 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 01:56:46.022632 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 01:56:46.022682 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 01:56:46.022729 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 01:56:46.022777 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 01:56:46.022827 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 01:56:46.022873 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 01:56:46.022926 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 01:56:46.022971 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 01:56:46.023021 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 01:56:46.023068 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 01:56:46.023116 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 01:56:46.023172 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 01:56:46.023221 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 01:56:46.023266 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 01:56:46.023316 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 01:56:46.023362 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 01:56:46.023411 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 01:56:46.023461 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 01:56:46.023533 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 01:56:46.023596 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 01:56:46.023641 kernel: pci 0000:00:17.0: reg 0x18: [io  0x6050-0x6057]
Dec 13 01:56:46.023687 kernel: pci 0000:00:17.0: reg 0x1c: [io  0x6040-0x6043]
Dec 13 01:56:46.023732 kernel: pci 0000:00:17.0: reg 0x20: [io  0x6020-0x603f]
Dec 13 01:56:46.023782 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 01:56:46.023827 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 01:56:46.023878 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 01:56:46.023925 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.023981 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 01:56:46.024028 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024078 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 01:56:46.024125 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024176 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 01:56:46.024226 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024276 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 01:56:46.024323 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024372 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 01:56:46.024419 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 01:56:46.024468 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 01:56:46.024566 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 01:56:46.024615 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 01:56:46.024660 kernel: pci 0000:00:1f.4: reg 0x20: [io  0xefa0-0xefbf]
Dec 13 01:56:46.024713 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 01:56:46.024759 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 01:56:46.024812 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 01:56:46.024860 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.024910 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 01:56:46.024957 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 01:56:46.025005 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 01:56:46.025053 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 01:56:46.025107 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 01:56:46.025156 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 01:56:46.025202 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Dec 13 01:56:46.025252 kernel: pci 0000:01:00.1: PME# supported from D3cold
Dec 13 01:56:46.025300 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 01:56:46.025347 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 01:56:46.025394 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:56:46.025440 kernel: pci 0000:00:01.0:   bridge window [mem 0x95100000-0x952fffff]
Dec 13 01:56:46.025495 kernel: pci 0000:00:01.0:   bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.025578 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 01:56:46.025630 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Dec 13 01:56:46.025680 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Dec 13 01:56:46.025728 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Dec 13 01:56:46.025775 kernel: pci 0000:03:00.0: reg 0x18: [io  0x5000-0x501f]
Dec 13 01:56:46.025823 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Dec 13 01:56:46.025871 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.025917 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 01:56:46.025964 kernel: pci 0000:00:1b.4:   bridge window [io  0x5000-0x5fff]
Dec 13 01:56:46.026012 kernel: pci 0000:00:1b.4:   bridge window [mem 0x95400000-0x954fffff]
Dec 13 01:56:46.026065 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 01:56:46.026112 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 01:56:46.026159 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Dec 13 01:56:46.026207 kernel: pci 0000:04:00.0: reg 0x18: [io  0x4000-0x401f]
Dec 13 01:56:46.026254 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Dec 13 01:56:46.026302 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.026352 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 01:56:46.026399 kernel: pci 0000:00:1b.5:   bridge window [io  0x4000-0x4fff]
Dec 13 01:56:46.026444 kernel: pci 0000:00:1b.5:   bridge window [mem 0x95300000-0x953fffff]
Dec 13 01:56:46.026548 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 01:56:46.026614 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 01:56:46.026663 kernel: pci 0000:06:00.0: enabling Extended Tags
Dec 13 01:56:46.026711 kernel: pci 0000:06:00.0: supports D1 D2
Dec 13 01:56:46.026758 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:56:46.026808 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 01:56:46.026855 kernel: pci 0000:00:1c.3:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.026901 kernel: pci 0000:00:1c.3:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.026955 kernel: pci_bus 0000:07: extended config space not accessible
Dec 13 01:56:46.027011 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 01:56:46.027062 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Dec 13 01:56:46.027114 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Dec 13 01:56:46.027165 kernel: pci 0000:07:00.0: reg 0x18: [io  0x3000-0x307f]
Dec 13 01:56:46.027214 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:56:46.027262 kernel: pci 0000:07:00.0: supports D1 D2
Dec 13 01:56:46.027312 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:56:46.027360 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 01:56:46.027407 kernel: pci 0000:06:00.0:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.027454 kernel: pci 0000:06:00.0:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.027463 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 13 01:56:46.027469 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 13 01:56:46.027478 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 13 01:56:46.027501 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 13 01:56:46.027507 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 13 01:56:46.027512 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 13 01:56:46.027533 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 13 01:56:46.027538 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 13 01:56:46.027544 kernel: iommu: Default domain type: Translated
Dec 13 01:56:46.027551 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:56:46.027556 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:56:46.027562 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:56:46.027568 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Dec 13 01:56:46.027573 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff]
Dec 13 01:56:46.027579 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Dec 13 01:56:46.027584 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Dec 13 01:56:46.027590 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Dec 13 01:56:46.027595 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Dec 13 01:56:46.027646 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Dec 13 01:56:46.027697 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Dec 13 01:56:46.027745 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:56:46.027754 kernel: vgaarb: loaded
Dec 13 01:56:46.027760 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 01:56:46.027766 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:56:46.027771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:56:46.027777 kernel: pnp: PnP ACPI init
Dec 13 01:56:46.027826 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Dec 13 01:56:46.027875 kernel: pnp 00:02: [dma 0 disabled]
Dec 13 01:56:46.027922 kernel: pnp 00:03: [dma 0 disabled]
Dec 13 01:56:46.027971 kernel: system 00:04: [io  0x0680-0x069f] has been reserved
Dec 13 01:56:46.028013 kernel: system 00:04: [io  0x164e-0x164f] has been reserved
Dec 13 01:56:46.028058 kernel: system 00:05: [io  0x1854-0x1857] has been reserved
Dec 13 01:56:46.028103 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Dec 13 01:56:46.028149 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Dec 13 01:56:46.028190 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Dec 13 01:56:46.028233 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Dec 13 01:56:46.028278 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec 13 01:56:46.028321 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec 13 01:56:46.028363 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec 13 01:56:46.028407 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec 13 01:56:46.028455 kernel: system 00:07: [io  0x1800-0x18fe] could not be reserved
Dec 13 01:56:46.028520 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Dec 13 01:56:46.028576 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Dec 13 01:56:46.028618 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Dec 13 01:56:46.028661 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Dec 13 01:56:46.028703 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Dec 13 01:56:46.028748 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Dec 13 01:56:46.028793 kernel: system 00:08: [io  0x2000-0x20fe] has been reserved
Dec 13 01:56:46.028801 kernel: pnp: PnP ACPI: found 10 devices
Dec 13 01:56:46.028807 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:56:46.028813 kernel: NET: Registered PF_INET protocol family
Dec 13 01:56:46.028819 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:56:46.028825 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 01:56:46.028831 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:56:46.028836 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:56:46.028844 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 01:56:46.028849 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Dec 13 01:56:46.028856 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.028862 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.028867 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:56:46.028873 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:56:46.028920 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Dec 13 01:56:46.028967 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Dec 13 01:56:46.029016 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Dec 13 01:56:46.029064 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029112 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029160 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029208 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029255 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:56:46.029302 kernel: pci 0000:00:01.0:   bridge window [mem 0x95100000-0x952fffff]
Dec 13 01:56:46.029348 kernel: pci 0000:00:01.0:   bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.029397 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 01:56:46.029443 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 01:56:46.029510 kernel: pci 0000:00:1b.4:   bridge window [io  0x5000-0x5fff]
Dec 13 01:56:46.029571 kernel: pci 0000:00:1b.4:   bridge window [mem 0x95400000-0x954fffff]
Dec 13 01:56:46.029616 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 01:56:46.029665 kernel: pci 0000:00:1b.5:   bridge window [io  0x4000-0x4fff]
Dec 13 01:56:46.029711 kernel: pci 0000:00:1b.5:   bridge window [mem 0x95300000-0x953fffff]
Dec 13 01:56:46.029757 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 01:56:46.029804 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 01:56:46.029852 kernel: pci 0000:06:00.0:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.029900 kernel: pci 0000:06:00.0:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.029945 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 01:56:46.029992 kernel: pci 0000:00:1c.3:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.030037 kernel: pci 0000:00:1c.3:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.030083 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 01:56:46.030125 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 01:56:46.030167 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 01:56:46.030208 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:56:46.030248 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Dec 13 01:56:46.030288 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Dec 13 01:56:46.030335 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Dec 13 01:56:46.030380 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.030429 kernel: pci_bus 0000:03: resource 0 [io  0x5000-0x5fff]
Dec 13 01:56:46.030472 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Dec 13 01:56:46.030554 kernel: pci_bus 0000:04: resource 0 [io  0x4000-0x4fff]
Dec 13 01:56:46.030597 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Dec 13 01:56:46.030643 kernel: pci_bus 0000:06: resource 0 [io  0x3000-0x3fff]
Dec 13 01:56:46.030688 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.030734 kernel: pci_bus 0000:07: resource 0 [io  0x3000-0x3fff]
Dec 13 01:56:46.030777 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.030785 kernel: PCI: CLS 64 bytes, default 64
Dec 13 01:56:46.030791 kernel: DMAR: No ATSR found
Dec 13 01:56:46.030797 kernel: DMAR: No SATC found
Dec 13 01:56:46.030803 kernel: DMAR: dmar0: Using Queued invalidation
Dec 13 01:56:46.030849 kernel: pci 0000:00:00.0: Adding to iommu group 0
Dec 13 01:56:46.030899 kernel: pci 0000:00:01.0: Adding to iommu group 1
Dec 13 01:56:46.030945 kernel: pci 0000:00:08.0: Adding to iommu group 2
Dec 13 01:56:46.030992 kernel: pci 0000:00:12.0: Adding to iommu group 3
Dec 13 01:56:46.031037 kernel: pci 0000:00:14.0: Adding to iommu group 4
Dec 13 01:56:46.031083 kernel: pci 0000:00:14.2: Adding to iommu group 4
Dec 13 01:56:46.031128 kernel: pci 0000:00:15.0: Adding to iommu group 5
Dec 13 01:56:46.031174 kernel: pci 0000:00:15.1: Adding to iommu group 5
Dec 13 01:56:46.031219 kernel: pci 0000:00:16.0: Adding to iommu group 6
Dec 13 01:56:46.031265 kernel: pci 0000:00:16.1: Adding to iommu group 6
Dec 13 01:56:46.031313 kernel: pci 0000:00:16.4: Adding to iommu group 6
Dec 13 01:56:46.031359 kernel: pci 0000:00:17.0: Adding to iommu group 7
Dec 13 01:56:46.031404 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Dec 13 01:56:46.031450 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Dec 13 01:56:46.031499 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Dec 13 01:56:46.031585 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Dec 13 01:56:46.031631 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Dec 13 01:56:46.031676 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Dec 13 01:56:46.031725 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Dec 13 01:56:46.031771 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Dec 13 01:56:46.031818 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Dec 13 01:56:46.031864 kernel: pci 0000:01:00.0: Adding to iommu group 1
Dec 13 01:56:46.031913 kernel: pci 0000:01:00.1: Adding to iommu group 1
Dec 13 01:56:46.031961 kernel: pci 0000:03:00.0: Adding to iommu group 15
Dec 13 01:56:46.032009 kernel: pci 0000:04:00.0: Adding to iommu group 16
Dec 13 01:56:46.032056 kernel: pci 0000:06:00.0: Adding to iommu group 17
Dec 13 01:56:46.032108 kernel: pci 0000:07:00.0: Adding to iommu group 17
Dec 13 01:56:46.032116 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Dec 13 01:56:46.032122 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:56:46.032128 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Dec 13 01:56:46.032134 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Dec 13 01:56:46.032140 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Dec 13 01:56:46.032145 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Dec 13 01:56:46.032151 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Dec 13 01:56:46.032200 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Dec 13 01:56:46.032210 kernel: Initialise system trusted keyrings
Dec 13 01:56:46.032216 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Dec 13 01:56:46.032221 kernel: Key type asymmetric registered
Dec 13 01:56:46.032227 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:56:46.032232 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:56:46.032238 kernel: io scheduler mq-deadline registered
Dec 13 01:56:46.032244 kernel: io scheduler kyber registered
Dec 13 01:56:46.032250 kernel: io scheduler bfq registered
Dec 13 01:56:46.032296 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Dec 13 01:56:46.032343 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Dec 13 01:56:46.032388 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Dec 13 01:56:46.032435 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Dec 13 01:56:46.032483 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Dec 13 01:56:46.032568 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Dec 13 01:56:46.032618 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Dec 13 01:56:46.032628 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Dec 13 01:56:46.032634 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Dec 13 01:56:46.032639 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:56:46.032645 kernel: pstore: Registered erst as persistent store backend
Dec 13 01:56:46.032651 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:56:46.032657 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:56:46.032662 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:56:46.032668 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:56:46.032674 kernel: hpet_acpi_add: no address or irqs in _CRS
Dec 13 01:56:46.032725 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Dec 13 01:56:46.032734 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 01:56:46.032775 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Dec 13 01:56:46.032818 kernel: rtc_cmos rtc_cmos: registered as rtc0
Dec 13 01:56:46.032860 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T01:56:44 UTC (1734055004)
Dec 13 01:56:46.032903 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Dec 13 01:56:46.032911 kernel: intel_pstate: Intel P-state driver initializing
Dec 13 01:56:46.032917 kernel: intel_pstate: Disabling energy efficiency optimization
Dec 13 01:56:46.032924 kernel: intel_pstate: HWP enabled
Dec 13 01:56:46.032930 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Dec 13 01:56:46.032935 kernel: vesafb: scrolling: redraw
Dec 13 01:56:46.032941 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Dec 13 01:56:46.032947 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000a02b91ea, using 768k, total 768k
Dec 13 01:56:46.032953 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:56:46.032958 kernel: fb0: VESA VGA frame buffer device
Dec 13 01:56:46.032964 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:56:46.032970 kernel: Segment Routing with IPv6
Dec 13 01:56:46.032976 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:56:46.032982 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:56:46.032988 kernel: Key type dns_resolver registered
Dec 13 01:56:46.032993 kernel: microcode: Microcode Update Driver: v2.2.
Dec 13 01:56:46.032999 kernel: IPI shorthand broadcast: enabled
Dec 13 01:56:46.033005 kernel: sched_clock: Marking stable (2477000770, 1385644813)->(4406217006, -543571423)
Dec 13 01:56:46.033010 kernel: registered taskstats version 1
Dec 13 01:56:46.033016 kernel: Loading compiled-in X.509 certificates
Dec 13 01:56:46.033022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:56:46.033029 kernel: Key type .fscrypt registered
Dec 13 01:56:46.033034 kernel: Key type fscrypt-provisioning registered
Dec 13 01:56:46.033040 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:56:46.033046 kernel: ima: No architecture policies found
Dec 13 01:56:46.033051 kernel: clk: Disabling unused clocks
Dec 13 01:56:46.033057 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:56:46.033063 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:56:46.033069 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:56:46.033075 kernel: Run /init as init process
Dec 13 01:56:46.033081 kernel:   with arguments:
Dec 13 01:56:46.033087 kernel:     /init
Dec 13 01:56:46.033092 kernel:   with environment:
Dec 13 01:56:46.033098 kernel:     HOME=/
Dec 13 01:56:46.033103 kernel:     TERM=linux
Dec 13 01:56:46.033109 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:56:46.033116 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:56:46.033124 systemd[1]: Detected architecture x86-64.
Dec 13 01:56:46.033130 systemd[1]: Running in initrd.
Dec 13 01:56:46.033136 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:56:46.033142 systemd[1]: Hostname set to <localhost>.
Dec 13 01:56:46.033148 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:56:46.033154 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:56:46.033160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:56:46.033166 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:56:46.033173 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:56:46.033179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:56:46.033185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:56:46.033191 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:56:46.033197 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:56:46.033204 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Dec 13 01:56:46.033209 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Dec 13 01:56:46.033216 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:56:46.033222 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:56:46.033228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:56:46.033234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:56:46.033240 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:56:46.033246 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:56:46.033252 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:56:46.033258 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:56:46.033264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:56:46.033271 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:56:46.033277 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:56:46.033283 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:56:46.033289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:56:46.033295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:56:46.033301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:56:46.033307 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:56:46.033313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:56:46.033320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:56:46.033325 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:56:46.033331 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:56:46.033337 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:56:46.033353 systemd-journald[267]: Collecting audit messages is disabled.
Dec 13 01:56:46.033368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:56:46.033375 systemd-journald[267]: Journal started
Dec 13 01:56:46.033388 systemd-journald[267]: Runtime Journal (/run/log/journal/d54a3510d3c047118c82236dccc067d9) is 8.0M, max 639.9M, 631.9M free.
Dec 13 01:56:46.056861 systemd-modules-load[269]: Inserted module 'overlay'
Dec 13 01:56:46.079478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:56:46.107519 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:56:46.107823 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:56:46.179724 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:56:46.179737 kernel: Bridge firewalling registered
Dec 13 01:56:46.163659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:56:46.169024 systemd-modules-load[269]: Inserted module 'br_netfilter'
Dec 13 01:56:46.191788 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:56:46.211861 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:56:46.220837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:56:46.264892 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:56:46.268642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:56:46.288176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:56:46.288632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:56:46.291944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:56:46.293404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:56:46.294163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:56:46.295197 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:56:46.296302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:56:46.300409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:56:46.311809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:56:46.319882 systemd-resolved[302]: Positive Trust Anchors:
Dec 13 01:56:46.319889 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:56:46.319922 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:56:46.322093 systemd-resolved[302]: Defaulting to hostname 'linux'.
Dec 13 01:56:46.322777 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:56:46.355912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:56:46.371732 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:56:46.486703 dracut-cmdline[308]: dracut-dracut-053
Dec 13 01:56:46.494780 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:56:46.694506 kernel: SCSI subsystem initialized
Dec 13 01:56:46.716521 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:56:46.739519 kernel: iscsi: registered transport (tcp)
Dec 13 01:56:46.770426 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:56:46.770442 kernel: QLogic iSCSI HBA Driver
Dec 13 01:56:46.803711 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:56:46.822767 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:56:46.908393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:56:46.908422 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:56:46.928145 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:56:46.985557 kernel: raid6: avx2x4   gen() 53518 MB/s
Dec 13 01:56:47.017556 kernel: raid6: avx2x2   gen() 54032 MB/s
Dec 13 01:56:47.053971 kernel: raid6: avx2x1   gen() 45310 MB/s
Dec 13 01:56:47.053987 kernel: raid6: using algorithm avx2x2 gen() 54032 MB/s
Dec 13 01:56:47.101002 kernel: raid6: .... xor() 31060 MB/s, rmw enabled
Dec 13 01:56:47.101022 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:56:47.142480 kernel: xor: automatically using best checksumming function   avx
Dec 13 01:56:47.254535 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:56:47.259993 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:56:47.284790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:56:47.291296 systemd-udevd[492]: Using default interface naming scheme 'v255'.
Dec 13 01:56:47.296727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:56:47.333744 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:56:47.368997 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Dec 13 01:56:47.420427 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:56:47.448918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:56:47.536686 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:56:47.569467 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:56:47.569513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 01:56:47.580575 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:56:47.580649 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:56:47.597840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:56:47.658696 kernel: ACPI: bus type USB registered
Dec 13 01:56:47.658709 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:56:47.658716 kernel: usbcore: registered new interface driver hub
Dec 13 01:56:47.658724 kernel: usbcore: registered new device driver usb
Dec 13 01:56:47.597945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:56:47.709579 kernel: PTP clock support registered
Dec 13 01:56:47.709602 kernel: libata version 3.00 loaded.
Dec 13 01:56:47.709615 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:56:47.709628 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:56:47.709641 kernel: ahci 0000:00:17.0: version 3.0
Dec 13 01:56:48.203090 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Dec 13 01:56:48.203106 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Dec 13 01:56:48.203195 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Dec 13 01:56:48.203206 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst 
Dec 13 01:56:48.203281 kernel: pps pps0: new PPS source ptp0
Dec 13 01:56:48.203360 kernel: scsi host0: ahci
Dec 13 01:56:48.203433 kernel: igb 0000:03:00.0: added PHC on eth0
Dec 13 01:56:48.203516 kernel: scsi host1: ahci
Dec 13 01:56:48.203586 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 01:56:48.203662 kernel: scsi host2: ahci
Dec 13 01:56:48.203732 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 01:56:48.203805 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Dec 13 01:56:48.203875 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Dec 13 01:56:48.203947 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 01:56:48.204017 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Dec 13 01:56:48.204086 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Dec 13 01:56:48.204158 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:56:48.204242 kernel: hub 1-0:1.0: 16 ports detected
Dec 13 01:56:48.204318 kernel: hub 2-0:1.0: USB hub found
Dec 13 01:56:48.204400 kernel: hub 2-0:1.0: 10 ports detected
Dec 13 01:56:48.204479 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d8
Dec 13 01:56:48.204556 kernel: scsi host3: ahci
Dec 13 01:56:48.204624 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Dec 13 01:56:48.204699 kernel: scsi host4: ahci
Dec 13 01:56:48.204770 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 01:56:48.204842 kernel: scsi host5: ahci
Dec 13 01:56:48.204912 kernel: pps pps1: new PPS source ptp1
Dec 13 01:56:48.204978 kernel: scsi host6: ahci
Dec 13 01:56:48.205045 kernel: igb 0000:04:00.0: added PHC on eth1
Dec 13 01:56:48.205124 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Dec 13 01:56:48.205134 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 01:56:48.205207 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Dec 13 01:56:48.205217 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d9
Dec 13 01:56:48.205289 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Dec 13 01:56:48.205299 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Dec 13 01:56:48.205370 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Dec 13 01:56:48.205380 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 01:56:48.205451 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Dec 13 01:56:48.205462 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Dec 13 01:56:48.366948 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Dec 13 01:56:48.366958 kernel: hub 1-14:1.0: USB hub found
Dec 13 01:56:48.367033 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Dec 13 01:56:48.367042 kernel: hub 1-14:1.0: 4 ports detected
Dec 13 01:56:47.709621 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:56:48.417577 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Dec 13 01:56:48.878546 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 01:56:48.878628 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878638 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878646 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878653 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 01:56:48.878661 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878668 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 01:56:48.878679 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878688 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT,  D3MU001, max UDMA/133
Dec 13 01:56:48.878695 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 01:56:48.878763 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT,  D3MU001, max UDMA/133
Dec 13 01:56:48.878771 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Dec 13 01:56:48.878835 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Dec 13 01:56:48.878941 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 01:56:48.878950 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 01:56:48.878959 kernel: ata1.00: Features: NCQ-prio
Dec 13 01:56:48.878967 kernel: ata2.00: Features: NCQ-prio
Dec 13 01:56:48.878975 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:56:48.878982 kernel: ata1.00: configured for UDMA/133
Dec 13 01:56:48.878990 kernel: ata2.00: configured for UDMA/133
Dec 13 01:56:48.878997 kernel: scsi 0:0:0:0: Direct-Access     ATA      Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Dec 13 01:56:48.879071 kernel: scsi 1:0:0:0: Direct-Access     ATA      Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Dec 13 01:56:48.879136 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 01:56:47.728513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:56:48.909575 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Dec 13 01:56:49.573597 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Dec 13 01:56:49.573678 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 01:56:49.573745 kernel: usbcore: registered new interface driver usbhid
Dec 13 01:56:49.573758 kernel: usbhid: USB HID core driver
Dec 13 01:56:49.573765 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Dec 13 01:56:49.573773 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Dec 13 01:56:49.573837 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 01:56:49.573845 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Dec 13 01:56:49.573918 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.573926 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 01:56:49.573990 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 01:56:49.574049 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:56:49.574106 kernel: sd 1:0:0:0: [sda] Write Protect is off
Dec 13 01:56:49.574162 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Dec 13 01:56:49.574218 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:56:49.574274 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Dec 13 01:56:49.574330 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 01:56:49.574338 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Dec 13 01:56:49.574394 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Dec 13 01:56:49.574403 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Dec 13 01:56:49.574459 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Dec 13 01:56:49.574569 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Dec 13 01:56:49.574625 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 01:56:49.574685 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Dec 13 01:56:49.574741 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Dec 13 01:56:49.574803 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:56:49.574861 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Dec 13 01:56:49.574917 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.574925 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:56:49.574932 kernel: GPT:9289727 != 937703087
Dec 13 01:56:49.574939 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:56:49.574946 kernel: GPT:9289727 != 937703087
Dec 13 01:56:49.574953 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:56:49.574961 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:49.574969 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Dec 13 01:56:49.575025 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sdb3 scanned by (udev-worker) (698)
Dec 13 01:56:49.575033 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 01:56:49.575092 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (561)
Dec 13 01:56:47.728680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:56:49.687646 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Dec 13 01:56:49.687815 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Dec 13 01:56:47.788632 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:56:48.398717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:56:48.430680 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:56:48.441874 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:56:48.441909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:56:48.441933 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:56:48.451632 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:56:48.480652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:56:48.495883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:56:49.936578 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.936594 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:49.936602 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.936609 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:49.936616 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.936622 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:48.506679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:56:49.946554 disk-uuid[725]: Primary Header is updated.
Dec 13 01:56:49.946554 disk-uuid[725]: Secondary Entries is updated.
Dec 13 01:56:49.946554 disk-uuid[725]: Secondary Header is updated.
Dec 13 01:56:48.554209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:56:49.556241 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Dec 13 01:56:49.674108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Dec 13 01:56:49.702901 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Dec 13 01:56:49.721632 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Dec 13 01:56:49.757172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Dec 13 01:56:49.795806 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:56:50.898330 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:50.918001 disk-uuid[726]: The operation has completed successfully.
Dec 13 01:56:50.926596 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:50.951086 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:56:50.951133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:56:50.988788 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:56:51.026679 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:56:51.026742 sh[744]: Success
Dec 13 01:56:51.063868 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:56:51.082506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:56:51.086129 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:56:51.141623 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:56:51.141641 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:56:51.162684 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:56:51.181376 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:56:51.198728 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:56:51.233554 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:56:51.234358 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:56:51.242987 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:56:51.253530 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:56:51.278129 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:56:51.382501 kernel: BTRFS info (device sdb6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:56:51.382534 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:56:51.382557 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 01:56:51.382564 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 01:56:51.382571 kernel: BTRFS info (device sdb6): auto enabling async discard
Dec 13 01:56:51.375340 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:56:51.405540 kernel: BTRFS info (device sdb6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:56:51.411834 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:56:51.436690 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:56:51.457992 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:56:51.464693 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:56:51.496801 ignition[838]: Ignition 2.19.0
Dec 13 01:56:51.496806 ignition[838]: Stage: fetch-offline
Dec 13 01:56:51.498863 unknown[838]: fetched base config from "system"
Dec 13 01:56:51.496829 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:56:51.498867 unknown[838]: fetched user config from "system"
Dec 13 01:56:51.496834 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:56:51.507794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:56:51.496889 ignition[838]: parsed url from cmdline: ""
Dec 13 01:56:51.519414 systemd-networkd[929]: lo: Link UP
Dec 13 01:56:51.496891 ignition[838]: no config URL provided
Dec 13 01:56:51.519417 systemd-networkd[929]: lo: Gained carrier
Dec 13 01:56:51.496893 ignition[838]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:56:51.521870 systemd-networkd[929]: Enumeration completed
Dec 13 01:56:51.496916 ignition[838]: parsing config with SHA512: 10589354275b781bfbc981a54b8b8eaf3106eb086182c96cbcf75e43b20c64d080333f9f181c8ed91f46a9fd849b55eecafa9e8a821fc5621366de0ab188b7a3
Dec 13 01:56:51.522622 systemd-networkd[929]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.499084 ignition[838]: fetch-offline: fetch-offline passed
Dec 13 01:56:51.524658 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:56:51.499086 ignition[838]: POST message to Packet Timeline
Dec 13 01:56:51.532002 systemd[1]: Reached target network.target - Network.
Dec 13 01:56:51.499089 ignition[838]: POST Status error: resource requires networking
Dec 13 01:56:51.547742 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:56:51.499127 ignition[838]: Ignition finished successfully
Dec 13 01:56:51.550815 systemd-networkd[929]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.572183 ignition[942]: Ignition 2.19.0
Dec 13 01:56:51.559740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:56:51.572189 ignition[942]: Stage: kargs
Dec 13 01:56:51.579285 systemd-networkd[929]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.572341 ignition[942]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:56:51.792587 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Dec 13 01:56:51.791885 systemd-networkd[929]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.572350 ignition[942]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:56:51.573106 ignition[942]: kargs: kargs passed
Dec 13 01:56:51.573109 ignition[942]: POST message to Packet Timeline
Dec 13 01:56:51.573121 ignition[942]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:56:51.573691 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42266->[::1]:53: read: connection refused
Dec 13 01:56:51.773806 ignition[942]: GET https://metadata.packet.net/metadata: attempt #2
Dec 13 01:56:51.774267 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33728->[::1]:53: read: connection refused
Dec 13 01:56:52.034606 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Dec 13 01:56:52.035848 systemd-networkd[929]: eno1: Link UP
Dec 13 01:56:52.036082 systemd-networkd[929]: eno2: Link UP
Dec 13 01:56:52.036227 systemd-networkd[929]: enp1s0f0np0: Link UP
Dec 13 01:56:52.036404 systemd-networkd[929]: enp1s0f0np0: Gained carrier
Dec 13 01:56:52.045650 systemd-networkd[929]: enp1s0f1np1: Link UP
Dec 13 01:56:52.063627 systemd-networkd[929]: enp1s0f0np0: DHCPv4 address 147.28.180.91/31, gateway 147.28.180.90 acquired from 145.40.83.140
Dec 13 01:56:52.174455 ignition[942]: GET https://metadata.packet.net/metadata: attempt #3
Dec 13 01:56:52.175588 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34831->[::1]:53: read: connection refused
Dec 13 01:56:52.817243 systemd-networkd[929]: enp1s0f1np1: Gained carrier
Dec 13 01:56:52.976121 ignition[942]: GET https://metadata.packet.net/metadata: attempt #4
Dec 13 01:56:52.977172 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58483->[::1]:53: read: connection refused
Dec 13 01:56:53.073083 systemd-networkd[929]: enp1s0f0np0: Gained IPv6LL
Dec 13 01:56:53.969076 systemd-networkd[929]: enp1s0f1np1: Gained IPv6LL
Dec 13 01:56:54.578581 ignition[942]: GET https://metadata.packet.net/metadata: attempt #5
Dec 13 01:56:54.579761 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48076->[::1]:53: read: connection refused
Dec 13 01:56:57.782145 ignition[942]: GET https://metadata.packet.net/metadata: attempt #6
Dec 13 01:56:58.559401 ignition[942]: GET result: OK
Dec 13 01:56:58.876161 ignition[942]: Ignition finished successfully
Dec 13 01:56:58.880638 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:56:58.908765 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:56:58.914976 ignition[962]: Ignition 2.19.0
Dec 13 01:56:58.914981 ignition[962]: Stage: disks
Dec 13 01:56:58.915096 ignition[962]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:56:58.915104 ignition[962]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:56:58.915645 ignition[962]: disks: disks passed
Dec 13 01:56:58.915648 ignition[962]: POST message to Packet Timeline
Dec 13 01:56:58.915657 ignition[962]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:56:59.443423 ignition[962]: GET result: OK
Dec 13 01:56:59.859457 ignition[962]: Ignition finished successfully
Dec 13 01:56:59.862863 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:56:59.877836 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:56:59.896758 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:56:59.917740 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:56:59.939880 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:56:59.959786 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:56:59.992766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:57:00.025818 systemd-fsck[983]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:57:00.035974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:57:00.057699 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:57:00.154264 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:57:00.169728 kernel: EXT4-fs (sdb9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:57:00.162927 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:57:00.185631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:57:00.189438 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:57:00.309784 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (992)
Dec 13 01:57:00.309798 kernel: BTRFS info (device sdb6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:57:00.309807 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:57:00.309814 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 01:57:00.309821 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 01:57:00.309831 kernel: BTRFS info (device sdb6): auto enabling async discard
Dec 13 01:57:00.293586 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:57:00.327802 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Dec 13 01:57:00.350554 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:57:00.350574 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:57:00.379781 coreos-metadata[994]: Dec 13 01:57:00.366 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 01:57:00.411704 coreos-metadata[1010]: Dec 13 01:57:00.365 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 01:57:00.362480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:57:00.397806 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:57:00.441827 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:57:00.469961 initrd-setup-root[1024]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:57:00.480608 initrd-setup-root[1031]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:57:00.491632 initrd-setup-root[1038]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:57:00.502611 initrd-setup-root[1045]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:57:00.527227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:57:00.552675 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:57:00.571656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:57:00.605676 kernel: BTRFS info (device sdb6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:57:00.598263 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:57:00.619330 ignition[1112]: INFO     : Ignition 2.19.0
Dec 13 01:57:00.619330 ignition[1112]: INFO     : Stage: mount
Dec 13 01:57:00.627588 ignition[1112]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:00.627588 ignition[1112]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:57:00.627588 ignition[1112]: INFO     : mount: mount passed
Dec 13 01:57:00.627588 ignition[1112]: INFO     : POST message to Packet Timeline
Dec 13 01:57:00.627588 ignition[1112]: INFO     : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:57:00.622524 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:57:01.026558 coreos-metadata[1010]: Dec 13 01:57:01.026 INFO Fetch successful
Dec 13 01:57:01.102874 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Dec 13 01:57:01.102937 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Dec 13 01:57:01.135554 coreos-metadata[994]: Dec 13 01:57:01.133 INFO Fetch successful
Dec 13 01:57:01.165344 coreos-metadata[994]: Dec 13 01:57:01.165 INFO wrote hostname ci-4081.2.1-a-5a9deb00aa to /sysroot/etc/hostname
Dec 13 01:57:01.166824 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:57:01.231859 ignition[1112]: INFO     : GET result: OK
Dec 13 01:57:01.636537 ignition[1112]: INFO     : Ignition finished successfully
Dec 13 01:57:01.639367 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:57:01.672749 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:57:01.682754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:57:01.742170 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1134)
Dec 13 01:57:01.742188 kernel: BTRFS info (device sdb6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:57:01.761192 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:57:01.777980 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 01:57:01.814292 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 01:57:01.814311 kernel: BTRFS info (device sdb6): auto enabling async discard
Dec 13 01:57:01.826802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:57:01.854045 ignition[1151]: INFO     : Ignition 2.19.0
Dec 13 01:57:01.854045 ignition[1151]: INFO     : Stage: files
Dec 13 01:57:01.868755 ignition[1151]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:57:01.868755 ignition[1151]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 01:57:01.868755 ignition[1151]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:57:01.858232 unknown[1151]: wrote ssh authorized keys file for user: core
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.255834 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:57:02.344423 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:57:02.622968 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.622968 ignition[1151]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: files passed
Dec 13 01:57:02.653804 ignition[1151]: INFO     : POST message to Packet Timeline
Dec 13 01:57:02.653804 ignition[1151]: INFO     : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:57:03.262760 ignition[1151]: INFO     : GET result: OK
Dec 13 01:57:03.578117 ignition[1151]: INFO     : Ignition finished successfully
Dec 13 01:57:03.579767 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:57:03.615736 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:57:03.616239 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:57:03.644979 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:57:03.645055 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:57:03.696760 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:03.696760 initrd-setup-root-after-ignition[1189]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:03.667029 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:57:03.755683 initrd-setup-root-after-ignition[1193]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:03.687785 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:57:03.721733 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:57:03.773910 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:57:03.774049 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:57:03.792823 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:57:03.812753 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:57:03.830867 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:57:03.844868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:57:03.939542 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:57:03.957920 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:57:04.008590 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:57:04.020102 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:57:04.042165 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:57:04.060092 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:57:04.060522 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:57:04.089201 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:57:04.111096 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:57:04.129203 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:57:04.148099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:57:04.170101 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:57:04.192109 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:57:04.212103 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:57:04.233135 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:57:04.254123 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:57:04.274084 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:57:04.292076 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:57:04.292496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:57:04.326952 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:57:04.337120 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:57:04.358064 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:57:04.358528 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:57:04.381084 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:57:04.381505 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:57:04.413069 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:57:04.413570 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:57:04.434296 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:57:04.451963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:57:04.452444 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:57:04.473108 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:57:04.491102 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:57:04.509073 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:57:04.509371 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:57:04.529129 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:57:04.529427 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:57:04.552167 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:57:04.667671 ignition[1214]: INFO     : Ignition 2.19.0
Dec 13 01:57:04.667671 ignition[1214]: INFO     : Stage: umount
Dec 13 01:57:04.667671 ignition[1214]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:04.667671 ignition[1214]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:57:04.667671 ignition[1214]: INFO     : umount: umount passed
Dec 13 01:57:04.667671 ignition[1214]: INFO     : POST message to Packet Timeline
Dec 13 01:57:04.667671 ignition[1214]: INFO     : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:57:04.552606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:57:04.573173 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:57:04.573583 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:57:04.592121 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:57:04.592537 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:57:04.626664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:57:04.629609 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:57:04.629692 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:57:04.662666 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:57:04.675592 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:57:04.675773 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:57:04.682813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:57:04.682875 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:57:04.730676 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:57:04.733007 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:57:04.733139 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:57:04.826924 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:57:04.827015 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:57:05.244426 ignition[1214]: INFO     : GET result: OK
Dec 13 01:57:05.616443 ignition[1214]: INFO     : Ignition finished successfully
Dec 13 01:57:05.619628 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:57:05.619918 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:57:05.636952 systemd[1]: Stopped target network.target - Network.
Dec 13 01:57:05.653761 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:57:05.653963 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:57:05.673871 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:57:05.674032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:57:05.691994 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:57:05.692150 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:57:05.710974 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:57:05.711142 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:57:05.729972 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:57:05.730145 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:57:05.749246 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:57:05.760623 systemd-networkd[929]: enp1s0f0np0: DHCPv6 lease lost
Dec 13 01:57:05.766964 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:57:05.769681 systemd-networkd[929]: enp1s0f1np1: DHCPv6 lease lost
Dec 13 01:57:05.785543 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:57:05.785819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:57:05.805781 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:57:05.806176 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:57:05.827158 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:57:05.827278 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:57:05.859549 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:57:05.876662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:57:05.876702 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:57:05.898855 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:57:05.898928 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:57:05.916865 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:57:05.916979 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:57:05.937963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:57:05.938127 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:57:05.957189 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:57:05.978889 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:57:05.979343 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:57:06.019016 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:57:06.019051 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:57:06.047709 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:57:06.047745 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:57:06.067667 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:57:06.067758 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:57:06.100041 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:57:06.100202 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:57:06.128930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:57:06.129088 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:57:06.177725 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:57:06.220554 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:57:06.220598 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:57:06.241727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:57:06.241792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:57:06.264095 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:57:06.264338 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:57:06.285332 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:57:06.285600 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:57:06.307437 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:57:06.335725 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:57:06.354614 systemd[1]: Switching root.
Dec 13 01:57:06.407727 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:57:06.511584 systemd-journald[267]: Journal stopped
Dec 13 01:56:46.020132 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 01:56:46.020136 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 01:56:46.020142 kernel: Using GB pages for direct mapping
Dec 13 01:56:46.020147 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:56:46.020152 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 01:56:46.020159 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM   01072009 AMI  00010013)
Dec 13 01:56:46.020164 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06                 01072009 AMI  00010013)
Dec 13 01:56:46.020169 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 01:56:46.020174 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 01:56:46.020180 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04                 01072009 AMI  00010013)
Dec 13 01:56:46.020185 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01                 01072009 AMI  00010013)
Dec 13 01:56:46.020190 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI  00010013)
Dec 13 01:56:46.020194 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 01:56:46.020199 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 01:56:46.020204 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt  00003000 INTL 20160527)
Dec 13 01:56:46.020209 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt   00003000 INTL 20160527)
Dec 13 01:56:46.020215 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt  00001000 INTL 20160527)
Dec 13 01:56:46.020220 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020225 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 01:56:46.020230 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL  xh_mossb 00000000 INTL 20160527)
Dec 13 01:56:46.020235 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020240 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020245 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 01:56:46.020249 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 01:56:46.020254 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020260 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002      01000013)
Dec 13 01:56:46.020265 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 01:56:46.020270 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL  EDK2     00000002      01000013)
Dec 13 01:56:46.020275 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel  ADebTabl 00001000 INTL 20160527)
Dec 13 01:56:46.020280 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI  00000000)
Dec 13 01:56:46.020285 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL  SpsNm    00000002 INTL 20160527)
Dec 13 01:56:46.020290 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM          01072009 AMI  00010013)
Dec 13 01:56:46.020295 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI    AMI.EINJ 00000000 AMI. 00000000)
Dec 13 01:56:46.020301 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER  AMI.ERST 00000000 AMI. 00000000)
Dec 13 01:56:46.020306 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI    AMI.BERT 00000000 AMI. 00000000)
Dec 13 01:56:46.020311 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI    AMI.HEST 00000000 AMI. 00000000)
Dec 13 01:56:46.020315 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN   00000000 INTL 20181221)
Dec 13 01:56:46.020320 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 01:56:46.020325 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 01:56:46.020330 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 01:56:46.020335 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 01:56:46.020340 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 01:56:46.020346 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 01:56:46.020351 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 01:56:46.020356 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 01:56:46.020361 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 01:56:46.020366 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 01:56:46.020370 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 01:56:46.020375 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 01:56:46.020380 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 01:56:46.020385 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 01:56:46.020391 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 01:56:46.020396 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 01:56:46.020401 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 01:56:46.020406 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 01:56:46.020411 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 01:56:46.020416 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 01:56:46.020420 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 01:56:46.020425 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 01:56:46.020430 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 01:56:46.020435 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 01:56:46.020441 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 01:56:46.020446 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 01:56:46.020451 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 01:56:46.020456 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 01:56:46.020460 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 01:56:46.020465 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 01:56:46.020470 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 01:56:46.020478 kernel: No NUMA configuration found
Dec 13 01:56:46.020483 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 01:56:46.020489 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 01:56:46.020494 kernel: Zone ranges:
Dec 13 01:56:46.020499 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:56:46.020504 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:56:46.020509 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Dec 13 01:56:46.020514 kernel: Movable zone start for each node
Dec 13 01:56:46.020519 kernel: Early memory node ranges
Dec 13 01:56:46.020524 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 01:56:46.020529 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 01:56:46.020535 kernel:   node   0: [mem 0x0000000040400000-0x0000000081b25fff]
Dec 13 01:56:46.020540 kernel:   node   0: [mem 0x0000000081b28000-0x000000008afccfff]
Dec 13 01:56:46.020545 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 01:56:46.020550 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 01:56:46.020558 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 01:56:46.020564 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 01:56:46.020569 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:56:46.020575 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 01:56:46.020581 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 01:56:46.020586 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 01:56:46.020592 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 01:56:46.020597 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 01:56:46.020602 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 01:56:46.020608 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 01:56:46.020613 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 01:56:46.020618 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 01:56:46.020624 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 01:56:46.020630 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 01:56:46.020635 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 01:56:46.020640 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 01:56:46.020646 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 01:56:46.020651 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 01:56:46.020656 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 01:56:46.020661 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 01:56:46.020667 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 01:56:46.020672 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 01:56:46.020677 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 01:56:46.020683 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 01:56:46.020688 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 01:56:46.020694 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 01:56:46.020699 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 01:56:46.020704 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 01:56:46.020710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:56:46.020715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:56:46.020720 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:56:46.020726 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:56:46.020732 kernel: TSC deadline timer available
Dec 13 01:56:46.020737 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 01:56:46.020743 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 01:56:46.020748 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 01:56:46.020754 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:56:46.020759 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 01:56:46.020764 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 01:56:46.020770 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 01:56:46.020775 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 
Dec 13 01:56:46.020781 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:56:46.020787 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:56:46.020792 kernel: random: crng init done
Dec 13 01:56:46.020798 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 01:56:46.020803 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 01:56:46.020808 kernel: Fallback order for Node 0: 0 
Dec 13 01:56:46.020813 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 8232415
Dec 13 01:56:46.020819 kernel: Policy zone: Normal
Dec 13 01:56:46.020825 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:56:46.020830 kernel: software IO TLB: area num 16.
Dec 13 01:56:46.020836 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 732408K reserved, 0K cma-reserved)
Dec 13 01:56:46.020841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 01:56:46.020847 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:56:46.020852 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:56:46.020857 kernel: Dynamic Preempt: voluntary
Dec 13 01:56:46.020863 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:56:46.020868 kernel: rcu:         RCU event tracing is enabled.
Dec 13 01:56:46.020875 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 01:56:46.020880 kernel:         Trampoline variant of Tasks RCU enabled.
Dec 13 01:56:46.020885 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 01:56:46.020891 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 01:56:46.020896 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:56:46.020901 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 01:56:46.020907 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 01:56:46.020912 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:56:46.020917 kernel: Console: colour dummy device 80x25
Dec 13 01:56:46.020923 kernel: printk: console [tty0] enabled
Dec 13 01:56:46.020929 kernel: printk: console [ttyS1] enabled
Dec 13 01:56:46.020934 kernel: ACPI: Core revision 20230628
Dec 13 01:56:46.020939 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 01:56:46.020945 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:56:46.020950 kernel: DMAR: Host address width 39
Dec 13 01:56:46.020955 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 01:56:46.020961 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 01:56:46.020966 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 01:56:46.020972 kernel: DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 0
Dec 13 01:56:46.020977 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 01:56:46.020983 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 01:56:46.020988 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 01:56:46.020993 kernel: x2apic enabled
Dec 13 01:56:46.020999 kernel: APIC: Switched APIC routing to: cluster x2apic
Dec 13 01:56:46.021004 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 01:56:46.021010 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 01:56:46.021015 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 01:56:46.021021 kernel: process: using mwait in idle threads
Dec 13 01:56:46.021026 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:56:46.021032 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:56:46.021037 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:56:46.021042 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 01:56:46.021047 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 01:56:46.021053 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 01:56:46.021058 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:56:46.021063 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 01:56:46.021068 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 01:56:46.021074 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:56:46.021080 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:56:46.021085 kernel: TAA: Mitigation: TSX disabled
Dec 13 01:56:46.021090 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 01:56:46.021096 kernel: SRBDS: Mitigation: Microcode
Dec 13 01:56:46.021101 kernel: GDS: Mitigation: Microcode
Dec 13 01:56:46.021106 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:56:46.021111 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:56:46.021117 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:56:46.021122 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 01:56:46.021127 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 01:56:46.021132 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 01:56:46.021139 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Dec 13 01:56:46.021144 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Dec 13 01:56:46.021149 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 01:56:46.021154 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:56:46.021160 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:56:46.021165 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:56:46.021170 kernel: landlock: Up and running.
Dec 13 01:56:46.021175 kernel: SELinux:  Initializing.
Dec 13 01:56:46.021181 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.021186 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.021191 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 01:56:46.021197 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 01:56:46.021203 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 01:56:46.021208 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 01:56:46.021214 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 01:56:46.021219 kernel: ... version:                4
Dec 13 01:56:46.021224 kernel: ... bit width:              48
Dec 13 01:56:46.021230 kernel: ... generic registers:      4
Dec 13 01:56:46.021235 kernel: ... value mask:             0000ffffffffffff
Dec 13 01:56:46.021240 kernel: ... max period:             00007fffffffffff
Dec 13 01:56:46.021246 kernel: ... fixed-purpose events:   3
Dec 13 01:56:46.021252 kernel: ... event mask:             000000070000000f
Dec 13 01:56:46.021257 kernel: signal: max sigframe size: 2032
Dec 13 01:56:46.021262 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 01:56:46.021268 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:56:46.021273 kernel: rcu:         Max phase no-delay instances is 400.
Dec 13 01:56:46.021278 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 01:56:46.021284 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:56:46.021289 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:56:46.021295 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8  #9 #10 #11 #12 #13 #14 #15
Dec 13 01:56:46.021301 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:56:46.021306 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 01:56:46.021311 kernel: smpboot: Max logical packages: 1
Dec 13 01:56:46.021317 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 01:56:46.021322 kernel: devtmpfs: initialized
Dec 13 01:56:46.021327 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:56:46.021333 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Dec 13 01:56:46.021338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 01:56:46.021344 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:56:46.021350 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 01:56:46.021355 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:56:46.021361 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:56:46.021366 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:56:46.021371 kernel: audit: type=2000 audit(1734055000.039:1): state=initialized audit_enabled=0 res=1
Dec 13 01:56:46.021376 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:56:46.021382 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:56:46.021387 kernel: cpuidle: using governor menu
Dec 13 01:56:46.021393 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:56:46.021398 kernel: dca service started, version 1.12.1
Dec 13 01:56:46.021404 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 01:56:46.021409 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:56:46.021415 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 01:56:46.021420 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:56:46.021425 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:56:46.021431 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:56:46.021436 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:56:46.021442 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:56:46.021447 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:56:46.021453 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:56:46.021458 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:56:46.021463 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:56:46.021469 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 01:56:46.021476 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021481 kernel: ACPI: SSDT 0xFFFF9F9581EC0400 000400 (v02 PmRef  Cpu0Cst  00003001 INTL 20160527)
Dec 13 01:56:46.021505 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021526 kernel: ACPI: SSDT 0xFFFF9F9581EBF800 000683 (v02 PmRef  Cpu0Ist  00003000 INTL 20160527)
Dec 13 01:56:46.021531 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021536 kernel: ACPI: SSDT 0xFFFF9F9581569100 0000F4 (v02 PmRef  Cpu0Psd  00003000 INTL 20160527)
Dec 13 01:56:46.021541 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021547 kernel: ACPI: SSDT 0xFFFF9F9581EB8800 0005FC (v02 PmRef  ApIst    00003000 INTL 20160527)
Dec 13 01:56:46.021552 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021557 kernel: ACPI: SSDT 0xFFFF9F9581ECA000 000AB0 (v02 PmRef  ApPsd    00003000 INTL 20160527)
Dec 13 01:56:46.021562 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 01:56:46.021568 kernel: ACPI: SSDT 0xFFFF9F9581EC7400 00030A (v02 PmRef  ApCst    00003000 INTL 20160527)
Dec 13 01:56:46.021574 kernel: ACPI: _OSC evaluated successfully for all CPUs
Dec 13 01:56:46.021579 kernel: ACPI: Interpreter enabled
Dec 13 01:56:46.021584 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:56:46.021589 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:56:46.021595 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 01:56:46.021600 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 01:56:46.021605 kernel: HEST: Table parsing has been initialized.
Dec 13 01:56:46.021611 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 01:56:46.021616 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:56:46.021622 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:56:46.021627 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 01:56:46.021633 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Dec 13 01:56:46.021638 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Dec 13 01:56:46.021644 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Dec 13 01:56:46.021649 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Dec 13 01:56:46.021654 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Dec 13 01:56:46.021660 kernel: ACPI: \_TZ_.FN00: New power resource
Dec 13 01:56:46.021665 kernel: ACPI: \_TZ_.FN01: New power resource
Dec 13 01:56:46.021671 kernel: ACPI: \_TZ_.FN02: New power resource
Dec 13 01:56:46.021677 kernel: ACPI: \_TZ_.FN03: New power resource
Dec 13 01:56:46.021682 kernel: ACPI: \_TZ_.FN04: New power resource
Dec 13 01:56:46.021687 kernel: ACPI: \PIN_: New power resource
Dec 13 01:56:46.021693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 01:56:46.021763 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:56:46.021816 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 01:56:46.021862 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 01:56:46.021871 kernel: PCI host bridge to bus 0000:00
Dec 13 01:56:46.021924 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 01:56:46.021966 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 01:56:46.022007 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:56:46.022048 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 01:56:46.022088 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 01:56:46.022128 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 01:56:46.022186 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 01:56:46.022241 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 01:56:46.022289 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.022340 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 01:56:46.022386 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 01:56:46.022435 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 01:56:46.022488 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 01:56:46.022539 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 01:56:46.022586 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 01:56:46.022632 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 01:56:46.022682 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 01:56:46.022729 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 01:56:46.022777 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 01:56:46.022827 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 01:56:46.022873 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 01:56:46.022926 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 01:56:46.022971 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 01:56:46.023021 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 01:56:46.023068 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 01:56:46.023116 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 01:56:46.023172 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 01:56:46.023221 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 01:56:46.023266 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 01:56:46.023316 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 01:56:46.023362 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 01:56:46.023411 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 01:56:46.023461 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 01:56:46.023533 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 01:56:46.023596 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 01:56:46.023641 kernel: pci 0000:00:17.0: reg 0x18: [io  0x6050-0x6057]
Dec 13 01:56:46.023687 kernel: pci 0000:00:17.0: reg 0x1c: [io  0x6040-0x6043]
Dec 13 01:56:46.023732 kernel: pci 0000:00:17.0: reg 0x20: [io  0x6020-0x603f]
Dec 13 01:56:46.023782 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 01:56:46.023827 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 01:56:46.023878 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 01:56:46.023925 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.023981 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 01:56:46.024028 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024078 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 01:56:46.024125 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024176 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 01:56:46.024226 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024276 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 01:56:46.024323 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.024372 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 01:56:46.024419 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 01:56:46.024468 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 01:56:46.024566 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 01:56:46.024615 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 01:56:46.024660 kernel: pci 0000:00:1f.4: reg 0x20: [io  0xefa0-0xefbf]
Dec 13 01:56:46.024713 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 01:56:46.024759 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 01:56:46.024812 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 01:56:46.024860 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.024910 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 01:56:46.024957 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 01:56:46.025005 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 01:56:46.025053 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 01:56:46.025107 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 01:56:46.025156 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 01:56:46.025202 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Dec 13 01:56:46.025252 kernel: pci 0000:01:00.1: PME# supported from D3cold
Dec 13 01:56:46.025300 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 01:56:46.025347 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 01:56:46.025394 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:56:46.025440 kernel: pci 0000:00:01.0:   bridge window [mem 0x95100000-0x952fffff]
Dec 13 01:56:46.025495 kernel: pci 0000:00:01.0:   bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.025578 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 01:56:46.025630 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Dec 13 01:56:46.025680 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Dec 13 01:56:46.025728 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Dec 13 01:56:46.025775 kernel: pci 0000:03:00.0: reg 0x18: [io  0x5000-0x501f]
Dec 13 01:56:46.025823 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Dec 13 01:56:46.025871 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.025917 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 01:56:46.025964 kernel: pci 0000:00:1b.4:   bridge window [io  0x5000-0x5fff]
Dec 13 01:56:46.026012 kernel: pci 0000:00:1b.4:   bridge window [mem 0x95400000-0x954fffff]
Dec 13 01:56:46.026065 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 01:56:46.026112 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 01:56:46.026159 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Dec 13 01:56:46.026207 kernel: pci 0000:04:00.0: reg 0x18: [io  0x4000-0x401f]
Dec 13 01:56:46.026254 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Dec 13 01:56:46.026302 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 01:56:46.026352 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 01:56:46.026399 kernel: pci 0000:00:1b.5:   bridge window [io  0x4000-0x4fff]
Dec 13 01:56:46.026444 kernel: pci 0000:00:1b.5:   bridge window [mem 0x95300000-0x953fffff]
Dec 13 01:56:46.026548 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 01:56:46.026614 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 01:56:46.026663 kernel: pci 0000:06:00.0: enabling Extended Tags
Dec 13 01:56:46.026711 kernel: pci 0000:06:00.0: supports D1 D2
Dec 13 01:56:46.026758 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:56:46.026808 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 01:56:46.026855 kernel: pci 0000:00:1c.3:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.026901 kernel: pci 0000:00:1c.3:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.026955 kernel: pci_bus 0000:07: extended config space not accessible
Dec 13 01:56:46.027011 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 01:56:46.027062 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Dec 13 01:56:46.027114 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Dec 13 01:56:46.027165 kernel: pci 0000:07:00.0: reg 0x18: [io  0x3000-0x307f]
Dec 13 01:56:46.027214 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:56:46.027262 kernel: pci 0000:07:00.0: supports D1 D2
Dec 13 01:56:46.027312 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:56:46.027360 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 01:56:46.027407 kernel: pci 0000:06:00.0:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.027454 kernel: pci 0000:06:00.0:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.027463 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 13 01:56:46.027469 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 13 01:56:46.027478 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 13 01:56:46.027501 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 13 01:56:46.027507 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 13 01:56:46.027512 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 13 01:56:46.027533 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 13 01:56:46.027538 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 13 01:56:46.027544 kernel: iommu: Default domain type: Translated
Dec 13 01:56:46.027551 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:56:46.027556 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:56:46.027562 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:56:46.027568 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Dec 13 01:56:46.027573 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff]
Dec 13 01:56:46.027579 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Dec 13 01:56:46.027584 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Dec 13 01:56:46.027590 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Dec 13 01:56:46.027595 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Dec 13 01:56:46.027646 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Dec 13 01:56:46.027697 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Dec 13 01:56:46.027745 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:56:46.027754 kernel: vgaarb: loaded
Dec 13 01:56:46.027760 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 01:56:46.027766 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:56:46.027771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:56:46.027777 kernel: pnp: PnP ACPI init
Dec 13 01:56:46.027826 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Dec 13 01:56:46.027875 kernel: pnp 00:02: [dma 0 disabled]
Dec 13 01:56:46.027922 kernel: pnp 00:03: [dma 0 disabled]
Dec 13 01:56:46.027971 kernel: system 00:04: [io  0x0680-0x069f] has been reserved
Dec 13 01:56:46.028013 kernel: system 00:04: [io  0x164e-0x164f] has been reserved
Dec 13 01:56:46.028058 kernel: system 00:05: [io  0x1854-0x1857] has been reserved
Dec 13 01:56:46.028103 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Dec 13 01:56:46.028149 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Dec 13 01:56:46.028190 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Dec 13 01:56:46.028233 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Dec 13 01:56:46.028278 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec 13 01:56:46.028321 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec 13 01:56:46.028363 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec 13 01:56:46.028407 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec 13 01:56:46.028455 kernel: system 00:07: [io  0x1800-0x18fe] could not be reserved
Dec 13 01:56:46.028520 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Dec 13 01:56:46.028576 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Dec 13 01:56:46.028618 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Dec 13 01:56:46.028661 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Dec 13 01:56:46.028703 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Dec 13 01:56:46.028748 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Dec 13 01:56:46.028793 kernel: system 00:08: [io  0x2000-0x20fe] has been reserved
Dec 13 01:56:46.028801 kernel: pnp: PnP ACPI: found 10 devices
Dec 13 01:56:46.028807 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:56:46.028813 kernel: NET: Registered PF_INET protocol family
Dec 13 01:56:46.028819 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:56:46.028825 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 01:56:46.028831 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:56:46.028836 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:56:46.028844 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 01:56:46.028849 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Dec 13 01:56:46.028856 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.028862 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:56:46.028867 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:56:46.028873 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:56:46.028920 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Dec 13 01:56:46.028967 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Dec 13 01:56:46.029016 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Dec 13 01:56:46.029064 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029112 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029160 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029208 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 01:56:46.029255 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 01:56:46.029302 kernel: pci 0000:00:01.0:   bridge window [mem 0x95100000-0x952fffff]
Dec 13 01:56:46.029348 kernel: pci 0000:00:01.0:   bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.029397 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 01:56:46.029443 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 01:56:46.029510 kernel: pci 0000:00:1b.4:   bridge window [io  0x5000-0x5fff]
Dec 13 01:56:46.029571 kernel: pci 0000:00:1b.4:   bridge window [mem 0x95400000-0x954fffff]
Dec 13 01:56:46.029616 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 01:56:46.029665 kernel: pci 0000:00:1b.5:   bridge window [io  0x4000-0x4fff]
Dec 13 01:56:46.029711 kernel: pci 0000:00:1b.5:   bridge window [mem 0x95300000-0x953fffff]
Dec 13 01:56:46.029757 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 01:56:46.029804 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 01:56:46.029852 kernel: pci 0000:06:00.0:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.029900 kernel: pci 0000:06:00.0:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.029945 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 01:56:46.029992 kernel: pci 0000:00:1c.3:   bridge window [io  0x3000-0x3fff]
Dec 13 01:56:46.030037 kernel: pci 0000:00:1c.3:   bridge window [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.030083 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 01:56:46.030125 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 01:56:46.030167 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 01:56:46.030208 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:56:46.030248 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Dec 13 01:56:46.030288 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Dec 13 01:56:46.030335 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Dec 13 01:56:46.030380 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 01:56:46.030429 kernel: pci_bus 0000:03: resource 0 [io  0x5000-0x5fff]
Dec 13 01:56:46.030472 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Dec 13 01:56:46.030554 kernel: pci_bus 0000:04: resource 0 [io  0x4000-0x4fff]
Dec 13 01:56:46.030597 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Dec 13 01:56:46.030643 kernel: pci_bus 0000:06: resource 0 [io  0x3000-0x3fff]
Dec 13 01:56:46.030688 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.030734 kernel: pci_bus 0000:07: resource 0 [io  0x3000-0x3fff]
Dec 13 01:56:46.030777 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 01:56:46.030785 kernel: PCI: CLS 64 bytes, default 64
Dec 13 01:56:46.030791 kernel: DMAR: No ATSR found
Dec 13 01:56:46.030797 kernel: DMAR: No SATC found
Dec 13 01:56:46.030803 kernel: DMAR: dmar0: Using Queued invalidation
Dec 13 01:56:46.030849 kernel: pci 0000:00:00.0: Adding to iommu group 0
Dec 13 01:56:46.030899 kernel: pci 0000:00:01.0: Adding to iommu group 1
Dec 13 01:56:46.030945 kernel: pci 0000:00:08.0: Adding to iommu group 2
Dec 13 01:56:46.030992 kernel: pci 0000:00:12.0: Adding to iommu group 3
Dec 13 01:56:46.031037 kernel: pci 0000:00:14.0: Adding to iommu group 4
Dec 13 01:56:46.031083 kernel: pci 0000:00:14.2: Adding to iommu group 4
Dec 13 01:56:46.031128 kernel: pci 0000:00:15.0: Adding to iommu group 5
Dec 13 01:56:46.031174 kernel: pci 0000:00:15.1: Adding to iommu group 5
Dec 13 01:56:46.031219 kernel: pci 0000:00:16.0: Adding to iommu group 6
Dec 13 01:56:46.031265 kernel: pci 0000:00:16.1: Adding to iommu group 6
Dec 13 01:56:46.031313 kernel: pci 0000:00:16.4: Adding to iommu group 6
Dec 13 01:56:46.031359 kernel: pci 0000:00:17.0: Adding to iommu group 7
Dec 13 01:56:46.031404 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Dec 13 01:56:46.031450 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Dec 13 01:56:46.031499 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Dec 13 01:56:46.031585 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Dec 13 01:56:46.031631 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Dec 13 01:56:46.031676 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Dec 13 01:56:46.031725 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Dec 13 01:56:46.031771 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Dec 13 01:56:46.031818 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Dec 13 01:56:46.031864 kernel: pci 0000:01:00.0: Adding to iommu group 1
Dec 13 01:56:46.031913 kernel: pci 0000:01:00.1: Adding to iommu group 1
Dec 13 01:56:46.031961 kernel: pci 0000:03:00.0: Adding to iommu group 15
Dec 13 01:56:46.032009 kernel: pci 0000:04:00.0: Adding to iommu group 16
Dec 13 01:56:46.032056 kernel: pci 0000:06:00.0: Adding to iommu group 17
Dec 13 01:56:46.032108 kernel: pci 0000:07:00.0: Adding to iommu group 17
Dec 13 01:56:46.032116 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Dec 13 01:56:46.032122 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:56:46.032128 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Dec 13 01:56:46.032134 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Dec 13 01:56:46.032140 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Dec 13 01:56:46.032145 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Dec 13 01:56:46.032151 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Dec 13 01:56:46.032200 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Dec 13 01:56:46.032210 kernel: Initialise system trusted keyrings
Dec 13 01:56:46.032216 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Dec 13 01:56:46.032221 kernel: Key type asymmetric registered
Dec 13 01:56:46.032227 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:56:46.032232 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:56:46.032238 kernel: io scheduler mq-deadline registered
Dec 13 01:56:46.032244 kernel: io scheduler kyber registered
Dec 13 01:56:46.032250 kernel: io scheduler bfq registered
Dec 13 01:56:46.032296 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Dec 13 01:56:46.032343 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Dec 13 01:56:46.032388 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Dec 13 01:56:46.032435 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Dec 13 01:56:46.032483 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Dec 13 01:56:46.032568 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Dec 13 01:56:46.032618 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Dec 13 01:56:46.032628 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Dec 13 01:56:46.032634 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Dec 13 01:56:46.032639 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:56:46.032645 kernel: pstore: Registered erst as persistent store backend
Dec 13 01:56:46.032651 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:56:46.032657 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:56:46.032662 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:56:46.032668 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:56:46.032674 kernel: hpet_acpi_add: no address or irqs in _CRS
Dec 13 01:56:46.032725 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Dec 13 01:56:46.032734 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 01:56:46.032775 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Dec 13 01:56:46.032818 kernel: rtc_cmos rtc_cmos: registered as rtc0
Dec 13 01:56:46.032860 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T01:56:44 UTC (1734055004)
Dec 13 01:56:46.032903 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Dec 13 01:56:46.032911 kernel: intel_pstate: Intel P-state driver initializing
Dec 13 01:56:46.032917 kernel: intel_pstate: Disabling energy efficiency optimization
Dec 13 01:56:46.032924 kernel: intel_pstate: HWP enabled
Dec 13 01:56:46.032930 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Dec 13 01:56:46.032935 kernel: vesafb: scrolling: redraw
Dec 13 01:56:46.032941 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Dec 13 01:56:46.032947 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000a02b91ea, using 768k, total 768k
Dec 13 01:56:46.032953 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:56:46.032958 kernel: fb0: VESA VGA frame buffer device
Dec 13 01:56:46.032964 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:56:46.032970 kernel: Segment Routing with IPv6
Dec 13 01:56:46.032976 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:56:46.032982 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:56:46.032988 kernel: Key type dns_resolver registered
Dec 13 01:56:46.032993 kernel: microcode: Microcode Update Driver: v2.2.
Dec 13 01:56:46.032999 kernel: IPI shorthand broadcast: enabled
Dec 13 01:56:46.033005 kernel: sched_clock: Marking stable (2477000770, 1385644813)->(4406217006, -543571423)
Dec 13 01:56:46.033010 kernel: registered taskstats version 1
Dec 13 01:56:46.033016 kernel: Loading compiled-in X.509 certificates
Dec 13 01:56:46.033022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:56:46.033029 kernel: Key type .fscrypt registered
Dec 13 01:56:46.033034 kernel: Key type fscrypt-provisioning registered
Dec 13 01:56:46.033040 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:56:46.033046 kernel: ima: No architecture policies found
Dec 13 01:56:46.033051 kernel: clk: Disabling unused clocks
Dec 13 01:56:46.033057 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:56:46.033063 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:56:46.033069 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:56:46.033075 kernel: Run /init as init process
Dec 13 01:56:46.033081 kernel:   with arguments:
Dec 13 01:56:46.033087 kernel:     /init
Dec 13 01:56:46.033092 kernel:   with environment:
Dec 13 01:56:46.033098 kernel:     HOME=/
Dec 13 01:56:46.033103 kernel:     TERM=linux
Dec 13 01:56:46.033109 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:56:46.033116 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:56:46.033124 systemd[1]: Detected architecture x86-64.
Dec 13 01:56:46.033130 systemd[1]: Running in initrd.
Dec 13 01:56:46.033136 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:56:46.033142 systemd[1]: Hostname set to <localhost>.
Dec 13 01:56:46.033148 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:56:46.033154 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:56:46.033160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:56:46.033166 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:56:46.033173 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:56:46.033179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:56:46.033185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:56:46.033191 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:56:46.033197 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:56:46.033204 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Dec 13 01:56:46.033209 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Dec 13 01:56:46.033216 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:56:46.033222 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:56:46.033228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:56:46.033234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:56:46.033240 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:56:46.033246 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:56:46.033252 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:56:46.033258 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:56:46.033264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:56:46.033271 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:56:46.033277 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:56:46.033283 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:56:46.033289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:56:46.033295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:56:46.033301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:56:46.033307 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:56:46.033313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:56:46.033320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:56:46.033325 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:56:46.033331 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:56:46.033337 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:56:46.033353 systemd-journald[267]: Collecting audit messages is disabled.
Dec 13 01:56:46.033368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:56:46.033375 systemd-journald[267]: Journal started
Dec 13 01:56:46.033388 systemd-journald[267]: Runtime Journal (/run/log/journal/d54a3510d3c047118c82236dccc067d9) is 8.0M, max 639.9M, 631.9M free.
Dec 13 01:56:46.056861 systemd-modules-load[269]: Inserted module 'overlay'
Dec 13 01:56:46.079478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:56:46.107519 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:56:46.107823 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:56:46.179724 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:56:46.179737 kernel: Bridge firewalling registered
Dec 13 01:56:46.163659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:56:46.169024 systemd-modules-load[269]: Inserted module 'br_netfilter'
Dec 13 01:56:46.191788 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:56:46.211861 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:56:46.220837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:56:46.264892 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:56:46.268642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:56:46.288176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:56:46.288632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:56:46.291944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:56:46.293404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:56:46.294163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:56:46.295197 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:56:46.296302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:56:46.300409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:56:46.311809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:56:46.319882 systemd-resolved[302]: Positive Trust Anchors:
Dec 13 01:56:46.319889 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:56:46.319922 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:56:46.322093 systemd-resolved[302]: Defaulting to hostname 'linux'.
Dec 13 01:56:46.322777 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:56:46.355912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:56:46.371732 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:56:46.486703 dracut-cmdline[308]: dracut-dracut-053
Dec 13 01:56:46.494780 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:56:46.694506 kernel: SCSI subsystem initialized
Dec 13 01:56:46.716521 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:56:46.739519 kernel: iscsi: registered transport (tcp)
Dec 13 01:56:46.770426 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:56:46.770442 kernel: QLogic iSCSI HBA Driver
Dec 13 01:56:46.803711 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:56:46.822767 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:56:46.908393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:56:46.908422 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:56:46.928145 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:56:46.985557 kernel: raid6: avx2x4   gen() 53518 MB/s
Dec 13 01:56:47.017556 kernel: raid6: avx2x2   gen() 54032 MB/s
Dec 13 01:56:47.053971 kernel: raid6: avx2x1   gen() 45310 MB/s
Dec 13 01:56:47.053987 kernel: raid6: using algorithm avx2x2 gen() 54032 MB/s
Dec 13 01:56:47.101002 kernel: raid6: .... xor() 31060 MB/s, rmw enabled
Dec 13 01:56:47.101022 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:56:47.142480 kernel: xor: automatically using best checksumming function   avx       
Dec 13 01:56:47.254535 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:56:47.259993 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:56:47.284790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:56:47.291296 systemd-udevd[492]: Using default interface naming scheme 'v255'.
Dec 13 01:56:47.296727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:56:47.333744 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:56:47.368997 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Dec 13 01:56:47.420427 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:56:47.448918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:56:47.536686 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:56:47.569467 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:56:47.569513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 01:56:47.580575 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:56:47.580649 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:56:47.597840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:56:47.658696 kernel: ACPI: bus type USB registered
Dec 13 01:56:47.658709 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:56:47.658716 kernel: usbcore: registered new interface driver hub
Dec 13 01:56:47.658724 kernel: usbcore: registered new device driver usb
Dec 13 01:56:47.597945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:56:47.709579 kernel: PTP clock support registered
Dec 13 01:56:47.709602 kernel: libata version 3.00 loaded.
Dec 13 01:56:47.709615 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:56:47.709628 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:56:47.709641 kernel: ahci 0000:00:17.0: version 3.0
Dec 13 01:56:48.203090 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Dec 13 01:56:48.203106 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Dec 13 01:56:48.203195 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Dec 13 01:56:48.203206 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst 
Dec 13 01:56:48.203281 kernel: pps pps0: new PPS source ptp0
Dec 13 01:56:48.203360 kernel: scsi host0: ahci
Dec 13 01:56:48.203433 kernel: igb 0000:03:00.0: added PHC on eth0
Dec 13 01:56:48.203516 kernel: scsi host1: ahci
Dec 13 01:56:48.203586 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 01:56:48.203662 kernel: scsi host2: ahci
Dec 13 01:56:48.203732 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 01:56:48.203805 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Dec 13 01:56:48.203875 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Dec 13 01:56:48.203947 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 01:56:48.204017 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Dec 13 01:56:48.204086 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Dec 13 01:56:48.204158 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:56:48.204242 kernel: hub 1-0:1.0: 16 ports detected
Dec 13 01:56:48.204318 kernel: hub 2-0:1.0: USB hub found
Dec 13 01:56:48.204400 kernel: hub 2-0:1.0: 10 ports detected
Dec 13 01:56:48.204479 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d8
Dec 13 01:56:48.204556 kernel: scsi host3: ahci
Dec 13 01:56:48.204624 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Dec 13 01:56:48.204699 kernel: scsi host4: ahci
Dec 13 01:56:48.204770 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 01:56:48.204842 kernel: scsi host5: ahci
Dec 13 01:56:48.204912 kernel: pps pps1: new PPS source ptp1
Dec 13 01:56:48.204978 kernel: scsi host6: ahci
Dec 13 01:56:48.205045 kernel: igb 0000:04:00.0: added PHC on eth1
Dec 13 01:56:48.205124 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Dec 13 01:56:48.205134 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 01:56:48.205207 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Dec 13 01:56:48.205217 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0a:d9
Dec 13 01:56:48.205289 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Dec 13 01:56:48.205299 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Dec 13 01:56:48.205370 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Dec 13 01:56:48.205380 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 01:56:48.205451 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Dec 13 01:56:48.205462 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Dec 13 01:56:48.366948 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Dec 13 01:56:48.366958 kernel: hub 1-14:1.0: USB hub found
Dec 13 01:56:48.367033 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Dec 13 01:56:48.367042 kernel: hub 1-14:1.0: 4 ports detected
Dec 13 01:56:47.709621 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:56:48.417577 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Dec 13 01:56:48.878546 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 01:56:48.878628 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878638 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878646 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878653 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 01:56:48.878661 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878668 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 01:56:48.878679 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Dec 13 01:56:48.878688 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT,  D3MU001, max UDMA/133
Dec 13 01:56:48.878695 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 01:56:48.878763 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT,  D3MU001, max UDMA/133
Dec 13 01:56:48.878771 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Dec 13 01:56:48.878835 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Dec 13 01:56:48.878941 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 01:56:48.878950 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 01:56:48.878959 kernel: ata1.00: Features: NCQ-prio
Dec 13 01:56:48.878967 kernel: ata2.00: Features: NCQ-prio
Dec 13 01:56:48.878975 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:56:48.878982 kernel: ata1.00: configured for UDMA/133
Dec 13 01:56:48.878990 kernel: ata2.00: configured for UDMA/133
Dec 13 01:56:48.878997 kernel: scsi 0:0:0:0: Direct-Access     ATA      Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Dec 13 01:56:48.879071 kernel: scsi 1:0:0:0: Direct-Access     ATA      Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Dec 13 01:56:48.879136 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 01:56:47.728513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:56:48.909575 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Dec 13 01:56:49.573597 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Dec 13 01:56:49.573678 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 01:56:49.573745 kernel: usbcore: registered new interface driver usbhid
Dec 13 01:56:49.573758 kernel: usbhid: USB HID core driver
Dec 13 01:56:49.573765 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Dec 13 01:56:49.573773 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Dec 13 01:56:49.573837 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 01:56:49.573845 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Dec 13 01:56:49.573918 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.573926 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 01:56:49.573990 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 01:56:49.574049 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:56:49.574106 kernel: sd 1:0:0:0: [sda] Write Protect is off
Dec 13 01:56:49.574162 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Dec 13 01:56:49.574218 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:56:49.574274 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Dec 13 01:56:49.574330 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 01:56:49.574338 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Dec 13 01:56:49.574394 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Dec 13 01:56:49.574403 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks
Dec 13 01:56:49.574459 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Dec 13 01:56:49.574569 kernel: sd 0:0:0:0: [sdb] Write Protect is off
Dec 13 01:56:49.574625 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 01:56:49.574685 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Dec 13 01:56:49.574741 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Dec 13 01:56:49.574803 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:56:49.574861 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Dec 13 01:56:49.574917 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.574925 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:56:49.574932 kernel: GPT:9289727 != 937703087
Dec 13 01:56:49.574939 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:56:49.574946 kernel: GPT:9289727 != 937703087
Dec 13 01:56:49.574953 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:56:49.574961 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:49.574969 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk
Dec 13 01:56:49.575025 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sdb3 scanned by (udev-worker) (698)
Dec 13 01:56:49.575033 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 01:56:49.575092 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (561)
Dec 13 01:56:47.728680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:56:49.687646 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2
Dec 13 01:56:49.687815 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0
Dec 13 01:56:47.788632 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:56:48.398717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:56:48.430680 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:56:48.441874 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:56:48.441909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:56:48.441933 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:56:48.451632 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:56:48.480652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:56:48.495883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:56:49.936578 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.936594 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:49.936602 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.936609 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:49.936616 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:49.936622 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:48.506679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:56:49.946554 disk-uuid[725]: Primary Header is updated.
Dec 13 01:56:49.946554 disk-uuid[725]: Secondary Entries is updated.
Dec 13 01:56:49.946554 disk-uuid[725]: Secondary Header is updated.
Dec 13 01:56:48.554209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:56:49.556241 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Dec 13 01:56:49.674108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Dec 13 01:56:49.702901 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Dec 13 01:56:49.721632 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Dec 13 01:56:49.757172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Dec 13 01:56:49.795806 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:56:50.898330 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 01:56:50.918001 disk-uuid[726]: The operation has completed successfully.
Dec 13 01:56:50.926596 kernel:  sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Dec 13 01:56:50.951086 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:56:50.951133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:56:50.988788 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:56:51.026679 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:56:51.026742 sh[744]: Success
Dec 13 01:56:51.063868 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:56:51.082506 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:56:51.086129 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:56:51.141623 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:56:51.141641 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:56:51.162684 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:56:51.181376 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:56:51.198728 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:56:51.233554 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:56:51.234358 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:56:51.242987 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:56:51.253530 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:56:51.278129 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:56:51.382501 kernel: BTRFS info (device sdb6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:56:51.382534 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:56:51.382557 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 01:56:51.382564 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 01:56:51.382571 kernel: BTRFS info (device sdb6): auto enabling async discard
Dec 13 01:56:51.375340 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:56:51.405540 kernel: BTRFS info (device sdb6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:56:51.411834 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:56:51.436690 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:56:51.457992 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:56:51.464693 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:56:51.496801 ignition[838]: Ignition 2.19.0
Dec 13 01:56:51.496806 ignition[838]: Stage: fetch-offline
Dec 13 01:56:51.498863 unknown[838]: fetched base config from "system"
Dec 13 01:56:51.496829 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:56:51.498867 unknown[838]: fetched user config from "system"
Dec 13 01:56:51.496834 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:56:51.507794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:56:51.496889 ignition[838]: parsed url from cmdline: ""
Dec 13 01:56:51.519414 systemd-networkd[929]: lo: Link UP
Dec 13 01:56:51.496891 ignition[838]: no config URL provided
Dec 13 01:56:51.519417 systemd-networkd[929]: lo: Gained carrier
Dec 13 01:56:51.496893 ignition[838]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:56:51.521870 systemd-networkd[929]: Enumeration completed
Dec 13 01:56:51.496916 ignition[838]: parsing config with SHA512: 10589354275b781bfbc981a54b8b8eaf3106eb086182c96cbcf75e43b20c64d080333f9f181c8ed91f46a9fd849b55eecafa9e8a821fc5621366de0ab188b7a3
Dec 13 01:56:51.522622 systemd-networkd[929]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.499084 ignition[838]: fetch-offline: fetch-offline passed
Dec 13 01:56:51.524658 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:56:51.499086 ignition[838]: POST message to Packet Timeline
Dec 13 01:56:51.532002 systemd[1]: Reached target network.target - Network.
Dec 13 01:56:51.499089 ignition[838]: POST Status error: resource requires networking
Dec 13 01:56:51.547742 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:56:51.499127 ignition[838]: Ignition finished successfully
Dec 13 01:56:51.550815 systemd-networkd[929]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.572183 ignition[942]: Ignition 2.19.0
Dec 13 01:56:51.559740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:56:51.572189 ignition[942]: Stage: kargs
Dec 13 01:56:51.579285 systemd-networkd[929]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.572341 ignition[942]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:56:51.792587 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Dec 13 01:56:51.791885 systemd-networkd[929]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:56:51.572350 ignition[942]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:56:51.573106 ignition[942]: kargs: kargs passed
Dec 13 01:56:51.573109 ignition[942]: POST message to Packet Timeline
Dec 13 01:56:51.573121 ignition[942]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:56:51.573691 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42266->[::1]:53: read: connection refused
Dec 13 01:56:51.773806 ignition[942]: GET https://metadata.packet.net/metadata: attempt #2
Dec 13 01:56:51.774267 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33728->[::1]:53: read: connection refused
Dec 13 01:56:52.034606 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Dec 13 01:56:52.035848 systemd-networkd[929]: eno1: Link UP
Dec 13 01:56:52.036082 systemd-networkd[929]: eno2: Link UP
Dec 13 01:56:52.036227 systemd-networkd[929]: enp1s0f0np0: Link UP
Dec 13 01:56:52.036404 systemd-networkd[929]: enp1s0f0np0: Gained carrier
Dec 13 01:56:52.045650 systemd-networkd[929]: enp1s0f1np1: Link UP
Dec 13 01:56:52.063627 systemd-networkd[929]: enp1s0f0np0: DHCPv4 address 147.28.180.91/31, gateway 147.28.180.90 acquired from 145.40.83.140
Dec 13 01:56:52.174455 ignition[942]: GET https://metadata.packet.net/metadata: attempt #3
Dec 13 01:56:52.175588 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34831->[::1]:53: read: connection refused
Dec 13 01:56:52.817243 systemd-networkd[929]: enp1s0f1np1: Gained carrier
Dec 13 01:56:52.976121 ignition[942]: GET https://metadata.packet.net/metadata: attempt #4
Dec 13 01:56:52.977172 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58483->[::1]:53: read: connection refused
Dec 13 01:56:53.073083 systemd-networkd[929]: enp1s0f0np0: Gained IPv6LL
Dec 13 01:56:53.969076 systemd-networkd[929]: enp1s0f1np1: Gained IPv6LL
Dec 13 01:56:54.578581 ignition[942]: GET https://metadata.packet.net/metadata: attempt #5
Dec 13 01:56:54.579761 ignition[942]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48076->[::1]:53: read: connection refused
Dec 13 01:56:57.782145 ignition[942]: GET https://metadata.packet.net/metadata: attempt #6
Dec 13 01:56:58.559401 ignition[942]: GET result: OK
Dec 13 01:56:58.876161 ignition[942]: Ignition finished successfully
Dec 13 01:56:58.880638 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:56:58.908765 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:56:58.914976 ignition[962]: Ignition 2.19.0
Dec 13 01:56:58.914981 ignition[962]: Stage: disks
Dec 13 01:56:58.915096 ignition[962]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:56:58.915104 ignition[962]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:56:58.915645 ignition[962]: disks: disks passed
Dec 13 01:56:58.915648 ignition[962]: POST message to Packet Timeline
Dec 13 01:56:58.915657 ignition[962]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:56:59.443423 ignition[962]: GET result: OK
Dec 13 01:56:59.859457 ignition[962]: Ignition finished successfully
Dec 13 01:56:59.862863 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:56:59.877836 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:56:59.896758 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:56:59.917740 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:56:59.939880 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:56:59.959786 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:56:59.992766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:57:00.025818 systemd-fsck[983]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:57:00.035974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:57:00.057699 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:57:00.154264 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:57:00.169728 kernel: EXT4-fs (sdb9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:57:00.162927 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:57:00.185631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:57:00.189438 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:57:00.309784 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (992)
Dec 13 01:57:00.309798 kernel: BTRFS info (device sdb6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:57:00.309807 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:57:00.309814 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 01:57:00.309821 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 01:57:00.309831 kernel: BTRFS info (device sdb6): auto enabling async discard
Dec 13 01:57:00.293586 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:57:00.327802 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Dec 13 01:57:00.350554 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:57:00.350574 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:57:00.379781 coreos-metadata[994]: Dec 13 01:57:00.366 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 01:57:00.411704 coreos-metadata[1010]: Dec 13 01:57:00.365 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 01:57:00.362480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:57:00.397806 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:57:00.441827 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:57:00.469961 initrd-setup-root[1024]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:57:00.480608 initrd-setup-root[1031]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:57:00.491632 initrd-setup-root[1038]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:57:00.502611 initrd-setup-root[1045]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:57:00.527227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:57:00.552675 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:57:00.571656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:57:00.605676 kernel: BTRFS info (device sdb6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:57:00.598263 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:57:00.619330 ignition[1112]: INFO     : Ignition 2.19.0
Dec 13 01:57:00.619330 ignition[1112]: INFO     : Stage: mount
Dec 13 01:57:00.627588 ignition[1112]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:00.627588 ignition[1112]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:57:00.627588 ignition[1112]: INFO     : mount: mount passed
Dec 13 01:57:00.627588 ignition[1112]: INFO     : POST message to Packet Timeline
Dec 13 01:57:00.627588 ignition[1112]: INFO     : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:57:00.622524 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:57:01.026558 coreos-metadata[1010]: Dec 13 01:57:01.026 INFO Fetch successful
Dec 13 01:57:01.102874 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Dec 13 01:57:01.102937 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Dec 13 01:57:01.135554 coreos-metadata[994]: Dec 13 01:57:01.133 INFO Fetch successful
Dec 13 01:57:01.165344 coreos-metadata[994]: Dec 13 01:57:01.165 INFO wrote hostname ci-4081.2.1-a-5a9deb00aa to /sysroot/etc/hostname
Dec 13 01:57:01.166824 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:57:01.231859 ignition[1112]: INFO     : GET result: OK
Dec 13 01:57:01.636537 ignition[1112]: INFO     : Ignition finished successfully
Dec 13 01:57:01.639367 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:57:01.672749 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:57:01.682754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:57:01.742170 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1134)
Dec 13 01:57:01.742188 kernel: BTRFS info (device sdb6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:57:01.761192 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:57:01.777980 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 01:57:01.814292 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 01:57:01.814311 kernel: BTRFS info (device sdb6): auto enabling async discard
Dec 13 01:57:01.826802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:57:01.854045 ignition[1151]: INFO     : Ignition 2.19.0
Dec 13 01:57:01.854045 ignition[1151]: INFO     : Stage: files
Dec 13 01:57:01.868755 ignition[1151]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:57:01.868755 ignition[1151]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 01:57:01.868755 ignition[1151]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:57:01.868755 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:57:01.858232 unknown[1151]: wrote ssh authorized keys file for user: core
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.002727 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.255834 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:57:02.344423 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:57:02.622968 ignition[1151]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:02.622968 ignition[1151]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:02.653804 ignition[1151]: INFO     : files: files passed
Dec 13 01:57:02.653804 ignition[1151]: INFO     : POST message to Packet Timeline
Dec 13 01:57:02.653804 ignition[1151]: INFO     : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:57:03.262760 ignition[1151]: INFO     : GET result: OK
Dec 13 01:57:03.578117 ignition[1151]: INFO     : Ignition finished successfully
Dec 13 01:57:03.579767 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:57:03.615736 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:57:03.616239 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:57:03.644979 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:57:03.645055 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:57:03.696760 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:03.696760 initrd-setup-root-after-ignition[1189]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:03.667029 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:57:03.755683 initrd-setup-root-after-ignition[1193]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:03.687785 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:57:03.721733 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:57:03.773910 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:57:03.774049 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:57:03.792823 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:57:03.812753 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:57:03.830867 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:57:03.844868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:57:03.939542 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:57:03.957920 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:57:04.008590 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:57:04.020102 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:57:04.042165 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:57:04.060092 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:57:04.060522 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:57:04.089201 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:57:04.111096 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:57:04.129203 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:57:04.148099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:57:04.170101 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:57:04.192109 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:57:04.212103 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:57:04.233135 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:57:04.254123 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:57:04.274084 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:57:04.292076 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:57:04.292496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:57:04.326952 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:57:04.337120 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:57:04.358064 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:57:04.358528 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:57:04.381084 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:57:04.381505 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:57:04.413069 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:57:04.413570 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:57:04.434296 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:57:04.451963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:57:04.452444 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:57:04.473108 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:57:04.491102 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:57:04.509073 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:57:04.509371 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:57:04.529129 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:57:04.529427 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:57:04.552167 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:57:04.667671 ignition[1214]: INFO     : Ignition 2.19.0
Dec 13 01:57:04.667671 ignition[1214]: INFO     : Stage: umount
Dec 13 01:57:04.667671 ignition[1214]: INFO     : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:04.667671 ignition[1214]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 01:57:04.667671 ignition[1214]: INFO     : umount: umount passed
Dec 13 01:57:04.667671 ignition[1214]: INFO     : POST message to Packet Timeline
Dec 13 01:57:04.667671 ignition[1214]: INFO     : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 01:57:04.552606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:57:04.573173 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:57:04.573583 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:57:04.592121 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:57:04.592537 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:57:04.626664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:57:04.629609 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:57:04.629692 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:57:04.662666 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:57:04.675592 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:57:04.675773 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:57:04.682813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:57:04.682875 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:57:04.730676 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:57:04.733007 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:57:04.733139 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:57:04.826924 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:57:04.827015 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:57:05.244426 ignition[1214]: INFO     : GET result: OK
Dec 13 01:57:05.616443 ignition[1214]: INFO     : Ignition finished successfully
Dec 13 01:57:05.619628 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:57:05.619918 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:57:05.636952 systemd[1]: Stopped target network.target - Network.
Dec 13 01:57:05.653761 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:57:05.653963 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:57:05.673871 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:57:05.674032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:57:05.691994 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:57:05.692150 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:57:05.710974 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:57:05.711142 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:57:05.729972 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:57:05.730145 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:57:05.749246 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:57:05.760623 systemd-networkd[929]: enp1s0f0np0: DHCPv6 lease lost
Dec 13 01:57:05.766964 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:57:05.769681 systemd-networkd[929]: enp1s0f1np1: DHCPv6 lease lost
Dec 13 01:57:05.785543 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:57:05.785819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:57:05.805781 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:57:05.806176 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:57:05.827158 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:57:05.827278 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:57:05.859549 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:57:05.876662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:57:05.876702 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:57:05.898855 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:57:05.898928 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:57:05.916865 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:57:05.916979 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:57:05.937963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:57:05.938127 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:57:05.957189 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:57:05.978889 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:57:05.979343 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:57:06.019016 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:57:06.019051 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:57:06.047709 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:57:06.047745 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:57:06.067667 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:57:06.067758 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:57:06.100041 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:57:06.100202 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:57:06.128930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:57:06.129088 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:57:06.177725 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:57:06.407727 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:57:06.220554 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:57:06.220598 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:57:06.241727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:57:06.241792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:57:06.264095 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:57:06.264338 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:57:06.285332 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:57:06.285600 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:57:06.307437 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:57:06.335725 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:57:06.354614 systemd[1]: Switching root.
Dec 13 01:57:06.511584 systemd-journald[267]: Journal stopped
Dec 13 01:57:09.161250 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:57:09.161265 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:57:09.161272 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:57:09.161278 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:57:09.161283 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:57:09.161288 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:57:09.161294 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 01:57:09.161299 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 01:57:09.161305 kernel: audit: type=1403 audit(1734055026.708:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:57:09.161311 systemd[1]: Successfully loaded SELinux policy in 154.382ms.
Dec 13 01:57:09.161319 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.355ms.
Dec 13 01:57:09.161326 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:57:09.161332 systemd[1]: Detected architecture x86-64.
Dec 13 01:57:09.161337 systemd[1]: Detected first boot.
Dec 13 01:57:09.161344 systemd[1]: Hostname set to <ci-4081.2.1-a-5a9deb00aa>.
Dec 13 01:57:09.161351 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:57:09.161358 zram_generator::config[1266]: No configuration found.
Dec 13 01:57:09.161364 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:57:09.161370 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:57:09.161376 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:57:09.161383 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:57:09.161389 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:57:09.161396 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:57:09.161403 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:57:09.161409 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:57:09.161416 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:57:09.161422 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:57:09.161428 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:57:09.161434 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:57:09.161442 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:57:09.161448 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:57:09.161454 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:57:09.161461 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:57:09.161467 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:57:09.161476 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:57:09.161484 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Dec 13 01:57:09.161508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:57:09.161516 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:57:09.161522 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:57:09.161543 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:57:09.161550 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:57:09.161557 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:57:09.161564 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:57:09.161570 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:57:09.161578 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:57:09.161584 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:57:09.161591 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:57:09.161597 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:57:09.161604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:57:09.161610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:57:09.161618 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:57:09.161625 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:57:09.161632 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:57:09.161638 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:57:09.161645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:09.161652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:57:09.161658 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:57:09.161666 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:57:09.161673 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:57:09.161680 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:57:09.161687 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:57:09.161693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:09.161700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:57:09.161706 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:57:09.161713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:57:09.161720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:57:09.161727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:57:09.161734 kernel: ACPI: bus type drm_connector registered
Dec 13 01:57:09.161740 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:57:09.161747 kernel: fuse: init (API version 7.39)
Dec 13 01:57:09.161753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:57:09.161759 kernel: loop: module loaded
Dec 13 01:57:09.161766 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:57:09.161772 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:57:09.161780 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:57:09.161786 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:57:09.161793 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:57:09.161799 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:57:09.161813 systemd-journald[1371]: Collecting audit messages is disabled.
Dec 13 01:57:09.161830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:57:09.161837 systemd-journald[1371]: Journal started
Dec 13 01:57:09.161850 systemd-journald[1371]: Runtime Journal (/run/log/journal/37457d91e9984c408370d165a1051cb6) is 8.0M, max 639.9M, 631.9M free.
Dec 13 01:57:07.219558 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:57:07.241204 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
Dec 13 01:57:07.241584 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:57:09.214558 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:57:09.248516 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:57:09.282559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:57:09.315976 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:57:09.316006 systemd[1]: Stopped verity-setup.service.
Dec 13 01:57:09.378524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:09.399680 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:57:09.410060 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:57:09.419752 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:57:09.429755 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:57:09.439731 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:57:09.449720 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:57:09.459718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:57:09.469835 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:57:09.480912 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:57:09.492058 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:57:09.492271 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:57:09.504358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:57:09.504725 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:57:09.517402 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:57:09.517807 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:57:09.528413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:57:09.528869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:57:09.541416 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:57:09.541817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:57:09.553417 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:57:09.553804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:57:09.564435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:57:09.576364 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:57:09.589383 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:57:09.602379 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:57:09.639156 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:57:09.660780 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:57:09.671252 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:57:09.681688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:57:09.681707 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:57:09.692392 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:57:09.715916 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:57:09.728426 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:57:09.738751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:09.740096 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:57:09.750066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:57:09.760619 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:57:09.761278 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:57:09.763927 systemd-journald[1371]: Time spent on flushing to /var/log/journal/37457d91e9984c408370d165a1051cb6 is 17.130ms for 1370 entries.
Dec 13 01:57:09.763927 systemd-journald[1371]: System Journal (/var/log/journal/37457d91e9984c408370d165a1051cb6) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:57:09.814941 systemd-journald[1371]: Received client request to flush runtime journal.
Dec 13 01:57:09.779603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:57:09.780217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:57:09.786389 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:57:09.797305 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:57:09.808432 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:57:09.817556 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:57:09.837542 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 01:57:09.848759 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:57:09.864802 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:57:09.878500 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:57:09.889740 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:57:09.900687 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:57:09.917686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:57:09.928481 kernel: loop1: detected capacity change from 0 to 142488
Dec 13 01:57:09.938645 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:57:09.951403 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:57:09.973778 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:57:09.990254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:57:09.999482 kernel: loop2: detected capacity change from 0 to 8
Dec 13 01:57:10.010124 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:57:10.010760 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:57:10.022112 udevadm[1406]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:57:10.022439 systemd-tmpfiles[1420]: ACLs are not supported, ignoring.
Dec 13 01:57:10.022449 systemd-tmpfiles[1420]: ACLs are not supported, ignoring.
Dec 13 01:57:10.024863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:57:10.049484 kernel: loop3: detected capacity change from 0 to 211296
Dec 13 01:57:10.093539 ldconfig[1397]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:57:10.094588 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:57:10.109485 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 01:57:10.142583 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 01:57:10.185557 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:57:10.193526 kernel: loop6: detected capacity change from 0 to 8
Dec 13 01:57:10.212481 kernel: loop7: detected capacity change from 0 to 211296
Dec 13 01:57:10.223673 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:57:10.224062 (sd-merge)[1426]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Dec 13 01:57:10.224293 (sd-merge)[1426]: Merged extensions into '/usr'.
Dec 13 01:57:10.235867 systemd-udevd[1429]: Using default interface naming scheme 'v255'.
Dec 13 01:57:10.236195 systemd[1]: Reloading requested from client PID 1402 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:57:10.236202 systemd[1]: Reloading...
Dec 13 01:57:10.281532 zram_generator::config[1482]: No configuration found.
Dec 13 01:57:10.281593 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (1455)
Dec 13 01:57:10.303488 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1439)
Dec 13 01:57:10.321878 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Dec 13 01:57:10.321942 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1439)
Dec 13 01:57:10.321959 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:57:10.388497 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:57:10.424483 kernel: IPMI message handler: version 39.2
Dec 13 01:57:10.424520 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:57:10.448982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:10.459498 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:57:10.476487 kernel: ipmi device interface
Dec 13 01:57:10.476558 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Dec 13 01:57:10.514487 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Dec 13 01:57:10.514738 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Dec 13 01:57:10.570207 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Dec 13 01:57:10.570361 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Dec 13 01:57:10.524967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Dec 13 01:57:10.548683 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Dec 13 01:57:10.548708 systemd[1]: Reloading finished in 312 ms.
Dec 13 01:57:10.578479 kernel: iTCO_vendor_support: vendor-support=0
Dec 13 01:57:10.578504 kernel: ipmi_si: IPMI System Interface driver
Dec 13 01:57:10.611136 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Dec 13 01:57:10.657290 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Dec 13 01:57:10.657305 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Dec 13 01:57:10.657322 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Dec 13 01:57:10.727818 kernel: ipmi_si IPI0001:00: ipmi_platform: [io  0x0ca2] regsize 1 spacing 1 irq 0
Dec 13 01:57:10.727899 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Dec 13 01:57:10.727966 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Dec 13 01:57:10.727978 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Dec 13 01:57:10.780575 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Dec 13 01:57:10.809048 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Dec 13 01:57:10.809152 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Dec 13 01:57:10.809222 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Dec 13 01:57:10.894720 kernel: intel_rapl_common: Found RAPL domain package
Dec 13 01:57:10.894766 kernel: intel_rapl_common: Found RAPL domain core
Dec 13 01:57:10.910659 kernel: intel_rapl_common: Found RAPL domain dram
Dec 13 01:57:10.921482 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Dec 13 01:57:10.926688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:57:10.954674 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:57:10.956481 kernel: ipmi_ssif: IPMI SSIF Interface driver
Dec 13 01:57:10.971533 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:57:11.004634 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:57:11.013050 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:57:11.024984 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:57:11.030943 lvm[1606]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:57:11.037427 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:57:11.048026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:57:11.048578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:57:11.050133 systemd[1]: Reloading requested from client PID 1605 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:57:11.050140 systemd[1]: Reloading...
Dec 13 01:57:11.086481 zram_generator::config[1637]: No configuration found.
Dec 13 01:57:11.102301 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:57:11.102525 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:57:11.103056 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:57:11.103236 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Dec 13 01:57:11.103280 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Dec 13 01:57:11.104945 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:57:11.104949 systemd-tmpfiles[1610]: Skipping /boot
Dec 13 01:57:11.109191 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:57:11.109195 systemd-tmpfiles[1610]: Skipping /boot
Dec 13 01:57:11.142206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:11.195328 systemd[1]: Reloading finished in 144 ms.
Dec 13 01:57:11.222770 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:57:11.234714 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:57:11.245695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:57:11.256692 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:57:11.272012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:57:11.293622 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:57:11.304447 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:57:11.311832 augenrules[1719]: No rules
Dec 13 01:57:11.317216 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:57:11.330218 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:57:11.332055 lvm[1724]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:57:11.341693 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:57:11.352222 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:57:11.365471 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:57:11.375179 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:57:11.384842 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:57:11.395831 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:57:11.405856 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:57:11.416846 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:57:11.423745 systemd-networkd[1608]: lo: Link UP
Dec 13 01:57:11.423760 systemd-networkd[1608]: lo: Gained carrier
Dec 13 01:57:11.426406 systemd-networkd[1608]: bond0: netdev ready
Dec 13 01:57:11.427362 systemd-networkd[1608]: Enumeration completed
Dec 13 01:57:11.428775 systemd-networkd[1608]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:28:9c.network.
Dec 13 01:57:11.428888 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:57:11.442717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:11.442880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:11.450126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:57:11.460281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:57:11.472167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:57:11.479184 systemd-resolved[1726]: Positive Trust Anchors:
Dec 13 01:57:11.479191 systemd-resolved[1726]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:57:11.479214 systemd-resolved[1726]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:57:11.481557 systemd-resolved[1726]: Using system hostname 'ci-4081.2.1-a-5a9deb00aa'.
Dec 13 01:57:11.482563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:11.483419 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:57:11.496185 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:57:11.506672 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:57:11.506776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:11.507992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:57:11.508069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:57:11.520061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:57:11.520165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:57:11.533156 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:57:11.533269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:57:11.544204 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:57:11.556710 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:57:11.580908 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:11.581583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:11.602504 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Dec 13 01:57:11.626486 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Dec 13 01:57:11.627708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:57:11.628176 systemd-networkd[1608]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:28:9d.network.
Dec 13 01:57:11.639399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:57:11.664835 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:57:11.675612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:11.675684 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:57:11.675731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:11.676154 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:57:11.685823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:57:11.685896 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:57:11.696914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:57:11.696987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:57:11.707812 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:57:11.707894 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:57:11.722004 systemd[1]: Reached target network.target - Network.
Dec 13 01:57:11.730749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:57:11.741663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:11.742012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:57:11.759120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:57:11.772515 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:57:11.803544 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Dec 13 01:57:11.835539 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Dec 13 01:57:11.835916 systemd-networkd[1608]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Dec 13 01:57:11.837703 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:57:11.838045 systemd-networkd[1608]: enp1s0f0np0: Link UP
Dec 13 01:57:11.838602 systemd-networkd[1608]: enp1s0f0np0: Gained carrier
Dec 13 01:57:11.857523 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Dec 13 01:57:11.857946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:57:11.869836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:57:11.869979 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:57:11.870065 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:11.870791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:57:11.870864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:57:11.871055 systemd-networkd[1608]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:28:9c.network.
Dec 13 01:57:11.871197 systemd-networkd[1608]: enp1s0f1np1: Link UP
Dec 13 01:57:11.871343 systemd-networkd[1608]: enp1s0f1np1: Gained carrier
Dec 13 01:57:11.890650 systemd-networkd[1608]: bond0: Link UP
Dec 13 01:57:11.890810 systemd-networkd[1608]: bond0: Gained carrier
Dec 13 01:57:11.891819 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:57:11.891890 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:57:11.901752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:57:11.901820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:57:11.912766 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:57:11.912832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:57:11.923495 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:57:11.932891 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:57:11.932922 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:57:11.942613 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:57:11.969916 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Dec 13 01:57:11.969941 kernel: bond0: active interface up!
Dec 13 01:57:12.002379 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:57:12.013620 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:57:12.023612 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:57:12.034565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:57:12.045556 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:57:12.056551 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:57:12.056565 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:57:12.064558 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:57:12.074624 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:57:12.084594 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:57:12.103513 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:57:12.111503 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Dec 13 01:57:12.120192 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:57:12.130240 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:57:12.140177 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:57:12.149842 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:57:12.159603 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:57:12.169553 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:57:12.177571 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:57:12.177586 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:57:12.186572 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:57:12.197234 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:57:12.207250 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:57:12.216157 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:57:12.220043 coreos-metadata[1777]: Dec 13 01:57:12.219 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 01:57:12.226243 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:57:12.228199 jq[1781]: false
Dec 13 01:57:12.230802 dbus-daemon[1778]: [system] SELinux support is enabled
Dec 13 01:57:12.235607 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:57:12.236233 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:57:12.243538 extend-filesystems[1783]: Found loop4
Dec 13 01:57:12.243538 extend-filesystems[1783]: Found loop5
Dec 13 01:57:12.299529 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Dec 13 01:57:12.299546 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (1546)
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found loop6
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found loop7
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sda
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb1
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb2
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb3
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found usr
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb4
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb6
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb7
Dec 13 01:57:12.299564 extend-filesystems[1783]: Found sdb9
Dec 13 01:57:12.299564 extend-filesystems[1783]: Checking size of /dev/sdb9
Dec 13 01:57:12.299564 extend-filesystems[1783]: Resized partition /dev/sdb9
Dec 13 01:57:12.246325 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:57:12.445643 extend-filesystems[1791]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:57:12.300386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:57:12.315152 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:57:12.354606 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:57:12.375018 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Dec 13 01:57:12.400909 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:57:12.469952 update_engine[1808]: I20241213 01:57:12.430155  1808 main.cc:92] Flatcar Update Engine starting
Dec 13 01:57:12.469952 update_engine[1808]: I20241213 01:57:12.430775  1808 update_check_scheduler.cc:74] Next update check in 3m2s
Dec 13 01:57:12.401313 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:57:12.470167 jq[1809]: true
Dec 13 01:57:12.408545 systemd-logind[1803]: Watching system buttons on /dev/input/event3 (Power Button)
Dec 13 01:57:12.408554 systemd-logind[1803]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 01:57:12.408563 systemd-logind[1803]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Dec 13 01:57:12.408763 systemd-logind[1803]: New seat seat0.
Dec 13 01:57:12.415140 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:57:12.422926 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:57:12.461748 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:57:12.484689 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:57:12.484804 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:57:12.485007 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:57:12.485098 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:57:12.486018 sshd_keygen[1807]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:57:12.496001 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:57:12.496104 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:57:12.506695 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:57:12.519418 (ntainerd)[1821]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:57:12.520955 jq[1820]: true
Dec 13 01:57:12.523183 dbus-daemon[1778]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 01:57:12.524629 tar[1818]: linux-amd64/helm
Dec 13 01:57:12.530583 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Dec 13 01:57:12.530680 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Dec 13 01:57:12.533853 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:57:12.544803 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:57:12.553547 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:57:12.553644 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:57:12.565628 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:57:12.565763 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:57:12.582362 bash[1848]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:57:12.584681 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:57:12.596463 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:57:12.608856 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:57:12.608948 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:57:12.611425 locksmithd[1856]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:57:12.627647 systemd[1]: Starting sshkeys.service...
Dec 13 01:57:12.635232 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:57:12.647508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:57:12.659307 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:57:12.670839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:57:12.681710 coreos-metadata[1871]: Dec 13 01:57:12.681 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 01:57:12.695742 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:57:12.703236 containerd[1821]: time="2024-12-13T01:57:12.703193678Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:57:12.704395 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Dec 13 01:57:12.713734 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:57:12.715705 containerd[1821]: time="2024-12-13T01:57:12.715687161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716414 containerd[1821]: time="2024-12-13T01:57:12.716398833Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716449 containerd[1821]: time="2024-12-13T01:57:12.716414796Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:57:12.716449 containerd[1821]: time="2024-12-13T01:57:12.716424084Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:57:12.716522 containerd[1821]: time="2024-12-13T01:57:12.716514722Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:57:12.716549 containerd[1821]: time="2024-12-13T01:57:12.716525489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716567 containerd[1821]: time="2024-12-13T01:57:12.716557964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716585 containerd[1821]: time="2024-12-13T01:57:12.716567310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716669 containerd[1821]: time="2024-12-13T01:57:12.716659210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716685 containerd[1821]: time="2024-12-13T01:57:12.716669124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716685 containerd[1821]: time="2024-12-13T01:57:12.716676723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716685 containerd[1821]: time="2024-12-13T01:57:12.716682027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716735 containerd[1821]: time="2024-12-13T01:57:12.716723485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716840 containerd[1821]: time="2024-12-13T01:57:12.716831459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716893 containerd[1821]: time="2024-12-13T01:57:12.716885375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:57:12.716912 containerd[1821]: time="2024-12-13T01:57:12.716893860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:57:12.716942 containerd[1821]: time="2024-12-13T01:57:12.716934180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:57:12.716966 containerd[1821]: time="2024-12-13T01:57:12.716959763Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:57:12.726975 containerd[1821]: time="2024-12-13T01:57:12.726945906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:57:12.727010 containerd[1821]: time="2024-12-13T01:57:12.726980718Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:57:12.727010 containerd[1821]: time="2024-12-13T01:57:12.726991808Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:57:12.727010 containerd[1821]: time="2024-12-13T01:57:12.727001303Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:57:12.727062 containerd[1821]: time="2024-12-13T01:57:12.727009607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:57:12.727083 containerd[1821]: time="2024-12-13T01:57:12.727075926Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:57:12.727220 containerd[1821]: time="2024-12-13T01:57:12.727207767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:57:12.727286 containerd[1821]: time="2024-12-13T01:57:12.727278155Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:57:12.727302 containerd[1821]: time="2024-12-13T01:57:12.727288847Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:57:12.727302 containerd[1821]: time="2024-12-13T01:57:12.727296441Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:57:12.727333 containerd[1821]: time="2024-12-13T01:57:12.727305071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727333 containerd[1821]: time="2024-12-13T01:57:12.727312691Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727333 containerd[1821]: time="2024-12-13T01:57:12.727319761Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727333 containerd[1821]: time="2024-12-13T01:57:12.727327462Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727335906Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727343796Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727350918Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727357432Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727368897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727376438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727385161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727393 containerd[1821]: time="2024-12-13T01:57:12.727392883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727400372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727407574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727414599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727421675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727428522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727439747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727447074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727453607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727460281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727468324Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727484122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727491595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727521 containerd[1821]: time="2024-12-13T01:57:12.727497410Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:57:12.727855 containerd[1821]: time="2024-12-13T01:57:12.727846444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:57:12.727872 containerd[1821]: time="2024-12-13T01:57:12.727859081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:57:12.727872 containerd[1821]: time="2024-12-13T01:57:12.727867120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:57:12.727905 containerd[1821]: time="2024-12-13T01:57:12.727874665Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:57:12.727905 containerd[1821]: time="2024-12-13T01:57:12.727880447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.727905 containerd[1821]: time="2024-12-13T01:57:12.727888508Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:57:12.727905 containerd[1821]: time="2024-12-13T01:57:12.727894634Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:57:12.727905 containerd[1821]: time="2024-12-13T01:57:12.727900184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:57:12.728086 containerd[1821]: time="2024-12-13T01:57:12.728058131Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:57:12.728175 containerd[1821]: time="2024-12-13T01:57:12.728091316Z" level=info msg="Connect containerd service"
Dec 13 01:57:12.728175 containerd[1821]: time="2024-12-13T01:57:12.728109854Z" level=info msg="using legacy CRI server"
Dec 13 01:57:12.728175 containerd[1821]: time="2024-12-13T01:57:12.728114265Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:57:12.728175 containerd[1821]: time="2024-12-13T01:57:12.728162688Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:57:12.728456 containerd[1821]: time="2024-12-13T01:57:12.728445961Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:57:12.728508 containerd[1821]: time="2024-12-13T01:57:12.728489341Z" level=info msg="Start subscribing containerd event"
Dec 13 01:57:12.728525 containerd[1821]: time="2024-12-13T01:57:12.728518878Z" level=info msg="Start recovering state"
Dec 13 01:57:12.728556 containerd[1821]: time="2024-12-13T01:57:12.728550767Z" level=info msg="Start event monitor"
Dec 13 01:57:12.728573 containerd[1821]: time="2024-12-13T01:57:12.728557711Z" level=info msg="Start snapshots syncer"
Dec 13 01:57:12.728573 containerd[1821]: time="2024-12-13T01:57:12.728562724Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:57:12.728573 containerd[1821]: time="2024-12-13T01:57:12.728566694Z" level=info msg="Start streaming server"
Dec 13 01:57:12.728841 containerd[1821]: time="2024-12-13T01:57:12.728811032Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:57:12.728968 containerd[1821]: time="2024-12-13T01:57:12.728872704Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:57:12.728968 containerd[1821]: time="2024-12-13T01:57:12.728928162Z" level=info msg="containerd successfully booted in 0.026671s"
Dec 13 01:57:12.728978 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:57:12.793395 tar[1818]: linux-amd64/LICENSE
Dec 13 01:57:12.793458 tar[1818]: linux-amd64/README.md
Dec 13 01:57:12.802526 kernel: EXT4-fs (sdb9): resized filesystem to 116605649
Dec 13 01:57:12.826346 extend-filesystems[1791]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required
Dec 13 01:57:12.826346 extend-filesystems[1791]: old_desc_blocks = 1, new_desc_blocks = 56
Dec 13 01:57:12.826346 extend-filesystems[1791]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long.
Dec 13 01:57:12.867563 extend-filesystems[1783]: Resized filesystem in /dev/sdb9
Dec 13 01:57:12.826743 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:57:12.826837 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:57:12.875911 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:57:12.976681 systemd-networkd[1608]: bond0: Gained IPv6LL
Dec 13 01:57:13.046682 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:57:13.059598 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:57:13.081643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:13.092325 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:57:13.112816 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:57:13.715164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:13.726073 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:57:14.233088 kubelet[1908]: E1213 01:57:14.232995    1908 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:57:14.234241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:57:14.234318 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:57:14.503997 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Dec 13 01:57:14.504128 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Dec 13 01:57:15.703725 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:57:15.721828 systemd[1]: Started sshd@0-147.28.180.91:22-147.75.109.163:53386.service - OpenSSH per-connection server daemon (147.75.109.163:53386).
Dec 13 01:57:15.773632 sshd[1929]: Accepted publickey for core from 147.75.109.163 port 53386 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:15.774825 sshd[1929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:15.780471 systemd-logind[1803]: New session 1 of user core.
Dec 13 01:57:15.781328 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:57:15.808813 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:57:15.828969 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:57:15.856808 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:57:15.867761 (systemd)[1933]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:57:15.941955 systemd[1933]: Queued start job for default target default.target.
Dec 13 01:57:15.951149 systemd[1933]: Created slice app.slice - User Application Slice.
Dec 13 01:57:15.951163 systemd[1933]: Reached target paths.target - Paths.
Dec 13 01:57:15.951171 systemd[1933]: Reached target timers.target - Timers.
Dec 13 01:57:15.951812 systemd[1933]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:57:15.957278 systemd[1933]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:57:15.957306 systemd[1933]: Reached target sockets.target - Sockets.
Dec 13 01:57:15.957314 systemd[1933]: Reached target basic.target - Basic System.
Dec 13 01:57:15.957335 systemd[1933]: Reached target default.target - Main User Target.
Dec 13 01:57:15.957350 systemd[1933]: Startup finished in 85ms.
Dec 13 01:57:15.957460 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:57:15.969496 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:57:16.048584 systemd[1]: Started sshd@1-147.28.180.91:22-147.75.109.163:50648.service - OpenSSH per-connection server daemon (147.75.109.163:50648).
Dec 13 01:57:16.088403 sshd[1944]: Accepted publickey for core from 147.75.109.163 port 50648 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:16.090884 sshd[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:16.098621 systemd-logind[1803]: New session 2 of user core.
Dec 13 01:57:16.108872 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:57:16.165363 sshd[1944]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:16.177034 systemd[1]: sshd@1-147.28.180.91:22-147.75.109.163:50648.service: Deactivated successfully.
Dec 13 01:57:16.177747 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:57:16.178285 systemd-logind[1803]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:57:16.178975 systemd[1]: Started sshd@2-147.28.180.91:22-147.75.109.163:50662.service - OpenSSH per-connection server daemon (147.75.109.163:50662).
Dec 13 01:57:16.190219 systemd-logind[1803]: Removed session 2.
Dec 13 01:57:16.218625 sshd[1951]: Accepted publickey for core from 147.75.109.163 port 50662 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:16.219282 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:16.222033 systemd-logind[1803]: New session 3 of user core.
Dec 13 01:57:16.222816 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:57:16.286151 sshd[1951]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:16.288694 systemd[1]: sshd@2-147.28.180.91:22-147.75.109.163:50662.service: Deactivated successfully.
Dec 13 01:57:16.290252 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:57:16.291423 systemd-logind[1803]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:57:16.292433 systemd-logind[1803]: Removed session 3.
Dec 13 01:57:18.128162 systemd-resolved[1726]: Clock change detected. Flushing caches.
Dec 13 01:57:18.128362 systemd-timesyncd[1771]: Contacted time server 23.186.168.2:123 (0.flatcar.pool.ntp.org).
Dec 13 01:57:18.128500 systemd-timesyncd[1771]: Initial clock synchronization to Fri 2024-12-13 01:57:18.127988 UTC.
Dec 13 01:57:18.580736 login[1876]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:57:18.581210 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:57:18.583271 systemd-logind[1803]: New session 5 of user core.
Dec 13 01:57:18.594035 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:57:18.595534 systemd-logind[1803]: New session 4 of user core.
Dec 13 01:57:18.596262 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:57:18.726065 coreos-metadata[1871]: Dec 13 01:57:18.725 INFO Fetch successful
Dec 13 01:57:18.763940 unknown[1871]: wrote ssh authorized keys file for user: core
Dec 13 01:57:18.791529 update-ssh-keys[1983]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:57:18.791832 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:57:18.792553 systemd[1]: Finished sshkeys.service.
Dec 13 01:57:18.877113 coreos-metadata[1777]: Dec 13 01:57:18.877 INFO Fetch successful
Dec 13 01:57:18.951285 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:57:18.952390 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Dec 13 01:57:19.945841 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Dec 13 01:57:19.948440 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:57:19.949103 systemd[1]: Startup finished in 2.688s (kernel) + 21.685s (initrd) + 12.553s (userspace) = 36.928s.
Dec 13 01:57:25.089478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:57:25.109900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:25.324380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:25.330051 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:57:25.357809 kubelet[2002]: E1213 01:57:25.357650    2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:57:25.359987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:57:25.360061 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:57:27.154008 systemd[1]: Started sshd@3-147.28.180.91:22-147.75.109.163:57516.service - OpenSSH per-connection server daemon (147.75.109.163:57516).
Dec 13 01:57:27.183055 sshd[2019]: Accepted publickey for core from 147.75.109.163 port 57516 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:27.183702 sshd[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:27.186224 systemd-logind[1803]: New session 6 of user core.
Dec 13 01:57:27.200878 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:57:27.253257 sshd[2019]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:27.273306 systemd[1]: sshd@3-147.28.180.91:22-147.75.109.163:57516.service: Deactivated successfully.
Dec 13 01:57:27.274207 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:57:27.274885 systemd-logind[1803]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:57:27.275424 systemd[1]: Started sshd@4-147.28.180.91:22-147.75.109.163:57520.service - OpenSSH per-connection server daemon (147.75.109.163:57520).
Dec 13 01:57:27.275929 systemd-logind[1803]: Removed session 6.
Dec 13 01:57:27.306057 sshd[2026]: Accepted publickey for core from 147.75.109.163 port 57520 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:27.306754 sshd[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:27.309316 systemd-logind[1803]: New session 7 of user core.
Dec 13 01:57:27.326883 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:57:27.378011 sshd[2026]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:27.389199 systemd[1]: sshd@4-147.28.180.91:22-147.75.109.163:57520.service: Deactivated successfully.
Dec 13 01:57:27.389931 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:57:27.390639 systemd-logind[1803]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:57:27.391235 systemd[1]: Started sshd@5-147.28.180.91:22-147.75.109.163:57536.service - OpenSSH per-connection server daemon (147.75.109.163:57536).
Dec 13 01:57:27.391719 systemd-logind[1803]: Removed session 7.
Dec 13 01:57:27.433667 sshd[2034]: Accepted publickey for core from 147.75.109.163 port 57536 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:27.434624 sshd[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:27.437902 systemd-logind[1803]: New session 8 of user core.
Dec 13 01:57:27.449843 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:57:27.504848 sshd[2034]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:27.515197 systemd[1]: sshd@5-147.28.180.91:22-147.75.109.163:57536.service: Deactivated successfully.
Dec 13 01:57:27.515967 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:57:27.516649 systemd-logind[1803]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:57:27.517302 systemd[1]: Started sshd@6-147.28.180.91:22-147.75.109.163:57546.service - OpenSSH per-connection server daemon (147.75.109.163:57546).
Dec 13 01:57:27.517840 systemd-logind[1803]: Removed session 8.
Dec 13 01:57:27.550096 sshd[2041]: Accepted publickey for core from 147.75.109.163 port 57546 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:27.550892 sshd[2041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:27.553693 systemd-logind[1803]: New session 9 of user core.
Dec 13 01:57:27.570945 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:57:27.635662 sudo[2044]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:57:27.635812 sudo[2044]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:57:27.650348 sudo[2044]: pam_unix(sudo:session): session closed for user root
Dec 13 01:57:27.651417 sshd[2041]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:27.675057 systemd[1]: sshd@6-147.28.180.91:22-147.75.109.163:57546.service: Deactivated successfully.
Dec 13 01:57:27.676360 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:57:27.677554 systemd-logind[1803]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:57:27.678676 systemd[1]: Started sshd@7-147.28.180.91:22-147.75.109.163:57554.service - OpenSSH per-connection server daemon (147.75.109.163:57554).
Dec 13 01:57:27.679710 systemd-logind[1803]: Removed session 9.
Dec 13 01:57:27.743988 sshd[2049]: Accepted publickey for core from 147.75.109.163 port 57554 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:27.745126 sshd[2049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:27.748932 systemd-logind[1803]: New session 10 of user core.
Dec 13 01:57:27.773044 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:57:27.828593 sudo[2053]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:57:27.828754 sudo[2053]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:57:27.830827 sudo[2053]: pam_unix(sudo:session): session closed for user root
Dec 13 01:57:27.833551 sudo[2052]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:57:27.833720 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:57:27.846942 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:57:27.848006 auditctl[2056]: No rules
Dec 13 01:57:27.848240 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:57:27.848364 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:57:27.849917 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:57:27.865605 augenrules[2074]: No rules
Dec 13 01:57:27.865977 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:57:27.866535 sudo[2052]: pam_unix(sudo:session): session closed for user root
Dec 13 01:57:27.867581 sshd[2049]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:27.884620 systemd[1]: sshd@7-147.28.180.91:22-147.75.109.163:57554.service: Deactivated successfully.
Dec 13 01:57:27.885567 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:57:27.886492 systemd-logind[1803]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:57:27.887282 systemd[1]: Started sshd@8-147.28.180.91:22-147.75.109.163:57562.service - OpenSSH per-connection server daemon (147.75.109.163:57562).
Dec 13 01:57:27.887933 systemd-logind[1803]: Removed session 10.
Dec 13 01:57:27.944082 sshd[2082]: Accepted publickey for core from 147.75.109.163 port 57562 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 01:57:27.945268 sshd[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:57:27.949350 systemd-logind[1803]: New session 11 of user core.
Dec 13 01:57:27.963882 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:57:28.017984 sudo[2085]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:57:28.018137 sudo[2085]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:57:28.409994 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:57:28.410088 (dockerd)[2110]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:57:28.666069 dockerd[2110]: time="2024-12-13T01:57:28.665978988Z" level=info msg="Starting up"
Dec 13 01:57:28.738850 dockerd[2110]: time="2024-12-13T01:57:28.738824206Z" level=info msg="Loading containers: start."
Dec 13 01:57:28.834623 kernel: Initializing XFRM netlink socket
Dec 13 01:57:28.886184 systemd-networkd[1608]: docker0: Link UP
Dec 13 01:57:28.903661 dockerd[2110]: time="2024-12-13T01:57:28.903607111Z" level=info msg="Loading containers: done."
Dec 13 01:57:28.911792 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3293008160-merged.mount: Deactivated successfully.
Dec 13 01:57:28.912641 dockerd[2110]: time="2024-12-13T01:57:28.912624485Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:57:28.912682 dockerd[2110]: time="2024-12-13T01:57:28.912673915Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:57:28.912765 dockerd[2110]: time="2024-12-13T01:57:28.912727901Z" level=info msg="Daemon has completed initialization"
Dec 13 01:57:28.929098 dockerd[2110]: time="2024-12-13T01:57:28.929009638Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:57:28.929172 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:57:29.709582 containerd[1821]: time="2024-12-13T01:57:29.709539662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:57:30.296306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789138810.mount: Deactivated successfully.
Dec 13 01:57:31.231860 containerd[1821]: time="2024-12-13T01:57:31.231835456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:31.232063 containerd[1821]: time="2024-12-13T01:57:31.231996636Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 01:57:31.232447 containerd[1821]: time="2024-12-13T01:57:31.232437673Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:31.233979 containerd[1821]: time="2024-12-13T01:57:31.233968421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:31.235076 containerd[1821]: time="2024-12-13T01:57:31.235033557Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.525471702s"
Dec 13 01:57:31.235076 containerd[1821]: time="2024-12-13T01:57:31.235052979Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:57:31.245478 containerd[1821]: time="2024-12-13T01:57:31.245457297Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:57:32.415652 containerd[1821]: time="2024-12-13T01:57:32.415596550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:32.415855 containerd[1821]: time="2024-12-13T01:57:32.415756317Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 01:57:32.416180 containerd[1821]: time="2024-12-13T01:57:32.416140595Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:32.417729 containerd[1821]: time="2024-12-13T01:57:32.417688382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:32.418783 containerd[1821]: time="2024-12-13T01:57:32.418739995Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.173261968s"
Dec 13 01:57:32.418783 containerd[1821]: time="2024-12-13T01:57:32.418757585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:57:32.430762 containerd[1821]: time="2024-12-13T01:57:32.430710989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:57:33.344400 containerd[1821]: time="2024-12-13T01:57:33.344344967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:33.344607 containerd[1821]: time="2024-12-13T01:57:33.344558147Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 01:57:33.344951 containerd[1821]: time="2024-12-13T01:57:33.344909766Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:33.346664 containerd[1821]: time="2024-12-13T01:57:33.346642104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:33.347224 containerd[1821]: time="2024-12-13T01:57:33.347182004Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 916.451132ms"
Dec 13 01:57:33.347224 containerd[1821]: time="2024-12-13T01:57:33.347198910Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:57:33.358206 containerd[1821]: time="2024-12-13T01:57:33.358178167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:57:34.197996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935990082.mount: Deactivated successfully.
Dec 13 01:57:34.359974 containerd[1821]: time="2024-12-13T01:57:34.359921976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:34.360175 containerd[1821]: time="2024-12-13T01:57:34.360153941Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 01:57:34.360509 containerd[1821]: time="2024-12-13T01:57:34.360468451Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:34.361374 containerd[1821]: time="2024-12-13T01:57:34.361331432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:34.361770 containerd[1821]: time="2024-12-13T01:57:34.361728644Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.00352403s"
Dec 13 01:57:34.361770 containerd[1821]: time="2024-12-13T01:57:34.361745555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:57:34.372738 containerd[1821]: time="2024-12-13T01:57:34.372690984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:57:34.904639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554167247.mount: Deactivated successfully.
Dec 13 01:57:35.371299 containerd[1821]: time="2024-12-13T01:57:35.371272494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:35.371506 containerd[1821]: time="2024-12-13T01:57:35.371478831Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 01:57:35.371901 containerd[1821]: time="2024-12-13T01:57:35.371891164Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:35.373471 containerd[1821]: time="2024-12-13T01:57:35.373456133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:35.374121 containerd[1821]: time="2024-12-13T01:57:35.374078272Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.001367409s"
Dec 13 01:57:35.374121 containerd[1821]: time="2024-12-13T01:57:35.374094945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:57:35.384937 containerd[1821]: time="2024-12-13T01:57:35.384918437Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:57:35.587812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:57:35.593914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:35.793895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:35.796974 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:57:35.844590 kubelet[2478]: E1213 01:57:35.844517    2478 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:57:35.846954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:57:35.847112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:57:35.978872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013571240.mount: Deactivated successfully.
Dec 13 01:57:35.980106 containerd[1821]: time="2024-12-13T01:57:35.980053516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:35.980338 containerd[1821]: time="2024-12-13T01:57:35.980290789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 01:57:35.980673 containerd[1821]: time="2024-12-13T01:57:35.980633037Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:35.981750 containerd[1821]: time="2024-12-13T01:57:35.981689828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:35.982233 containerd[1821]: time="2024-12-13T01:57:35.982186846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 597.247199ms"
Dec 13 01:57:35.982233 containerd[1821]: time="2024-12-13T01:57:35.982203569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:57:35.993810 containerd[1821]: time="2024-12-13T01:57:35.993789305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:57:36.525539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381423241.mount: Deactivated successfully.
Dec 13 01:57:37.594545 containerd[1821]: time="2024-12-13T01:57:37.594492165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:37.594789 containerd[1821]: time="2024-12-13T01:57:37.594632534Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 01:57:37.595173 containerd[1821]: time="2024-12-13T01:57:37.595133367Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:37.596888 containerd[1821]: time="2024-12-13T01:57:37.596844052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:57:37.597540 containerd[1821]: time="2024-12-13T01:57:37.597497587Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.603688385s"
Dec 13 01:57:37.597540 containerd[1821]: time="2024-12-13T01:57:37.597515209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:57:39.230096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:39.243973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:39.253362 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-11.scope)...
Dec 13 01:57:39.253369 systemd[1]: Reloading...
Dec 13 01:57:39.303702 zram_generator::config[2747]: No configuration found.
Dec 13 01:57:39.370463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:39.432570 systemd[1]: Reloading finished in 179 ms.
Dec 13 01:57:39.474308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:39.475332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:39.476608 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:57:39.476752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:39.477530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:39.676175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:39.678395 (kubelet)[2817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:57:39.701329 kubelet[2817]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:39.701329 kubelet[2817]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:57:39.701329 kubelet[2817]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:39.701329 kubelet[2817]: I1213 01:57:39.701323    2817 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:57:40.113806 kubelet[2817]: I1213 01:57:40.113761    2817 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:57:40.113806 kubelet[2817]: I1213 01:57:40.113774    2817 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:57:40.113927 kubelet[2817]: I1213 01:57:40.113884    2817 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:57:40.128819 kubelet[2817]: E1213 01:57:40.128788    2817 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.180.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.130733 kubelet[2817]: I1213 01:57:40.130657    2817 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:57:40.143384 kubelet[2817]: I1213 01:57:40.143375    2817 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 01:57:40.143533 kubelet[2817]: I1213 01:57:40.143496    2817 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:57:40.143617 kubelet[2817]: I1213 01:57:40.143581    2817 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:57:40.144019 kubelet[2817]: I1213 01:57:40.143984    2817 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:57:40.144019 kubelet[2817]: I1213 01:57:40.143993    2817 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:57:40.144064 kubelet[2817]: I1213 01:57:40.144042    2817 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:40.144092 kubelet[2817]: I1213 01:57:40.144086    2817 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:57:40.144112 kubelet[2817]: I1213 01:57:40.144094    2817 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:57:40.144112 kubelet[2817]: I1213 01:57:40.144108    2817 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:57:40.144147 kubelet[2817]: I1213 01:57:40.144115    2817 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:57:40.145502 kubelet[2817]: I1213 01:57:40.145490    2817 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:57:40.145874 kubelet[2817]: W1213 01:57:40.145828    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.28.180.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-5a9deb00aa&limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.145874 kubelet[2817]: W1213 01:57:40.145835    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.145874 kubelet[2817]: E1213 01:57:40.145873    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-5a9deb00aa&limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.145941 kubelet[2817]: E1213 01:57:40.145883    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.147952 kubelet[2817]: I1213 01:57:40.147899    2817 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:57:40.148858 kubelet[2817]: W1213 01:57:40.148820    2817 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:57:40.149146 kubelet[2817]: I1213 01:57:40.149094    2817 server.go:1256] "Started kubelet"
Dec 13 01:57:40.149146 kubelet[2817]: I1213 01:57:40.149146    2817 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:57:40.149248 kubelet[2817]: I1213 01:57:40.149180    2817 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:57:40.151950 kubelet[2817]: I1213 01:57:40.151922    2817 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:57:40.152741 kubelet[2817]: I1213 01:57:40.152701    2817 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:57:40.152839 kubelet[2817]: I1213 01:57:40.152795    2817 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:57:40.152839 kubelet[2817]: I1213 01:57:40.152812    2817 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:57:40.152892 kubelet[2817]: I1213 01:57:40.152859    2817 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:57:40.153066 kubelet[2817]: W1213 01:57:40.153040    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.28.180.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.153116 kubelet[2817]: E1213 01:57:40.153072    2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-5a9deb00aa?timeout=10s\": dial tcp 147.28.180.91:6443: connect: connection refused" interval="200ms"
Dec 13 01:57:40.153116 kubelet[2817]: E1213 01:57:40.153077    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.153240 kubelet[2817]: I1213 01:57:40.153233    2817 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:57:40.153314 kubelet[2817]: I1213 01:57:40.153304    2817 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:57:40.154683 kubelet[2817]: I1213 01:57:40.154675    2817 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:57:40.154832 kubelet[2817]: E1213 01:57:40.154822    2817 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:57:40.154993 kubelet[2817]: I1213 01:57:40.154983    2817 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:57:40.155580 kubelet[2817]: E1213 01:57:40.155571    2817 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.91:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.91:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-5a9deb00aa.181099e3c9279ba9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-5a9deb00aa,UID:ci-4081.2.1-a-5a9deb00aa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-5a9deb00aa,},FirstTimestamp:2024-12-13 01:57:40.149083049 +0000 UTC m=+0.468653265,LastTimestamp:2024-12-13 01:57:40.149083049 +0000 UTC m=+0.468653265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-5a9deb00aa,}"
Dec 13 01:57:40.162624 kubelet[2817]: I1213 01:57:40.162605    2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:57:40.163345 kubelet[2817]: I1213 01:57:40.163308    2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:57:40.163345 kubelet[2817]: I1213 01:57:40.163324    2817 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:57:40.163345 kubelet[2817]: I1213 01:57:40.163335    2817 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:57:40.163425 kubelet[2817]: E1213 01:57:40.163366    2817 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:57:40.163580 kubelet[2817]: W1213 01:57:40.163567    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.28.180.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.163608 kubelet[2817]: E1213 01:57:40.163591    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:40.165417 kubelet[2817]: I1213 01:57:40.165392    2817 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:57:40.165417 kubelet[2817]: I1213 01:57:40.165415    2817 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:57:40.165475 kubelet[2817]: I1213 01:57:40.165424    2817 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:40.166258 kubelet[2817]: I1213 01:57:40.166227    2817 policy_none.go:49] "None policy: Start"
Dec 13 01:57:40.166459 kubelet[2817]: I1213 01:57:40.166453    2817 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:57:40.166486 kubelet[2817]: I1213 01:57:40.166463    2817 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:57:40.171543 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:57:40.185353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:57:40.187370 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:57:40.199362 kubelet[2817]: I1213 01:57:40.199321    2817 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:57:40.199523 kubelet[2817]: I1213 01:57:40.199498    2817 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:57:40.200187 kubelet[2817]: E1213 01:57:40.200150    2817 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:40.258568 kubelet[2817]: I1213 01:57:40.258489    2817 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.260629 kubelet[2817]: E1213 01:57:40.260550    2817 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.91:6443/api/v1/nodes\": dial tcp 147.28.180.91:6443: connect: connection refused" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.263663 kubelet[2817]: I1213 01:57:40.263619    2817 topology_manager.go:215] "Topology Admit Handler" podUID="21d5dfccf0bcaed1f8385c09690cf0c8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.264411 kubelet[2817]: I1213 01:57:40.264382    2817 topology_manager.go:215] "Topology Admit Handler" podUID="06cf8e52546e1fdec469888e98966a71" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.265164 kubelet[2817]: I1213 01:57:40.265155    2817 topology_manager.go:215] "Topology Admit Handler" podUID="d8b43084a85231b17bc59be3e448c4fc" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.267952 systemd[1]: Created slice kubepods-burstable-pod21d5dfccf0bcaed1f8385c09690cf0c8.slice - libcontainer container kubepods-burstable-pod21d5dfccf0bcaed1f8385c09690cf0c8.slice.
Dec 13 01:57:40.291218 systemd[1]: Created slice kubepods-burstable-pod06cf8e52546e1fdec469888e98966a71.slice - libcontainer container kubepods-burstable-pod06cf8e52546e1fdec469888e98966a71.slice.
Dec 13 01:57:40.309172 systemd[1]: Created slice kubepods-burstable-podd8b43084a85231b17bc59be3e448c4fc.slice - libcontainer container kubepods-burstable-podd8b43084a85231b17bc59be3e448c4fc.slice.
Dec 13 01:57:40.354011 kubelet[2817]: E1213 01:57:40.353923    2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-5a9deb00aa?timeout=10s\": dial tcp 147.28.180.91:6443: connect: connection refused" interval="400ms"
Dec 13 01:57:40.365399 systemd[1]: Started sshd@9-147.28.180.91:22-218.92.0.204:26580.service - OpenSSH per-connection server daemon (218.92.0.204:26580).
Dec 13 01:57:40.454035 kubelet[2817]: I1213 01:57:40.453928    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21d5dfccf0bcaed1f8385c09690cf0c8-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" (UID: \"21d5dfccf0bcaed1f8385c09690cf0c8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.454275 kubelet[2817]: I1213 01:57:40.454076    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21d5dfccf0bcaed1f8385c09690cf0c8-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" (UID: \"21d5dfccf0bcaed1f8385c09690cf0c8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.454275 kubelet[2817]: I1213 01:57:40.454178    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21d5dfccf0bcaed1f8385c09690cf0c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" (UID: \"21d5dfccf0bcaed1f8385c09690cf0c8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.454275 kubelet[2817]: I1213 01:57:40.454250    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.454650 kubelet[2817]: I1213 01:57:40.454366    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8b43084a85231b17bc59be3e448c4fc-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-5a9deb00aa\" (UID: \"d8b43084a85231b17bc59be3e448c4fc\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.454650 kubelet[2817]: I1213 01:57:40.454474    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.454650 kubelet[2817]: I1213 01:57:40.454542    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.454650 kubelet[2817]: I1213 01:57:40.454602    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.455014 kubelet[2817]: I1213 01:57:40.454705    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.465019 kubelet[2817]: I1213 01:57:40.464939    2817 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.465657 kubelet[2817]: E1213 01:57:40.465559    2817 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.91:6443/api/v1/nodes\": dial tcp 147.28.180.91:6443: connect: connection refused" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.591494 containerd[1821]: time="2024-12-13T01:57:40.591362752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-5a9deb00aa,Uid:21d5dfccf0bcaed1f8385c09690cf0c8,Namespace:kube-system,Attempt:0,}"
Dec 13 01:57:40.607230 containerd[1821]: time="2024-12-13T01:57:40.607194018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-5a9deb00aa,Uid:06cf8e52546e1fdec469888e98966a71,Namespace:kube-system,Attempt:0,}"
Dec 13 01:57:40.612122 containerd[1821]: time="2024-12-13T01:57:40.612037079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-5a9deb00aa,Uid:d8b43084a85231b17bc59be3e448c4fc,Namespace:kube-system,Attempt:0,}"
Dec 13 01:57:40.755162 kubelet[2817]: E1213 01:57:40.755073    2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-5a9deb00aa?timeout=10s\": dial tcp 147.28.180.91:6443: connect: connection refused" interval="800ms"
Dec 13 01:57:40.871037 kubelet[2817]: I1213 01:57:40.870972    2817 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:40.871826 kubelet[2817]: E1213 01:57:40.871741    2817 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.91:6443/api/v1/nodes\": dial tcp 147.28.180.91:6443: connect: connection refused" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:41.031142 kubelet[2817]: W1213 01:57:41.031035    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.28.180.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-5a9deb00aa&limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:41.031142 kubelet[2817]: E1213 01:57:41.031106    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-5a9deb00aa&limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:41.086179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268552412.mount: Deactivated successfully.
Dec 13 01:57:41.087912 containerd[1821]: time="2024-12-13T01:57:41.087879719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:57:41.088608 containerd[1821]: time="2024-12-13T01:57:41.088594992Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:57:41.088801 containerd[1821]: time="2024-12-13T01:57:41.088784909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:57:41.089090 containerd[1821]: time="2024-12-13T01:57:41.089077433Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:57:41.089296 containerd[1821]: time="2024-12-13T01:57:41.089280761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Dec 13 01:57:41.089729 containerd[1821]: time="2024-12-13T01:57:41.089610095Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:57:41.089950 containerd[1821]: time="2024-12-13T01:57:41.089903938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:57:41.091022 containerd[1821]: time="2024-12-13T01:57:41.090982377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:57:41.092558 containerd[1821]: time="2024-12-13T01:57:41.092515187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.429723ms"
Dec 13 01:57:41.094014 containerd[1821]: time="2024-12-13T01:57:41.093971969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.432049ms"
Dec 13 01:57:41.094383 containerd[1821]: time="2024-12-13T01:57:41.094343107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.081753ms"
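[annotation] All three sandboxes pulled the same pause:3.8 image (311286 bytes per the lines above), each in roughly half a second. A throwaway sketch, stdlib only, that lifts size and duration out of one of these containerd messages and derives throughput:

```python
import re

# Tail of the first pull line above, with the journal's escaped quotes undone.
line = 'size "311286" in 480.429723ms'

m = re.search(r'size "(\d+)" in ([\d.]+)(ms|s)\b', line)
size_bytes = int(m.group(1))
seconds = float(m.group(2)) / (1000.0 if m.group(3) == "ms" else 1.0)
print(f"{size_bytes:,} bytes in {seconds:.3f}s -> {size_bytes / seconds / 1e6:.2f} MB/s")
```

At roughly 0.65 MB/s for an image this small, the time is plausibly dominated by registry round-trips rather than bandwidth.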
Dec 13 01:57:41.159022 kubelet[2817]: W1213 01:57:41.158877    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:41.159022 kubelet[2817]: E1213 01:57:41.159012    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused
Dec 13 01:57:41.233187 containerd[1821]: time="2024-12-13T01:57:41.233137637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:41.233187 containerd[1821]: time="2024-12-13T01:57:41.233143467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:41.233350 containerd[1821]: time="2024-12-13T01:57:41.233097825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:41.233372 containerd[1821]: time="2024-12-13T01:57:41.233351905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:41.233372 containerd[1821]: time="2024-12-13T01:57:41.233360561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:41.233402 containerd[1821]: time="2024-12-13T01:57:41.233181782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:41.233402 containerd[1821]: time="2024-12-13T01:57:41.233386693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:41.233436 containerd[1821]: time="2024-12-13T01:57:41.233198861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:41.233436 containerd[1821]: time="2024-12-13T01:57:41.233410530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:41.233469 containerd[1821]: time="2024-12-13T01:57:41.233430199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:41.233469 containerd[1821]: time="2024-12-13T01:57:41.233443977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:41.233501 containerd[1821]: time="2024-12-13T01:57:41.233479464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:41.257841 systemd[1]: Started cri-containerd-4e03c74f9497fd3e09225ab7921c0bfc9e8c15f84342b2d6b47a33a10ed2491f.scope - libcontainer container 4e03c74f9497fd3e09225ab7921c0bfc9e8c15f84342b2d6b47a33a10ed2491f.
Dec 13 01:57:41.258591 systemd[1]: Started cri-containerd-4e6de31823095684f7ef4d4ae3bbfe1f29b08cd352e4872f80e0a1b6f24a64de.scope - libcontainer container 4e6de31823095684f7ef4d4ae3bbfe1f29b08cd352e4872f80e0a1b6f24a64de.
Dec 13 01:57:41.259550 systemd[1]: Started cri-containerd-7c142621672098a09320b7cea95f85e6c9406250874192d71889fe7eaa934929.scope - libcontainer container 7c142621672098a09320b7cea95f85e6c9406250874192d71889fe7eaa934929.
Dec 13 01:57:41.281569 containerd[1821]: time="2024-12-13T01:57:41.281499419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-5a9deb00aa,Uid:06cf8e52546e1fdec469888e98966a71,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e03c74f9497fd3e09225ab7921c0bfc9e8c15f84342b2d6b47a33a10ed2491f\""
Dec 13 01:57:41.281973 containerd[1821]: time="2024-12-13T01:57:41.281960834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-5a9deb00aa,Uid:d8b43084a85231b17bc59be3e448c4fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e6de31823095684f7ef4d4ae3bbfe1f29b08cd352e4872f80e0a1b6f24a64de\""
Dec 13 01:57:41.283061 containerd[1821]: time="2024-12-13T01:57:41.283043093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-5a9deb00aa,Uid:21d5dfccf0bcaed1f8385c09690cf0c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c142621672098a09320b7cea95f85e6c9406250874192d71889fe7eaa934929\""
Dec 13 01:57:41.283385 containerd[1821]: time="2024-12-13T01:57:41.283370255Z" level=info msg="CreateContainer within sandbox \"4e6de31823095684f7ef4d4ae3bbfe1f29b08cd352e4872f80e0a1b6f24a64de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:57:41.283424 containerd[1821]: time="2024-12-13T01:57:41.283413088Z" level=info msg="CreateContainer within sandbox \"4e03c74f9497fd3e09225ab7921c0bfc9e8c15f84342b2d6b47a33a10ed2491f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:57:41.284076 containerd[1821]: time="2024-12-13T01:57:41.284062416Z" level=info msg="CreateContainer within sandbox \"7c142621672098a09320b7cea95f85e6c9406250874192d71889fe7eaa934929\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:57:41.289837 containerd[1821]: time="2024-12-13T01:57:41.289795705Z" level=info msg="CreateContainer within sandbox \"4e03c74f9497fd3e09225ab7921c0bfc9e8c15f84342b2d6b47a33a10ed2491f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e988b9806987e172f1fb68dabb9b8e80115132a4571de858399ccce90a73839a\""
Dec 13 01:57:41.290088 containerd[1821]: time="2024-12-13T01:57:41.290075905Z" level=info msg="StartContainer for \"e988b9806987e172f1fb68dabb9b8e80115132a4571de858399ccce90a73839a\""
Dec 13 01:57:41.291347 containerd[1821]: time="2024-12-13T01:57:41.291332528Z" level=info msg="CreateContainer within sandbox \"7c142621672098a09320b7cea95f85e6c9406250874192d71889fe7eaa934929\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c67c08bd19cc66c97ccd382005b94e2dd49c018bf60c08cd2d1d3fcea1c9d207\""
Dec 13 01:57:41.291562 containerd[1821]: time="2024-12-13T01:57:41.291551274Z" level=info msg="StartContainer for \"c67c08bd19cc66c97ccd382005b94e2dd49c018bf60c08cd2d1d3fcea1c9d207\""
Dec 13 01:57:41.291732 containerd[1821]: time="2024-12-13T01:57:41.291702414Z" level=info msg="CreateContainer within sandbox \"4e6de31823095684f7ef4d4ae3bbfe1f29b08cd352e4872f80e0a1b6f24a64de\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"60ae8ee2470b7caf52afa7d370811ee402838804e9635d1b1101b5d0ff41b175\""
Dec 13 01:57:41.291898 containerd[1821]: time="2024-12-13T01:57:41.291887905Z" level=info msg="StartContainer for \"60ae8ee2470b7caf52afa7d370811ee402838804e9635d1b1101b5d0ff41b175\""
Dec 13 01:57:41.316971 systemd[1]: Started cri-containerd-e988b9806987e172f1fb68dabb9b8e80115132a4571de858399ccce90a73839a.scope - libcontainer container e988b9806987e172f1fb68dabb9b8e80115132a4571de858399ccce90a73839a.
Dec 13 01:57:41.319433 systemd[1]: Started cri-containerd-60ae8ee2470b7caf52afa7d370811ee402838804e9635d1b1101b5d0ff41b175.scope - libcontainer container 60ae8ee2470b7caf52afa7d370811ee402838804e9635d1b1101b5d0ff41b175.
Dec 13 01:57:41.320026 systemd[1]: Started cri-containerd-c67c08bd19cc66c97ccd382005b94e2dd49c018bf60c08cd2d1d3fcea1c9d207.scope - libcontainer container c67c08bd19cc66c97ccd382005b94e2dd49c018bf60c08cd2d1d3fcea1c9d207.
Dec 13 01:57:41.342553 containerd[1821]: time="2024-12-13T01:57:41.342526214Z" level=info msg="StartContainer for \"60ae8ee2470b7caf52afa7d370811ee402838804e9635d1b1101b5d0ff41b175\" returns successfully"
Dec 13 01:57:41.342553 containerd[1821]: time="2024-12-13T01:57:41.342549796Z" level=info msg="StartContainer for \"e988b9806987e172f1fb68dabb9b8e80115132a4571de858399ccce90a73839a\" returns successfully"
Dec 13 01:57:41.345804 containerd[1821]: time="2024-12-13T01:57:41.345775739Z" level=info msg="StartContainer for \"c67c08bd19cc66c97ccd382005b94e2dd49c018bf60c08cd2d1d3fcea1c9d207\" returns successfully"
Dec 13 01:57:41.673523 kubelet[2817]: I1213 01:57:41.673509    2817 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:41.957761 kubelet[2817]: E1213 01:57:41.957672    2817 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-5a9deb00aa\" not found" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:42.060335 kubelet[2817]: I1213 01:57:42.060282    2817 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:42.065464 kubelet[2817]: E1213 01:57:42.065448    2817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:42.165784 kubelet[2817]: E1213 01:57:42.165665    2817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:42.266592 kubelet[2817]: E1213 01:57:42.266368    2817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:42.367551 kubelet[2817]: E1213 01:57:42.367444    2817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:42.468650 kubelet[2817]: E1213 01:57:42.468475    2817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:42.569781 kubelet[2817]: E1213 01:57:42.569564    2817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:42.670265 kubelet[2817]: E1213 01:57:42.670147    2817 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-5a9deb00aa\" not found"
Dec 13 01:57:43.146897 kubelet[2817]: I1213 01:57:43.146792    2817 apiserver.go:52] "Watching apiserver"
Dec 13 01:57:43.154019 kubelet[2817]: I1213 01:57:43.153925    2817 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:57:43.192184 kubelet[2817]: W1213 01:57:43.192132    2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:43.985189 kubelet[2817]: W1213 01:57:43.985138    2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:44.839937 systemd[1]: Reloading requested from client PID 3136 ('systemctl') (unit session-11.scope)...
Dec 13 01:57:44.839951 systemd[1]: Reloading...
Dec 13 01:57:44.895684 zram_generator::config[3177]: No configuration found.
Dec 13 01:57:44.917317 kubelet[2817]: W1213 01:57:44.917301    2817 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:44.969571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:45.037006 systemd[1]: Reloading finished in 196 ms.
Dec 13 01:57:45.065464 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:45.071923 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:57:45.072053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:45.083960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:57:45.278839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:57:45.282422 (kubelet)[3241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:57:45.321155 kubelet[3241]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:45.321155 kubelet[3241]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:57:45.321155 kubelet[3241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:45.321484 kubelet[3241]: I1213 01:57:45.321404    3241 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:57:45.325036 kubelet[3241]: I1213 01:57:45.324991    3241 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:57:45.325036 kubelet[3241]: I1213 01:57:45.325010    3241 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:57:45.325221 kubelet[3241]: I1213 01:57:45.325187    3241 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:57:45.326518 kubelet[3241]: I1213 01:57:45.326503    3241 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
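[annotation] Client rotation means the kubelet bootstrapped with this cert/key pair and will replace it before expiry; the current pair lives behind the symlinked path shown. A sketch for inspecting its validity window, assuming the third-party cryptography package is installed and that the CERTIFICATE block precedes the key in the file:

```python
from cryptography import x509  # assumption: pip package "cryptography"

# kubelet-client-current.pem holds cert and key concatenated;
# load_pem_x509_certificate parses the first CERTIFICATE block it finds.
pem = open("/var/lib/kubelet/pki/kubelet-client-current.pem", "rb").read()
cert = x509.load_pem_x509_certificate(pem)
print("subject:", cert.subject.rfc4514_string())
print("expires:", cert.not_valid_after)
```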
Dec 13 01:57:45.329153 kubelet[3241]: I1213 01:57:45.329110    3241 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:57:45.338856 kubelet[3241]: I1213 01:57:45.338842    3241 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 01:57:45.338982 kubelet[3241]: I1213 01:57:45.338952    3241 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:57:45.339077 kubelet[3241]: I1213 01:57:45.339047    3241 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
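[annotation] Buried in that NodeConfig dump is the kubelet's hard-eviction table. A small stdlib sketch that decodes just the HardEvictionThresholds fragment (copied from the line above) into readable form:

```python
import json

hard_eviction = json.loads("""
[{"Signal":"nodefs.available","Operator":"LessThan",
  "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan",
  "Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan",
  "Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"memory.available","Operator":"LessThan",
  "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}]
""")

for t in hard_eviction:
    value = t["Value"]
    # Thresholds are either an absolute quantity or a percentage of capacity.
    limit = value["Quantity"] or f'{value["Percentage"]:.0%}'
    print(f'{t["Signal"]} {t["Operator"]} {limit}')
# nodefs.available LessThan 10%
# nodefs.inodesFree LessThan 5%
# imagefs.available LessThan 15%
# memory.available LessThan 100Mi
```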
Dec 13 01:57:45.339077 kubelet[3241]: I1213 01:57:45.339060    3241 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:57:45.339077 kubelet[3241]: I1213 01:57:45.339066    3241 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:57:45.339161 kubelet[3241]: I1213 01:57:45.339082    3241 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:45.339161 kubelet[3241]: I1213 01:57:45.339132    3241 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:57:45.339161 kubelet[3241]: I1213 01:57:45.339139    3241 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:57:45.339161 kubelet[3241]: I1213 01:57:45.339151    3241 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:57:45.339161 kubelet[3241]: I1213 01:57:45.339159    3241 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:57:45.339476 kubelet[3241]: I1213 01:57:45.339465    3241 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:57:45.339601 kubelet[3241]: I1213 01:57:45.339594    3241 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:57:45.339894 kubelet[3241]: I1213 01:57:45.339884    3241 server.go:1256] "Started kubelet"
Dec 13 01:57:45.339952 kubelet[3241]: I1213 01:57:45.339942    3241 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:57:45.339988 kubelet[3241]: I1213 01:57:45.339955    3241 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:57:45.340070 kubelet[3241]: I1213 01:57:45.340062    3241 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:57:45.340699 kubelet[3241]: I1213 01:57:45.340690    3241 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:57:45.340738 kubelet[3241]: I1213 01:57:45.340723    3241 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:57:45.340791 kubelet[3241]: I1213 01:57:45.340775    3241 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:57:45.340791 kubelet[3241]: I1213 01:57:45.340776    3241 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:57:45.340881 kubelet[3241]: I1213 01:57:45.340873    3241 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:57:45.341024 kubelet[3241]: I1213 01:57:45.341016    3241 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:57:45.341090 kubelet[3241]: E1213 01:57:45.341079    3241 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:57:45.341090 kubelet[3241]: I1213 01:57:45.341077    3241 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:57:45.342657 kubelet[3241]: I1213 01:57:45.342643    3241 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:57:45.346277 kubelet[3241]: I1213 01:57:45.346258    3241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:57:45.346820 kubelet[3241]: I1213 01:57:45.346814    3241 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:57:45.346849 kubelet[3241]: I1213 01:57:45.346829    3241 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:57:45.346849 kubelet[3241]: I1213 01:57:45.346839    3241 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:57:45.346892 kubelet[3241]: E1213 01:57:45.346867    3241 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:57:45.357562 kubelet[3241]: I1213 01:57:45.357518    3241 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:57:45.357562 kubelet[3241]: I1213 01:57:45.357529    3241 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:57:45.357562 kubelet[3241]: I1213 01:57:45.357537    3241 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:45.357672 kubelet[3241]: I1213 01:57:45.357623    3241 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:57:45.357672 kubelet[3241]: I1213 01:57:45.357637    3241 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:57:45.357672 kubelet[3241]: I1213 01:57:45.357641    3241 policy_none.go:49] "None policy: Start"
Dec 13 01:57:45.357946 kubelet[3241]: I1213 01:57:45.357906    3241 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:57:45.357946 kubelet[3241]: I1213 01:57:45.357921    3241 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:57:45.358056 kubelet[3241]: I1213 01:57:45.358022    3241 state_mem.go:75] "Updated machine memory state"
Dec 13 01:57:45.360061 kubelet[3241]: I1213 01:57:45.360052    3241 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:57:45.360182 kubelet[3241]: I1213 01:57:45.360175    3241 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:57:45.442727 kubelet[3241]: I1213 01:57:45.442710    3241 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.446230 kubelet[3241]: I1213 01:57:45.446220    3241 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.446272 kubelet[3241]: I1213 01:57:45.446260    3241 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.447115 kubelet[3241]: I1213 01:57:45.447077    3241 topology_manager.go:215] "Topology Admit Handler" podUID="06cf8e52546e1fdec469888e98966a71" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.447145 kubelet[3241]: I1213 01:57:45.447123    3241 topology_manager.go:215] "Topology Admit Handler" podUID="d8b43084a85231b17bc59be3e448c4fc" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.447145 kubelet[3241]: I1213 01:57:45.447143    3241 topology_manager.go:215] "Topology Admit Handler" podUID="21d5dfccf0bcaed1f8385c09690cf0c8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.450297 kubelet[3241]: W1213 01:57:45.450285    3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:45.450366 kubelet[3241]: E1213 01:57:45.450323    3241 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" already exists" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.450518 kubelet[3241]: W1213 01:57:45.450508    3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:45.450544 kubelet[3241]: E1213 01:57:45.450537    3241 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.1-a-5a9deb00aa\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.450544 kubelet[3241]: W1213 01:57:45.450539    3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:45.450575 kubelet[3241]: E1213 01:57:45.450563    3241 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
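[annotation] These repeated warnings all share one cause: the node name ci-4081.2.1-a-5a9deb00aa contains dots, so it is a valid DNS subdomain but not a single RFC 1123 DNS label, and pod hostnames are derived from it. A sketch of the label check being applied (the dot-free variant is a made-up illustration):

```python
import re

# RFC 1123 label: lowercase alphanumerics and "-", 1-63 chars,
# starting and ending with an alphanumeric. Dots are not allowed.
DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

for name in ("ci-4081.2.1-a-5a9deb00aa", "ci-4081-2-1-a-5a9deb00aa"):
    ok = bool(DNS_LABEL.match(name))
    print(f"{name}: {'ok' if ok else 'not a DNS label (contains dots)'}")
```

The warning is advisory here; the mirror-pod errors alongside it are the separate, benign "already exists" case after the kubelet restart.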
Dec 13 01:57:45.642543 kubelet[3241]: I1213 01:57:45.642521    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642543 kubelet[3241]: I1213 01:57:45.642547    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642823 kubelet[3241]: I1213 01:57:45.642560    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642823 kubelet[3241]: I1213 01:57:45.642572    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642823 kubelet[3241]: I1213 01:57:45.642620    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06cf8e52546e1fdec469888e98966a71-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" (UID: \"06cf8e52546e1fdec469888e98966a71\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642823 kubelet[3241]: I1213 01:57:45.642654    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8b43084a85231b17bc59be3e448c4fc-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-5a9deb00aa\" (UID: \"d8b43084a85231b17bc59be3e448c4fc\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642823 kubelet[3241]: I1213 01:57:45.642707    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21d5dfccf0bcaed1f8385c09690cf0c8-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" (UID: \"21d5dfccf0bcaed1f8385c09690cf0c8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642989 kubelet[3241]: I1213 01:57:45.642749    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21d5dfccf0bcaed1f8385c09690cf0c8-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" (UID: \"21d5dfccf0bcaed1f8385c09690cf0c8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:45.642989 kubelet[3241]: I1213 01:57:45.642821    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21d5dfccf0bcaed1f8385c09690cf0c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" (UID: \"21d5dfccf0bcaed1f8385c09690cf0c8\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:46.340035 kubelet[3241]: I1213 01:57:46.339985    3241 apiserver.go:52] "Watching apiserver"
Dec 13 01:57:46.352596 kubelet[3241]: W1213 01:57:46.352583    3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:46.352693 kubelet[3241]: E1213 01:57:46.352621    3241 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-5a9deb00aa\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:46.352797 kubelet[3241]: W1213 01:57:46.352789    3241 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 01:57:46.352835 kubelet[3241]: E1213 01:57:46.352827    3241 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.1-a-5a9deb00aa\" already exists" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:57:46.364026 kubelet[3241]: I1213 01:57:46.363972    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-5a9deb00aa" podStartSLOduration=3.363947306 podStartE2EDuration="3.363947306s" podCreationTimestamp="2024-12-13 01:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:46.360399016 +0000 UTC m=+1.074578666" watchObservedRunningTime="2024-12-13 01:57:46.363947306 +0000 UTC m=+1.078126951"
Dec 13 01:57:46.367973 kubelet[3241]: I1213 01:57:46.367934    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-5a9deb00aa" podStartSLOduration=2.367919813 podStartE2EDuration="2.367919813s" podCreationTimestamp="2024-12-13 01:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:46.364036363 +0000 UTC m=+1.078216007" watchObservedRunningTime="2024-12-13 01:57:46.367919813 +0000 UTC m=+1.082099455"
Dec 13 01:57:46.372077 kubelet[3241]: I1213 01:57:46.372066    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-5a9deb00aa" podStartSLOduration=3.372047539 podStartE2EDuration="3.372047539s" podCreationTimestamp="2024-12-13 01:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:46.367915638 +0000 UTC m=+1.082095283" watchObservedRunningTime="2024-12-13 01:57:46.372047539 +0000 UTC m=+1.086227184"
Dec 13 01:57:46.442046 kubelet[3241]: I1213 01:57:46.441950    3241 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:57:48.819368 sudo[2085]: pam_unix(sudo:session): session closed for user root
Dec 13 01:57:48.820330 sshd[2082]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:48.821795 systemd[1]: sshd@8-147.28.180.91:22-147.75.109.163:57562.service: Deactivated successfully.
Dec 13 01:57:48.822630 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:57:48.822758 systemd[1]: session-11.scope: Consumed 3.164s CPU time, 202.9M memory peak, 0B memory swap peak.
Dec 13 01:57:48.823333 systemd-logind[1803]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:57:48.823848 systemd-logind[1803]: Removed session 11.
Dec 13 01:57:59.030860 update_engine[1808]: I20241213 01:57:59.030696  1808 update_attempter.cc:509] Updating boot flags...
Dec 13 01:57:59.062622 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (3417)
Dec 13 01:57:59.088621 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (3416)
Dec 13 01:57:59.113623 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 41 scanned by (udev-worker) (3416)
Dec 13 01:57:59.360885 kubelet[3241]: I1213 01:57:59.360786    3241 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:57:59.361928 containerd[1821]: time="2024-12-13T01:57:59.361561719Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:57:59.362598 kubelet[3241]: I1213 01:57:59.362120    3241 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
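[annotation] The kubelet has just pushed the node's pod CIDR, 192.168.0.0/24, down to containerd through the CRI runtime config; that block bounds how many pod IPs this node can hand out. A quick stdlib sanity check of what the range provides:

```python
import ipaddress

cidr = ipaddress.ip_network("192.168.0.0/24")
print(cidr.num_addresses)            # 256 addresses in the block
print(sum(1 for _ in cidr.hosts()))  # 254 usable host addresses
```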
Dec 13 01:58:00.231256 kubelet[3241]: I1213 01:58:00.231167    3241 topology_manager.go:215] "Topology Admit Handler" podUID="505a16e1-2bac-460a-bef1-ed2af68f4d24" podNamespace="kube-system" podName="kube-proxy-b9txf"
Dec 13 01:58:00.249114 systemd[1]: Created slice kubepods-besteffort-pod505a16e1_2bac_460a_bef1_ed2af68f4d24.slice - libcontainer container kubepods-besteffort-pod505a16e1_2bac_460a_bef1_ed2af68f4d24.slice.
Dec 13 01:58:00.251771 kubelet[3241]: I1213 01:58:00.251704    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/505a16e1-2bac-460a-bef1-ed2af68f4d24-lib-modules\") pod \"kube-proxy-b9txf\" (UID: \"505a16e1-2bac-460a-bef1-ed2af68f4d24\") " pod="kube-system/kube-proxy-b9txf"
Dec 13 01:58:00.252041 kubelet[3241]: I1213 01:58:00.251884    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bp2g\" (UniqueName: \"kubernetes.io/projected/505a16e1-2bac-460a-bef1-ed2af68f4d24-kube-api-access-6bp2g\") pod \"kube-proxy-b9txf\" (UID: \"505a16e1-2bac-460a-bef1-ed2af68f4d24\") " pod="kube-system/kube-proxy-b9txf"
Dec 13 01:58:00.252220 kubelet[3241]: I1213 01:58:00.252054    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/505a16e1-2bac-460a-bef1-ed2af68f4d24-kube-proxy\") pod \"kube-proxy-b9txf\" (UID: \"505a16e1-2bac-460a-bef1-ed2af68f4d24\") " pod="kube-system/kube-proxy-b9txf"
Dec 13 01:58:00.252220 kubelet[3241]: I1213 01:58:00.252154    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/505a16e1-2bac-460a-bef1-ed2af68f4d24-xtables-lock\") pod \"kube-proxy-b9txf\" (UID: \"505a16e1-2bac-460a-bef1-ed2af68f4d24\") " pod="kube-system/kube-proxy-b9txf"
Dec 13 01:58:00.412288 kubelet[3241]: I1213 01:58:00.412250    3241 topology_manager.go:215] "Topology Admit Handler" podUID="80688a39-c535-4971-9810-65c0f426a661" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-hfdc4"
Dec 13 01:58:00.420778 systemd[1]: Created slice kubepods-besteffort-pod80688a39_c535_4971_9810_65c0f426a661.slice - libcontainer container kubepods-besteffort-pod80688a39_c535_4971_9810_65c0f426a661.slice.
Dec 13 01:58:00.454244 kubelet[3241]: I1213 01:58:00.454173    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97m6b\" (UniqueName: \"kubernetes.io/projected/80688a39-c535-4971-9810-65c0f426a661-kube-api-access-97m6b\") pod \"tigera-operator-c7ccbd65-hfdc4\" (UID: \"80688a39-c535-4971-9810-65c0f426a661\") " pod="tigera-operator/tigera-operator-c7ccbd65-hfdc4"
Dec 13 01:58:00.454493 kubelet[3241]: I1213 01:58:00.454302    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80688a39-c535-4971-9810-65c0f426a661-var-lib-calico\") pod \"tigera-operator-c7ccbd65-hfdc4\" (UID: \"80688a39-c535-4971-9810-65c0f426a661\") " pod="tigera-operator/tigera-operator-c7ccbd65-hfdc4"
Dec 13 01:58:00.564501 containerd[1821]: time="2024-12-13T01:58:00.564312676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9txf,Uid:505a16e1-2bac-460a-bef1-ed2af68f4d24,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:00.575722 containerd[1821]: time="2024-12-13T01:58:00.575683993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:00.575722 containerd[1821]: time="2024-12-13T01:58:00.575707819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:00.575722 containerd[1821]: time="2024-12-13T01:58:00.575714497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:00.575830 containerd[1821]: time="2024-12-13T01:58:00.575755341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:00.593920 systemd[1]: Started cri-containerd-0a5b7f793e44c8ad0459927b9398d3f3ff32b91af4c69029d62d132fc12fb1d0.scope - libcontainer container 0a5b7f793e44c8ad0459927b9398d3f3ff32b91af4c69029d62d132fc12fb1d0.
Dec 13 01:58:00.604188 containerd[1821]: time="2024-12-13T01:58:00.604165488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9txf,Uid:505a16e1-2bac-460a-bef1-ed2af68f4d24,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a5b7f793e44c8ad0459927b9398d3f3ff32b91af4c69029d62d132fc12fb1d0\""
Dec 13 01:58:00.605609 containerd[1821]: time="2024-12-13T01:58:00.605594380Z" level=info msg="CreateContainer within sandbox \"0a5b7f793e44c8ad0459927b9398d3f3ff32b91af4c69029d62d132fc12fb1d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:58:00.611196 containerd[1821]: time="2024-12-13T01:58:00.611151532Z" level=info msg="CreateContainer within sandbox \"0a5b7f793e44c8ad0459927b9398d3f3ff32b91af4c69029d62d132fc12fb1d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8be868e47f43c25d0b5bcb3db496dcc28ecf20c6b346d6043561dfdbcc11e989\""
Dec 13 01:58:00.611515 containerd[1821]: time="2024-12-13T01:58:00.611503806Z" level=info msg="StartContainer for \"8be868e47f43c25d0b5bcb3db496dcc28ecf20c6b346d6043561dfdbcc11e989\""
Dec 13 01:58:00.637920 systemd[1]: Started cri-containerd-8be868e47f43c25d0b5bcb3db496dcc28ecf20c6b346d6043561dfdbcc11e989.scope - libcontainer container 8be868e47f43c25d0b5bcb3db496dcc28ecf20c6b346d6043561dfdbcc11e989.
Dec 13 01:58:00.652021 containerd[1821]: time="2024-12-13T01:58:00.651992420Z" level=info msg="StartContainer for \"8be868e47f43c25d0b5bcb3db496dcc28ecf20c6b346d6043561dfdbcc11e989\" returns successfully"
Dec 13 01:58:00.724258 containerd[1821]: time="2024-12-13T01:58:00.724180459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-hfdc4,Uid:80688a39-c535-4971-9810-65c0f426a661,Namespace:tigera-operator,Attempt:0,}"
Dec 13 01:58:00.750166 containerd[1821]: time="2024-12-13T01:58:00.750002894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:00.750166 containerd[1821]: time="2024-12-13T01:58:00.750159671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:00.750258 containerd[1821]: time="2024-12-13T01:58:00.750169593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:00.750258 containerd[1821]: time="2024-12-13T01:58:00.750218605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:00.762944 systemd[1]: Started cri-containerd-f62380ce285ff4d7f16c97285b9490fb8acf57d9c241362b7dec54c9714a21b9.scope - libcontainer container f62380ce285ff4d7f16c97285b9490fb8acf57d9c241362b7dec54c9714a21b9.
Dec 13 01:58:00.784110 containerd[1821]: time="2024-12-13T01:58:00.784089146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-hfdc4,Uid:80688a39-c535-4971-9810-65c0f426a661,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f62380ce285ff4d7f16c97285b9490fb8acf57d9c241362b7dec54c9714a21b9\""
Dec 13 01:58:00.784797 containerd[1821]: time="2024-12-13T01:58:00.784785071Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 01:58:01.393970 kubelet[3241]: I1213 01:58:01.393951    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b9txf" podStartSLOduration=1.393916564 podStartE2EDuration="1.393916564s" podCreationTimestamp="2024-12-13 01:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:01.393881431 +0000 UTC m=+16.108061081" watchObservedRunningTime="2024-12-13 01:58:01.393916564 +0000 UTC m=+16.108096208"
Dec 13 01:58:04.644980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount462184388.mount: Deactivated successfully.
Dec 13 01:58:04.849854 containerd[1821]: time="2024-12-13T01:58:04.849803572Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:04.850071 containerd[1821]: time="2024-12-13T01:58:04.849978830Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764317"
Dec 13 01:58:04.850283 containerd[1821]: time="2024-12-13T01:58:04.850245520Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:04.851401 containerd[1821]: time="2024-12-13T01:58:04.851360871Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:04.851881 containerd[1821]: time="2024-12-13T01:58:04.851838828Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.067038247s"
Dec 13 01:58:04.851881 containerd[1821]: time="2024-12-13T01:58:04.851854966Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 01:58:04.852725 containerd[1821]: time="2024-12-13T01:58:04.852713848Z" level=info msg="CreateContainer within sandbox \"f62380ce285ff4d7f16c97285b9490fb8acf57d9c241362b7dec54c9714a21b9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 01:58:04.856179 containerd[1821]: time="2024-12-13T01:58:04.856135885Z" level=info msg="CreateContainer within sandbox \"f62380ce285ff4d7f16c97285b9490fb8acf57d9c241362b7dec54c9714a21b9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fecda1331e96896b509b12205561d6ab5acd3a0617f7f60541c43ba2a8f592e6\""
Dec 13 01:58:04.856321 containerd[1821]: time="2024-12-13T01:58:04.856309188Z" level=info msg="StartContainer for \"fecda1331e96896b509b12205561d6ab5acd3a0617f7f60541c43ba2a8f592e6\""
Dec 13 01:58:04.877730 systemd[1]: Started cri-containerd-fecda1331e96896b509b12205561d6ab5acd3a0617f7f60541c43ba2a8f592e6.scope - libcontainer container fecda1331e96896b509b12205561d6ab5acd3a0617f7f60541c43ba2a8f592e6.
Dec 13 01:58:04.889683 containerd[1821]: time="2024-12-13T01:58:04.889658130Z" level=info msg="StartContainer for \"fecda1331e96896b509b12205561d6ab5acd3a0617f7f60541c43ba2a8f592e6\" returns successfully"
Dec 13 01:58:05.427118 kubelet[3241]: I1213 01:58:05.427096    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-hfdc4" podStartSLOduration=1.359597213 podStartE2EDuration="5.427036923s" podCreationTimestamp="2024-12-13 01:58:00 +0000 UTC" firstStartedPulling="2024-12-13 01:58:00.784582189 +0000 UTC m=+15.498761833" lastFinishedPulling="2024-12-13 01:58:04.852021899 +0000 UTC m=+19.566201543" observedRunningTime="2024-12-13 01:58:05.426999681 +0000 UTC m=+20.141179325" watchObservedRunningTime="2024-12-13 01:58:05.427036923 +0000 UTC m=+20.141216565"
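[annotation] Unlike the earlier pods, this one has real firstStartedPulling/lastFinishedPulling timestamps, so the gap between them should reproduce the 4.067038247s containerd reported at 01:58:04. A quick cross-check (timestamps from the line above, truncated to microseconds since %f parses at most six fractional digits):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
started = datetime.strptime("2024-12-13 01:58:00.784582", fmt)
finished = datetime.strptime("2024-12-13 01:58:04.852021", fmt)
print(finished - started)  # 0:00:04.067439, within a ms of containerd's figure
```

The remaining gap between podStartSLOduration (1.36s) and the E2E duration (5.43s) is exactly this image pull, which the SLO metric excludes.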
Dec 13 01:58:07.859118 kubelet[3241]: I1213 01:58:07.859083    3241 topology_manager.go:215] "Topology Admit Handler" podUID="1fba57cf-c153-4318-bdb8-19f2b2d32285" podNamespace="calico-system" podName="calico-typha-5b4bcc57b4-sss9s"
Dec 13 01:58:07.866630 systemd[1]: Created slice kubepods-besteffort-pod1fba57cf_c153_4318_bdb8_19f2b2d32285.slice - libcontainer container kubepods-besteffort-pod1fba57cf_c153_4318_bdb8_19f2b2d32285.slice.
Dec 13 01:58:07.882785 kubelet[3241]: I1213 01:58:07.882764    3241 topology_manager.go:215] "Topology Admit Handler" podUID="9898e09d-0836-4458-b740-3b468b41d60c" podNamespace="calico-system" podName="calico-node-jpmxm"
Dec 13 01:58:07.886127 systemd[1]: Created slice kubepods-besteffort-pod9898e09d_0836_4458_b740_3b468b41d60c.slice - libcontainer container kubepods-besteffort-pod9898e09d_0836_4458_b740_3b468b41d60c.slice.
Dec 13 01:58:07.908027 kubelet[3241]: I1213 01:58:07.907980    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-policysync\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908027 kubelet[3241]: I1213 01:58:07.908015    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-cni-bin-dir\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908139 kubelet[3241]: I1213 01:58:07.908052    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-var-run-calico\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908139 kubelet[3241]: I1213 01:58:07.908073    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-cni-log-dir\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908139 kubelet[3241]: I1213 01:58:07.908100    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fba57cf-c153-4318-bdb8-19f2b2d32285-tigera-ca-bundle\") pod \"calico-typha-5b4bcc57b4-sss9s\" (UID: \"1fba57cf-c153-4318-bdb8-19f2b2d32285\") " pod="calico-system/calico-typha-5b4bcc57b4-sss9s"
Dec 13 01:58:07.908139 kubelet[3241]: I1213 01:58:07.908130    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-xtables-lock\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908216 kubelet[3241]: I1213 01:58:07.908144    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9898e09d-0836-4458-b740-3b468b41d60c-tigera-ca-bundle\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908216 kubelet[3241]: I1213 01:58:07.908168    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-cni-net-dir\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908216 kubelet[3241]: I1213 01:58:07.908210    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9898e09d-0836-4458-b740-3b468b41d60c-node-certs\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908274 kubelet[3241]: I1213 01:58:07.908224    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h22ws\" (UniqueName: \"kubernetes.io/projected/9898e09d-0836-4458-b740-3b468b41d60c-kube-api-access-h22ws\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908274 kubelet[3241]: I1213 01:58:07.908237    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98hf\" (UniqueName: \"kubernetes.io/projected/1fba57cf-c153-4318-bdb8-19f2b2d32285-kube-api-access-w98hf\") pod \"calico-typha-5b4bcc57b4-sss9s\" (UID: \"1fba57cf-c153-4318-bdb8-19f2b2d32285\") " pod="calico-system/calico-typha-5b4bcc57b4-sss9s"
Dec 13 01:58:07.908274 kubelet[3241]: I1213 01:58:07.908249    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-lib-modules\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908274 kubelet[3241]: I1213 01:58:07.908262    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-flexvol-driver-host\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:07.908351 kubelet[3241]: I1213 01:58:07.908290    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1fba57cf-c153-4318-bdb8-19f2b2d32285-typha-certs\") pod \"calico-typha-5b4bcc57b4-sss9s\" (UID: \"1fba57cf-c153-4318-bdb8-19f2b2d32285\") " pod="calico-system/calico-typha-5b4bcc57b4-sss9s"
Dec 13 01:58:07.908351 kubelet[3241]: I1213 01:58:07.908328    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9898e09d-0836-4458-b740-3b468b41d60c-var-lib-calico\") pod \"calico-node-jpmxm\" (UID: \"9898e09d-0836-4458-b740-3b468b41d60c\") " pod="calico-system/calico-node-jpmxm"
Dec 13 01:58:08.011392 kubelet[3241]: E1213 01:58:08.011315    3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:58:08.011392 kubelet[3241]: W1213 01:58:08.011385    3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:58:08.011961 kubelet[3241]: E1213 01:58:08.011471    3241 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:58:08.028740 kubelet[3241]: I1213 01:58:08.028690    3241 topology_manager.go:215] "Topology Admit Handler" podUID="d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1" podNamespace="calico-system" podName="csi-node-driver-mb2fq"
Dec 13 01:58:08.029314 kubelet[3241]: E1213 01:58:08.029273    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb2fq" podUID="d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1"
Dec 13 01:58:08.111091 kubelet[3241]: I1213 01:58:08.111060    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1-registration-dir\") pod \"csi-node-driver-mb2fq\" (UID: \"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1\") " pod="calico-system/csi-node-driver-mb2fq"
Dec 13 01:58:08.111239 kubelet[3241]: I1213 01:58:08.111235    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1-kubelet-dir\") pod \"csi-node-driver-mb2fq\" (UID: \"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1\") " pod="calico-system/csi-node-driver-mb2fq"
Dec 13 01:58:08.111369 kubelet[3241]: I1213 01:58:08.111362    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1-varrun\") pod \"csi-node-driver-mb2fq\" (UID: \"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1\") " pod="calico-system/csi-node-driver-mb2fq"
Dec 13 01:58:08.111518 kubelet[3241]: I1213 01:58:08.111491    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66c4g\" (UniqueName: \"kubernetes.io/projected/d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1-kube-api-access-66c4g\") pod \"csi-node-driver-mb2fq\" (UID: \"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1\") " pod="calico-system/csi-node-driver-mb2fq"
Dec 13 01:58:08.111651 kubelet[3241]: I1213 01:58:08.111616    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1-socket-dir\") pod \"csi-node-driver-mb2fq\" (UID: \"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1\") " pod="calico-system/csi-node-driver-mb2fq"
Dec 13 01:58:08.170903 containerd[1821]: time="2024-12-13T01:58:08.170787975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4bcc57b4-sss9s,Uid:1fba57cf-c153-4318-bdb8-19f2b2d32285,Namespace:calico-system,Attempt:0,}"
Dec 13 01:58:08.181934 containerd[1821]: time="2024-12-13T01:58:08.181706862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:08.181934 containerd[1821]: time="2024-12-13T01:58:08.181926339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:08.182026 containerd[1821]: time="2024-12-13T01:58:08.181937692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:08.182026 containerd[1821]: time="2024-12-13T01:58:08.181977715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:08.187691 containerd[1821]: time="2024-12-13T01:58:08.187639748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jpmxm,Uid:9898e09d-0836-4458-b740-3b468b41d60c,Namespace:calico-system,Attempt:0,}"
Dec 13 01:58:08.197061 containerd[1821]: time="2024-12-13T01:58:08.196951855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:08.197061 containerd[1821]: time="2024-12-13T01:58:08.197022741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:08.197294 containerd[1821]: time="2024-12-13T01:58:08.197246743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:08.197318 containerd[1821]: time="2024-12-13T01:58:08.197295865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:08.199801 systemd[1]: Started cri-containerd-3f892a6052396164d9cee071a3406aedf8583eee5b0e1943929bc2959bcd9f1f.scope - libcontainer container 3f892a6052396164d9cee071a3406aedf8583eee5b0e1943929bc2959bcd9f1f.
Dec 13 01:58:08.202864 systemd[1]: Started cri-containerd-0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe.scope - libcontainer container 0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe.
Dec 13 01:58:08.213294 containerd[1821]: time="2024-12-13T01:58:08.213218541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jpmxm,Uid:9898e09d-0836-4458-b740-3b468b41d60c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe\""
Dec 13 01:58:08.214121 containerd[1821]: time="2024-12-13T01:58:08.214048741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:58:08.223068 containerd[1821]: time="2024-12-13T01:58:08.223051058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4bcc57b4-sss9s,Uid:1fba57cf-c153-4318-bdb8-19f2b2d32285,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f892a6052396164d9cee071a3406aedf8583eee5b0e1943929bc2959bcd9f1f\""
Dec 13 01:58:09.348148 kubelet[3241]: E1213 01:58:09.348052    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb2fq" podUID="d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1"
Dec 13 01:58:09.628925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695445933.mount: Deactivated successfully.
Dec 13 01:58:09.668101 containerd[1821]: time="2024-12-13T01:58:09.668080412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:09.668292 containerd[1821]: time="2024-12-13T01:58:09.668270563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Dec 13 01:58:09.668652 containerd[1821]: time="2024-12-13T01:58:09.668602547Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:09.669508 containerd[1821]: time="2024-12-13T01:58:09.669469914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:09.670459 containerd[1821]: time="2024-12-13T01:58:09.670433829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.456353937s"
Dec 13 01:58:09.670518 containerd[1821]: time="2024-12-13T01:58:09.670461722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
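The pull report above is self-checking: bytes read divided by the quoted duration gives the effective transfer rate for this image. A throwaway calculation with the logged figures:

```python
# Figures from the containerd pull report for pod2daemon-flexvol above.
bytes_read = 6_855_343        # "active requests=0, bytes read=6855343"
duration_s = 1.456353937      # "in 1.456353937s"

throughput = bytes_read / duration_s
print(f"{throughput / 1e6:.2f} MB/s")  # ~4.71 MB/s for this pull
```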
Dec 13 01:58:09.670853 containerd[1821]: time="2024-12-13T01:58:09.670843237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:58:09.671324 containerd[1821]: time="2024-12-13T01:58:09.671312898Z" level=info msg="CreateContainer within sandbox \"0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 13 01:58:09.676200 containerd[1821]: time="2024-12-13T01:58:09.676143387Z" level=info msg="CreateContainer within sandbox \"0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f\""
Dec 13 01:58:09.676421 containerd[1821]: time="2024-12-13T01:58:09.676410658Z" level=info msg="StartContainer for \"0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f\""
Dec 13 01:58:09.701141 systemd[1]: Started cri-containerd-0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f.scope - libcontainer container 0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f.
Dec 13 01:58:09.716433 containerd[1821]: time="2024-12-13T01:58:09.716404599Z" level=info msg="StartContainer for \"0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f\" returns successfully"
Dec 13 01:58:09.723211 systemd[1]: cri-containerd-0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f.scope: Deactivated successfully.
Dec 13 01:58:09.972697 containerd[1821]: time="2024-12-13T01:58:09.972573036Z" level=info msg="shim disconnected" id=0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f namespace=k8s.io
Dec 13 01:58:09.972697 containerd[1821]: time="2024-12-13T01:58:09.972600744Z" level=warning msg="cleaning up after shim disconnected" id=0fdf6602d6961c31783ce461930faaef4071bd7178cb4a170a9ea004001c423f namespace=k8s.io
Dec 13 01:58:09.972697 containerd[1821]: time="2024-12-13T01:58:09.972631727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:58:11.225521 containerd[1821]: time="2024-12-13T01:58:11.225468484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:11.225730 containerd[1821]: time="2024-12-13T01:58:11.225618701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Dec 13 01:58:11.225957 containerd[1821]: time="2024-12-13T01:58:11.225917684Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:11.227108 containerd[1821]: time="2024-12-13T01:58:11.227066539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:11.227373 containerd[1821]: time="2024-12-13T01:58:11.227330539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.556469446s"
Dec 13 01:58:11.227373 containerd[1821]: time="2024-12-13T01:58:11.227344451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Dec 13 01:58:11.227598 containerd[1821]: time="2024-12-13T01:58:11.227588490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 01:58:11.230641 containerd[1821]: time="2024-12-13T01:58:11.230580099Z" level=info msg="CreateContainer within sandbox \"3f892a6052396164d9cee071a3406aedf8583eee5b0e1943929bc2959bcd9f1f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:58:11.235421 containerd[1821]: time="2024-12-13T01:58:11.235400369Z" level=info msg="CreateContainer within sandbox \"3f892a6052396164d9cee071a3406aedf8583eee5b0e1943929bc2959bcd9f1f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8b22acce2e4881464e7069f13e512f37f476819acaf1f95339db548da4cc29d0\""
Dec 13 01:58:11.235686 containerd[1821]: time="2024-12-13T01:58:11.235637317Z" level=info msg="StartContainer for \"8b22acce2e4881464e7069f13e512f37f476819acaf1f95339db548da4cc29d0\""
Dec 13 01:58:11.256935 systemd[1]: Started cri-containerd-8b22acce2e4881464e7069f13e512f37f476819acaf1f95339db548da4cc29d0.scope - libcontainer container 8b22acce2e4881464e7069f13e512f37f476819acaf1f95339db548da4cc29d0.
Dec 13 01:58:11.281999 containerd[1821]: time="2024-12-13T01:58:11.281974751Z" level=info msg="StartContainer for \"8b22acce2e4881464e7069f13e512f37f476819acaf1f95339db548da4cc29d0\" returns successfully"
Dec 13 01:58:11.348035 kubelet[3241]: E1213 01:58:11.347988    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb2fq" podUID="d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1"
Dec 13 01:58:11.430919 kubelet[3241]: I1213 01:58:11.430891    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5b4bcc57b4-sss9s" podStartSLOduration=1.4268839309999999 podStartE2EDuration="4.430840578s" podCreationTimestamp="2024-12-13 01:58:07 +0000 UTC" firstStartedPulling="2024-12-13 01:58:08.223551079 +0000 UTC m=+22.937730723" lastFinishedPulling="2024-12-13 01:58:11.227507726 +0000 UTC m=+25.941687370" observedRunningTime="2024-12-13 01:58:11.430651955 +0000 UTC m=+26.144831606" watchObservedRunningTime="2024-12-13 01:58:11.430840578 +0000 UTC m=+26.145020225"
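The startup-latency entry above decomposes exactly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (firstStartedPulling through lastFinishedPulling) from that, which is why it excludes the ~3 s spent pulling typha. Reproducing both figures from the logged timestamps:

```python
# Timestamps from the pod_startup_latency_tracker entry above,
# reduced to seconds within the 01:58 minute.
created      = 7.0            # podCreationTimestamp 01:58:07
first_pull   = 8.223551079    # firstStartedPulling
last_pull    = 11.227507726   # lastFinishedPulling
observed_run = 11.430840578   # watchObservedRunningTime

e2e = observed_run - created
slo = e2e - (last_pull - first_pull)
print(f"E2E: {e2e:.9f}s")  # 4.430840578s, matching podStartE2EDuration
print(f"SLO: {slo:.9f}s")  # ~1.426883931s, matching podStartSLOduration
```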
Dec 13 01:58:13.347599 kubelet[3241]: E1213 01:58:13.347575    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mb2fq" podUID="d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1"
Dec 13 01:58:13.462703 containerd[1821]: time="2024-12-13T01:58:13.462659024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:13.462912 containerd[1821]: time="2024-12-13T01:58:13.462811104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Dec 13 01:58:13.463238 containerd[1821]: time="2024-12-13T01:58:13.463185253Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:13.464242 containerd[1821]: time="2024-12-13T01:58:13.464195404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:13.464679 containerd[1821]: time="2024-12-13T01:58:13.464649523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.237046413s"
Dec 13 01:58:13.464679 containerd[1821]: time="2024-12-13T01:58:13.464664020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Dec 13 01:58:13.465571 containerd[1821]: time="2024-12-13T01:58:13.465530693Z" level=info msg="CreateContainer within sandbox \"0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:58:13.470196 containerd[1821]: time="2024-12-13T01:58:13.470154363Z" level=info msg="CreateContainer within sandbox \"0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088\""
Dec 13 01:58:13.470377 containerd[1821]: time="2024-12-13T01:58:13.470362957Z" level=info msg="StartContainer for \"68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088\""
Dec 13 01:58:13.500798 systemd[1]: Started cri-containerd-68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088.scope - libcontainer container 68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088.
Dec 13 01:58:13.514459 containerd[1821]: time="2024-12-13T01:58:13.514434047Z" level=info msg="StartContainer for \"68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088\" returns successfully"
Dec 13 01:58:14.086504 systemd[1]: cri-containerd-68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088.scope: Deactivated successfully.
Dec 13 01:58:14.097946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088-rootfs.mount: Deactivated successfully.
Dec 13 01:58:14.119902 kubelet[3241]: I1213 01:58:14.119852    3241 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
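"Fast updating node status as it just became ready" is kubelet pushing the Ready condition immediately, rather than waiting for the next periodic sync, once install-cni has dropped a CNI config and the runtime stops reporting NetworkReady=false; the topology-manager admissions that follow are the pods that had been waiting on that condition. A toy check of the condition shape (field names follow the v1 Node API; the literal values here are illustrative):

```python
# Minimal Node-status fragment, shaped like the v1 API (values illustrative).
node = {"status": {"conditions": [
    {"type": "Ready", "status": "True", "reason": "KubeletReady"},
]}}

def is_ready(node) -> bool:
    """True when the node carries a Ready=True condition."""
    return any(c["type"] == "Ready" and c["status"] == "True"
               for c in node["status"]["conditions"])

print(is_ready(node))  # True
```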
Dec 13 01:58:14.139416 kubelet[3241]: I1213 01:58:14.138888    3241 topology_manager.go:215] "Topology Admit Handler" podUID="7ed7e9e0-19d8-475e-a8ae-40451cd7fa24" podNamespace="kube-system" podName="coredns-76f75df574-z6dmq"
Dec 13 01:58:14.139658 kubelet[3241]: I1213 01:58:14.139496    3241 topology_manager.go:215] "Topology Admit Handler" podUID="d92b1d5f-d865-4f0e-9a3d-e2c1434149e2" podNamespace="kube-system" podName="coredns-76f75df574-xqxt6"
Dec 13 01:58:14.140103 kubelet[3241]: I1213 01:58:14.140084    3241 topology_manager.go:215] "Topology Admit Handler" podUID="7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7" podNamespace="calico-system" podName="calico-kube-controllers-6bb7d9fc96-j42rn"
Dec 13 01:58:14.140559 kubelet[3241]: I1213 01:58:14.140536    3241 topology_manager.go:215] "Topology Admit Handler" podUID="2554f98a-0bc4-4c51-ab0a-a4257428bec4" podNamespace="calico-apiserver" podName="calico-apiserver-67488db8c5-872x9"
Dec 13 01:58:14.141196 kubelet[3241]: I1213 01:58:14.141161    3241 topology_manager.go:215] "Topology Admit Handler" podUID="a97e1d12-a7fb-4125-b6e6-7d31835664c3" podNamespace="calico-apiserver" podName="calico-apiserver-67488db8c5-fjmw4"
Dec 13 01:58:14.148887 systemd[1]: Created slice kubepods-burstable-pod7ed7e9e0_19d8_475e_a8ae_40451cd7fa24.slice - libcontainer container kubepods-burstable-pod7ed7e9e0_19d8_475e_a8ae_40451cd7fa24.slice.
Dec 13 01:58:14.156004 kubelet[3241]: I1213 01:58:14.155960    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7-tigera-ca-bundle\") pod \"calico-kube-controllers-6bb7d9fc96-j42rn\" (UID: \"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7\") " pod="calico-system/calico-kube-controllers-6bb7d9fc96-j42rn"
Dec 13 01:58:14.156187 kubelet[3241]: I1213 01:58:14.156038    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr288\" (UniqueName: \"kubernetes.io/projected/a97e1d12-a7fb-4125-b6e6-7d31835664c3-kube-api-access-vr288\") pod \"calico-apiserver-67488db8c5-fjmw4\" (UID: \"a97e1d12-a7fb-4125-b6e6-7d31835664c3\") " pod="calico-apiserver/calico-apiserver-67488db8c5-fjmw4"
Dec 13 01:58:14.156187 kubelet[3241]: I1213 01:58:14.156102    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ed7e9e0-19d8-475e-a8ae-40451cd7fa24-config-volume\") pod \"coredns-76f75df574-z6dmq\" (UID: \"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24\") " pod="kube-system/coredns-76f75df574-z6dmq"
Dec 13 01:58:14.156399 kubelet[3241]: I1213 01:58:14.156185    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dxm8\" (UniqueName: \"kubernetes.io/projected/2554f98a-0bc4-4c51-ab0a-a4257428bec4-kube-api-access-4dxm8\") pod \"calico-apiserver-67488db8c5-872x9\" (UID: \"2554f98a-0bc4-4c51-ab0a-a4257428bec4\") " pod="calico-apiserver/calico-apiserver-67488db8c5-872x9"
Dec 13 01:58:14.156399 kubelet[3241]: I1213 01:58:14.156333    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2554f98a-0bc4-4c51-ab0a-a4257428bec4-calico-apiserver-certs\") pod \"calico-apiserver-67488db8c5-872x9\" (UID: \"2554f98a-0bc4-4c51-ab0a-a4257428bec4\") " pod="calico-apiserver/calico-apiserver-67488db8c5-872x9"
Dec 13 01:58:14.156544 kubelet[3241]: I1213 01:58:14.156408    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htkjr\" (UniqueName: \"kubernetes.io/projected/7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7-kube-api-access-htkjr\") pod \"calico-kube-controllers-6bb7d9fc96-j42rn\" (UID: \"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7\") " pod="calico-system/calico-kube-controllers-6bb7d9fc96-j42rn"
Dec 13 01:58:14.156544 kubelet[3241]: I1213 01:58:14.156478    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a97e1d12-a7fb-4125-b6e6-7d31835664c3-calico-apiserver-certs\") pod \"calico-apiserver-67488db8c5-fjmw4\" (UID: \"a97e1d12-a7fb-4125-b6e6-7d31835664c3\") " pod="calico-apiserver/calico-apiserver-67488db8c5-fjmw4"
Dec 13 01:58:14.156688 kubelet[3241]: I1213 01:58:14.156545    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66lml\" (UniqueName: \"kubernetes.io/projected/7ed7e9e0-19d8-475e-a8ae-40451cd7fa24-kube-api-access-66lml\") pod \"coredns-76f75df574-z6dmq\" (UID: \"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24\") " pod="kube-system/coredns-76f75df574-z6dmq"
Dec 13 01:58:14.156688 kubelet[3241]: I1213 01:58:14.156677    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d92b1d5f-d865-4f0e-9a3d-e2c1434149e2-config-volume\") pod \"coredns-76f75df574-xqxt6\" (UID: \"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2\") " pod="kube-system/coredns-76f75df574-xqxt6"
Dec 13 01:58:14.156605 systemd[1]: Created slice kubepods-burstable-podd92b1d5f_d865_4f0e_9a3d_e2c1434149e2.slice - libcontainer container kubepods-burstable-podd92b1d5f_d865_4f0e_9a3d_e2c1434149e2.slice.
Dec 13 01:58:14.156926 kubelet[3241]: I1213 01:58:14.156739    3241 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbmd\" (UniqueName: \"kubernetes.io/projected/d92b1d5f-d865-4f0e-9a3d-e2c1434149e2-kube-api-access-qqbmd\") pod \"coredns-76f75df574-xqxt6\" (UID: \"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2\") " pod="kube-system/coredns-76f75df574-xqxt6"
Dec 13 01:58:14.163581 systemd[1]: Created slice kubepods-besteffort-pod7fad96fe_4b31_4e48_ab5d_5f0fa08ebbe7.slice - libcontainer container kubepods-besteffort-pod7fad96fe_4b31_4e48_ab5d_5f0fa08ebbe7.slice.
Dec 13 01:58:14.172333 systemd[1]: Created slice kubepods-besteffort-pod2554f98a_0bc4_4c51_ab0a_a4257428bec4.slice - libcontainer container kubepods-besteffort-pod2554f98a_0bc4_4c51_ab0a_a4257428bec4.slice.
Dec 13 01:58:14.178978 systemd[1]: Created slice kubepods-besteffort-poda97e1d12_a7fb_4125_b6e6_7d31835664c3.slice - libcontainer container kubepods-besteffort-poda97e1d12_a7fb_4125_b6e6_7d31835664c3.slice.
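Each admitted pod gets its own systemd slice under the kubepods hierarchy, named from its QoS class plus the pod UID with dashes converted to underscores (systemd unit escaping). A sketch that reproduces the slice names in the lines above; the guaranteed-class case is an assumption from the standard kubelet cgroup layout and does not appear in this log:

```python
def pod_slice(uid: str, qos: str) -> str:
    """Reconstruct the kubelet systemd-cgroup-driver slice name for a pod."""
    escaped = uid.replace("-", "_")  # systemd unit escaping of the pod UID
    # Burstable and besteffort pods get a QoS sub-slice; guaranteed pods
    # sit directly under kubepods.slice (assumption, not shown in this log).
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{escaped}.slice"

print(pod_slice("7ed7e9e0-19d8-475e-a8ae-40451cd7fa24", "burstable"))
# kubepods-burstable-pod7ed7e9e0_19d8_475e_a8ae_40451cd7fa24.slice
print(pod_slice("7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7", "besteffort"))
# kubepods-besteffort-pod7fad96fe_4b31_4e48_ab5d_5f0fa08ebbe7.slice
```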
Dec 13 01:58:14.453826 containerd[1821]: time="2024-12-13T01:58:14.453687184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6dmq,Uid:7ed7e9e0-19d8-475e-a8ae-40451cd7fa24,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:14.461817 containerd[1821]: time="2024-12-13T01:58:14.461741966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xqxt6,Uid:d92b1d5f-d865-4f0e-9a3d-e2c1434149e2,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:14.468875 containerd[1821]: time="2024-12-13T01:58:14.468760486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d9fc96-j42rn,Uid:7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7,Namespace:calico-system,Attempt:0,}"
Dec 13 01:58:14.476529 containerd[1821]: time="2024-12-13T01:58:14.476484772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-872x9,Uid:2554f98a-0bc4-4c51-ab0a-a4257428bec4,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:58:14.481980 containerd[1821]: time="2024-12-13T01:58:14.481954711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-fjmw4,Uid:a97e1d12-a7fb-4125-b6e6-7d31835664c3,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:58:14.763035 containerd[1821]: time="2024-12-13T01:58:14.762854218Z" level=info msg="shim disconnected" id=68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088 namespace=k8s.io
Dec 13 01:58:14.763035 containerd[1821]: time="2024-12-13T01:58:14.762902889Z" level=warning msg="cleaning up after shim disconnected" id=68a81895b2746e2f99f7f78d8e14d6c700bafc26c74735a0042997c01b076088 namespace=k8s.io
Dec 13 01:58:14.763035 containerd[1821]: time="2024-12-13T01:58:14.762924209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:58:14.801669 containerd[1821]: time="2024-12-13T01:58:14.801636183Z" level=error msg="Failed to destroy network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.801877 containerd[1821]: time="2024-12-13T01:58:14.801863121Z" level=error msg="encountered an error cleaning up failed sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.801915 containerd[1821]: time="2024-12-13T01:58:14.801895454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6dmq,Uid:7ed7e9e0-19d8-475e-a8ae-40451cd7fa24,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.802069 kubelet[3241]: E1213 01:58:14.802055    3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.802315 kubelet[3241]: E1213 01:58:14.802104    3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z6dmq"
Dec 13 01:58:14.802315 kubelet[3241]: E1213 01:58:14.802127    3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z6dmq"
Dec 13 01:58:14.802315 kubelet[3241]: E1213 01:58:14.802176    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z6dmq_kube-system(7ed7e9e0-19d8-475e-a8ae-40451cd7fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z6dmq_kube-system(7ed7e9e0-19d8-475e-a8ae-40451cd7fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z6dmq" podUID="7ed7e9e0-19d8-475e-a8ae-40451cd7fa24"
Dec 13 01:58:14.810140 containerd[1821]: time="2024-12-13T01:58:14.810079815Z" level=error msg="Failed to destroy network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.810305 containerd[1821]: time="2024-12-13T01:58:14.810291243Z" level=error msg="encountered an error cleaning up failed sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.810357 containerd[1821]: time="2024-12-13T01:58:14.810328231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xqxt6,Uid:d92b1d5f-d865-4f0e-9a3d-e2c1434149e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.810467 kubelet[3241]: E1213 01:58:14.810453    3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.810498 kubelet[3241]: E1213 01:58:14.810489    3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xqxt6"
Dec 13 01:58:14.810522 kubelet[3241]: E1213 01:58:14.810504    3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xqxt6"
Dec 13 01:58:14.810550 kubelet[3241]: E1213 01:58:14.810541    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xqxt6_kube-system(d92b1d5f-d865-4f0e-9a3d-e2c1434149e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xqxt6_kube-system(d92b1d5f-d865-4f0e-9a3d-e2c1434149e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xqxt6" podUID="d92b1d5f-d865-4f0e-9a3d-e2c1434149e2"
Dec 13 01:58:14.810915 containerd[1821]: time="2024-12-13T01:58:14.810896335Z" level=error msg="Failed to destroy network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.810966 containerd[1821]: time="2024-12-13T01:58:14.810949945Z" level=error msg="Failed to destroy network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811069 containerd[1821]: time="2024-12-13T01:58:14.811055866Z" level=error msg="encountered an error cleaning up failed sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811094 containerd[1821]: time="2024-12-13T01:58:14.811079836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d9fc96-j42rn,Uid:7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811134 containerd[1821]: time="2024-12-13T01:58:14.811104185Z" level=error msg="encountered an error cleaning up failed sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811134 containerd[1821]: time="2024-12-13T01:58:14.811126371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-872x9,Uid:2554f98a-0bc4-4c51-ab0a-a4257428bec4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811218 containerd[1821]: time="2024-12-13T01:58:14.811159421Z" level=error msg="Failed to destroy network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811255 kubelet[3241]: E1213 01:58:14.811166    3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811255 kubelet[3241]: E1213 01:58:14.811195    3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb7d9fc96-j42rn"
Dec 13 01:58:14.811255 kubelet[3241]: E1213 01:58:14.811199    3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811255 kubelet[3241]: E1213 01:58:14.811218    3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb7d9fc96-j42rn"
Dec 13 01:58:14.811365 kubelet[3241]: E1213 01:58:14.811224    3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67488db8c5-872x9"
Dec 13 01:58:14.811365 kubelet[3241]: E1213 01:58:14.811238    3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67488db8c5-872x9"
Dec 13 01:58:14.811365 kubelet[3241]: E1213 01:58:14.811255    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb7d9fc96-j42rn_calico-system(7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb7d9fc96-j42rn_calico-system(7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb7d9fc96-j42rn" podUID="7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7"
Dec 13 01:58:14.811436 containerd[1821]: time="2024-12-13T01:58:14.811292625Z" level=error msg="encountered an error cleaning up failed sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811436 containerd[1821]: time="2024-12-13T01:58:14.811314326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-fjmw4,Uid:a97e1d12-a7fb-4125-b6e6-7d31835664c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811484 kubelet[3241]: E1213 01:58:14.811267    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67488db8c5-872x9_calico-apiserver(2554f98a-0bc4-4c51-ab0a-a4257428bec4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67488db8c5-872x9_calico-apiserver(2554f98a-0bc4-4c51-ab0a-a4257428bec4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67488db8c5-872x9" podUID="2554f98a-0bc4-4c51-ab0a-a4257428bec4"
Dec 13 01:58:14.811484 kubelet[3241]: E1213 01:58:14.811383    3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:14.811484 kubelet[3241]: E1213 01:58:14.811402    3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67488db8c5-fjmw4"
Dec 13 01:58:14.811551 kubelet[3241]: E1213 01:58:14.811414    3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67488db8c5-fjmw4"
Dec 13 01:58:14.811551 kubelet[3241]: E1213 01:58:14.811436    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67488db8c5-fjmw4_calico-apiserver(a97e1d12-a7fb-4125-b6e6-7d31835664c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67488db8c5-fjmw4_calico-apiserver(a97e1d12-a7fb-4125-b6e6-7d31835664c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67488db8c5-fjmw4" podUID="a97e1d12-a7fb-4125-b6e6-7d31835664c3"
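Every sandbox failure above shares one root cause: the Calico CNI plugin reads /var/lib/calico/nodename, a file the calico/node container writes once it is running, and until that file exists both the add and delete operations return the stat error, so every pod that needs a network stays in CreatePodSandboxError. The gate reduces to a file-existence check; a minimal reproduction of the failing lookup:

```python
import os

NODENAME = "/var/lib/calico/nodename"  # written by calico/node at startup

def node_name() -> str:
    """Mimic the Calico CNI plugin's node-name lookup that fails above."""
    if not os.path.exists(NODENAME):
        raise FileNotFoundError(
            f"stat {NODENAME}: no such file or directory: check that the "
            "calico/node container is running and has mounted /var/lib/calico/")
    with open(NODENAME) as f:
        return f.read().strip()
```

Once the calico/node image finishes pulling (the "PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" request below) and its container starts, the file appears and the retried sandboxes can succeed.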
Dec 13 01:58:15.363872 systemd[1]: Created slice kubepods-besteffort-podd3ce2b11_2f1c_4ce7_9b72_15c8cfc358a1.slice - libcontainer container kubepods-besteffort-podd3ce2b11_2f1c_4ce7_9b72_15c8cfc358a1.slice.
Dec 13 01:58:15.369421 containerd[1821]: time="2024-12-13T01:58:15.369296092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb2fq,Uid:d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1,Namespace:calico-system,Attempt:0,}"
Dec 13 01:58:15.400105 containerd[1821]: time="2024-12-13T01:58:15.400077634Z" level=error msg="Failed to destroy network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.400310 containerd[1821]: time="2024-12-13T01:58:15.400294041Z" level=error msg="encountered an error cleaning up failed sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.400351 containerd[1821]: time="2024-12-13T01:58:15.400335021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb2fq,Uid:d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.400506 kubelet[3241]: E1213 01:58:15.400495    3241 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.400546 kubelet[3241]: E1213 01:58:15.400526    3241 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mb2fq"
Dec 13 01:58:15.400546 kubelet[3241]: E1213 01:58:15.400543    3241 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mb2fq"
Dec 13 01:58:15.400583 kubelet[3241]: E1213 01:58:15.400575    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mb2fq_calico-system(d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mb2fq_calico-system(d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mb2fq" podUID="d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1"
Dec 13 01:58:15.433303 kubelet[3241]: I1213 01:58:15.433288    3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:15.433600 containerd[1821]: time="2024-12-13T01:58:15.433583148Z" level=info msg="StopPodSandbox for \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\""
Dec 13 01:58:15.433724 containerd[1821]: time="2024-12-13T01:58:15.433681025Z" level=info msg="Ensure that sandbox cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662 in task-service has been cleanup successfully"
Dec 13 01:58:15.433765 kubelet[3241]: I1213 01:58:15.433719    3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:15.433953 containerd[1821]: time="2024-12-13T01:58:15.433938451Z" level=info msg="StopPodSandbox for \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\""
Dec 13 01:58:15.434051 containerd[1821]: time="2024-12-13T01:58:15.434037299Z" level=info msg="Ensure that sandbox d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128 in task-service has been cleanup successfully"
Dec 13 01:58:15.435092 kubelet[3241]: I1213 01:58:15.435080    3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:15.435158 containerd[1821]: time="2024-12-13T01:58:15.435115040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 01:58:15.435323 containerd[1821]: time="2024-12-13T01:58:15.435308731Z" level=info msg="StopPodSandbox for \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\""
Dec 13 01:58:15.435434 containerd[1821]: time="2024-12-13T01:58:15.435421903Z" level=info msg="Ensure that sandbox 9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac in task-service has been cleanup successfully"
Dec 13 01:58:15.435679 kubelet[3241]: I1213 01:58:15.435667    3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:15.435992 containerd[1821]: time="2024-12-13T01:58:15.435937382Z" level=info msg="StopPodSandbox for \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\""
Dec 13 01:58:15.436067 containerd[1821]: time="2024-12-13T01:58:15.436054197Z" level=info msg="Ensure that sandbox a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610 in task-service has been cleanup successfully"
Dec 13 01:58:15.436315 kubelet[3241]: I1213 01:58:15.436302    3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:15.436693 containerd[1821]: time="2024-12-13T01:58:15.436669110Z" level=info msg="StopPodSandbox for \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\""
Dec 13 01:58:15.436850 containerd[1821]: time="2024-12-13T01:58:15.436839496Z" level=info msg="Ensure that sandbox 265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db in task-service has been cleanup successfully"
Dec 13 01:58:15.436907 kubelet[3241]: I1213 01:58:15.436896    3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:15.437193 containerd[1821]: time="2024-12-13T01:58:15.437178004Z" level=info msg="StopPodSandbox for \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\""
Dec 13 01:58:15.437305 containerd[1821]: time="2024-12-13T01:58:15.437296175Z" level=info msg="Ensure that sandbox 58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6 in task-service has been cleanup successfully"
Dec 13 01:58:15.451423 containerd[1821]: time="2024-12-13T01:58:15.451377055Z" level=error msg="StopPodSandbox for \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\" failed" error="failed to destroy network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.451649 kubelet[3241]: E1213 01:58:15.451631    3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:15.451718 kubelet[3241]: E1213 01:58:15.451707    3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"}
Dec 13 01:58:15.451760 kubelet[3241]: E1213 01:58:15.451751    3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a97e1d12-a7fb-4125-b6e6-7d31835664c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:58:15.451820 kubelet[3241]: E1213 01:58:15.451782    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a97e1d12-a7fb-4125-b6e6-7d31835664c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67488db8c5-fjmw4" podUID="a97e1d12-a7fb-4125-b6e6-7d31835664c3"
Dec 13 01:58:15.452305 containerd[1821]: time="2024-12-13T01:58:15.452281276Z" level=error msg="StopPodSandbox for \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\" failed" error="failed to destroy network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.452415 containerd[1821]: time="2024-12-13T01:58:15.452293346Z" level=error msg="StopPodSandbox for \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\" failed" error="failed to destroy network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.452445 kubelet[3241]: E1213 01:58:15.452381    3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:15.452445 kubelet[3241]: E1213 01:58:15.452415    3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:15.452445 kubelet[3241]: E1213 01:58:15.452419    3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"}
Dec 13 01:58:15.452445 kubelet[3241]: E1213 01:58:15.452428    3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"}
Dec 13 01:58:15.452445 kubelet[3241]: E1213 01:58:15.452438    3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:58:15.452581 kubelet[3241]: E1213 01:58:15.452447    3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:58:15.452581 kubelet[3241]: E1213 01:58:15.452457    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z6dmq" podUID="7ed7e9e0-19d8-475e-a8ae-40451cd7fa24"
Dec 13 01:58:15.452581 kubelet[3241]: E1213 01:58:15.452465    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xqxt6" podUID="d92b1d5f-d865-4f0e-9a3d-e2c1434149e2"
Dec 13 01:58:15.452684 containerd[1821]: time="2024-12-13T01:58:15.452477088Z" level=error msg="StopPodSandbox for \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\" failed" error="failed to destroy network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.452707 kubelet[3241]: E1213 01:58:15.452564    3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:15.452707 kubelet[3241]: E1213 01:58:15.452574    3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"}
Dec 13 01:58:15.452707 kubelet[3241]: E1213 01:58:15.452589    3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2554f98a-0bc4-4c51-ab0a-a4257428bec4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:58:15.452707 kubelet[3241]: E1213 01:58:15.452603    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2554f98a-0bc4-4c51-ab0a-a4257428bec4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67488db8c5-872x9" podUID="2554f98a-0bc4-4c51-ab0a-a4257428bec4"
Dec 13 01:58:15.452808 containerd[1821]: time="2024-12-13T01:58:15.452745682Z" level=error msg="StopPodSandbox for \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\" failed" error="failed to destroy network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.452832 kubelet[3241]: E1213 01:58:15.452811    3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:15.452832 kubelet[3241]: E1213 01:58:15.452820    3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"}
Dec 13 01:58:15.452866 kubelet[3241]: E1213 01:58:15.452838    3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:58:15.452866 kubelet[3241]: E1213 01:58:15.452851    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mb2fq" podUID="d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1"
Dec 13 01:58:15.453578 containerd[1821]: time="2024-12-13T01:58:15.453563233Z" level=error msg="StopPodSandbox for \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\" failed" error="failed to destroy network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:58:15.453648 kubelet[3241]: E1213 01:58:15.453638    3241 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:15.453692 kubelet[3241]: E1213 01:58:15.453657    3241 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"}
Dec 13 01:58:15.453692 kubelet[3241]: E1213 01:58:15.453686    3241 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:58:15.453760 kubelet[3241]: E1213 01:58:15.453709    3241 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb7d9fc96-j42rn" podUID="7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7"
Dec 13 01:58:15.476941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128-shm.mount: Deactivated successfully.
Dec 13 01:58:15.477200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db-shm.mount: Deactivated successfully.
Dec 13 01:58:15.477397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6-shm.mount: Deactivated successfully.
Dec 13 01:58:15.477585 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662-shm.mount: Deactivated successfully.
Dec 13 01:58:15.477800 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610-shm.mount: Deactivated successfully.
Dec 13 01:58:18.483869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904437099.mount: Deactivated successfully.
Dec 13 01:58:18.507618 containerd[1821]: time="2024-12-13T01:58:18.507568711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:18.507792 containerd[1821]: time="2024-12-13T01:58:18.507759997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 01:58:18.508093 containerd[1821]: time="2024-12-13T01:58:18.508080805Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:18.509096 containerd[1821]: time="2024-12-13T01:58:18.509083320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:18.509448 containerd[1821]: time="2024-12-13T01:58:18.509435089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.074299817s"
Dec 13 01:58:18.509489 containerd[1821]: time="2024-12-13T01:58:18.509449557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 01:58:18.512866 containerd[1821]: time="2024-12-13T01:58:18.512816506Z" level=info msg="CreateContainer within sandbox \"0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 01:58:18.518395 containerd[1821]: time="2024-12-13T01:58:18.518351990Z" level=info msg="CreateContainer within sandbox \"0dd59f0e9421fc55ea15a8544af793bf85844936bc01e33ed51f0085ea543efe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7cb2d82bb938ac6f57458b8b530923bc59defa2372cc02548048f19174d301d4\""
Dec 13 01:58:18.518583 containerd[1821]: time="2024-12-13T01:58:18.518569394Z" level=info msg="StartContainer for \"7cb2d82bb938ac6f57458b8b530923bc59defa2372cc02548048f19174d301d4\""
Dec 13 01:58:18.537902 systemd[1]: Started cri-containerd-7cb2d82bb938ac6f57458b8b530923bc59defa2372cc02548048f19174d301d4.scope - libcontainer container 7cb2d82bb938ac6f57458b8b530923bc59defa2372cc02548048f19174d301d4.
Dec 13 01:58:18.552416 containerd[1821]: time="2024-12-13T01:58:18.552387969Z" level=info msg="StartContainer for \"7cb2d82bb938ac6f57458b8b530923bc59defa2372cc02548048f19174d301d4\" returns successfully"
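The Pulled → CreateContainer → StartContainer sequence above is kubelet driving containerd through the CRI gRPC API, with systemd tracking the resulting libcontainer process in a cri-containerd-….scope unit. A minimal read-only sketch of talking to that same API, assuming the conventional socket path /run/containerd/containerd.sock and the k8s.io/cri-api module (this is roughly what crictl ps does):

// Read-only CRI sketch: connect to containerd's CRI socket and list containers.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Container IDs here match the hex IDs in the log lines above.
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}
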
Dec 13 01:58:18.610571 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 01:58:18.610630 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
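Loading the wireguard module here only makes the kernel capability available; Calico encrypts pod traffic with it only when WireGuard is enabled in its Felix configuration. A trivial availability probe via sysfs (a hypothetical helper, not Calico code):

// A loaded module exposes /sys/module/<name>; stat'ing it is a cheap probe.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/module/wireguard"); err == nil {
		fmt.Println("wireguard module loaded")
	} else {
		fmt.Println("wireguard module not loaded:", err)
	}
}
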
Dec 13 01:58:19.467517 kubelet[3241]: I1213 01:58:19.467497    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-jpmxm" podStartSLOduration=2.171749914 podStartE2EDuration="12.467467356s" podCreationTimestamp="2024-12-13 01:58:07 +0000 UTC" firstStartedPulling="2024-12-13 01:58:08.213881754 +0000 UTC m=+22.928061406" lastFinishedPulling="2024-12-13 01:58:18.509599204 +0000 UTC m=+33.223778848" observedRunningTime="2024-12-13 01:58:19.467270888 +0000 UTC m=+34.181450533" watchObservedRunningTime="2024-12-13 01:58:19.467467356 +0000 UTC m=+34.181647002"
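The tracker line above decomposes cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (01:58:19.467467356 − 01:58:07 ≈ 12.467s), and podStartSLOduration subtracts the image-pull window (18.509599204 − 08.213881754 ≈ 10.296s) from that, leaving ≈ 2.172s; the calico-kube-controllers line at 01:58:28 below follows the same formula. A small sketch reproducing the arithmetic from the logged timestamps (not kubelet code, just the subtraction):

// Reproduce the pod_startup_latency_tracker arithmetic from the line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2024-12-13 01:58:07 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2024-12-13 01:58:08.213881754 +0000 UTC") // firstStartedPulling
	lastPull := parse("2024-12-13 01:58:18.509599204 +0000 UTC")  // lastFinishedPulling
	running := parse("2024-12-13 01:58:19.467467356 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)          // ≈ 12.467467356s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // ≈ 2.17174991s   (podStartSLOduration)
	fmt.Println(e2e, slo)
}
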
Dec 13 01:58:19.908687 kernel: bpftool[4888]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Dec 13 01:58:20.057806 systemd-networkd[1608]: vxlan.calico: Link UP
Dec 13 01:58:20.057809 systemd-networkd[1608]: vxlan.calico: Gained carrier
Dec 13 01:58:21.464904 systemd-networkd[1608]: vxlan.calico: Gained IPv6LL
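Calico's Felix agent creates the vxlan.calico device as the node's VXLAN overlay endpoint; systemd-networkd then observes carrier ("Gained carrier") and, once IPv6 duplicate address detection finishes on the link-local address, "Gained IPv6LL". A sketch that inspects the same device with only the standard library, assuming it runs on the node itself:

// Inspect the vxlan.calico device the events above refer to.
package main

import (
	"fmt"
	"net"
)

func main() {
	ifi, err := net.InterfaceByName("vxlan.calico")
	if err != nil {
		fmt.Println("device not present:", err)
		return
	}
	fmt.Printf("flags=%v mtu=%d\n", ifi.Flags, ifi.MTU)
	addrs, _ := ifi.Addrs()
	for _, a := range addrs {
		// An fe80::/10 entry here corresponds to "Gained IPv6LL".
		fmt.Println("addr:", a)
	}
}
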
Dec 13 01:58:26.348860 containerd[1821]: time="2024-12-13T01:58:26.348744256Z" level=info msg="StopPodSandbox for \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\""
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.404 [INFO][5043] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.404 [INFO][5043] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" iface="eth0" netns="/var/run/netns/cni-a3fe5f40-9dc8-7d55-2e25-6b803fd468f5"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.405 [INFO][5043] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" iface="eth0" netns="/var/run/netns/cni-a3fe5f40-9dc8-7d55-2e25-6b803fd468f5"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.405 [INFO][5043] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" iface="eth0" netns="/var/run/netns/cni-a3fe5f40-9dc8-7d55-2e25-6b803fd468f5"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.405 [INFO][5043] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.405 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.449 [INFO][5056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.450 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.450 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.455 [WARNING][5056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.455 [INFO][5056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.457 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:26.460489 containerd[1821]: 2024-12-13 01:58:26.459 [INFO][5043] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:26.461201 containerd[1821]: time="2024-12-13T01:58:26.460561502Z" level=info msg="TearDown network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\" successfully"
Dec 13 01:58:26.461201 containerd[1821]: time="2024-12-13T01:58:26.460586312Z" level=info msg="StopPodSandbox for \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\" returns successfully"
Dec 13 01:58:26.461201 containerd[1821]: time="2024-12-13T01:58:26.461125753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d9fc96-j42rn,Uid:7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7,Namespace:calico-system,Attempt:1,}"
Dec 13 01:58:26.462385 systemd[1]: run-netns-cni\x2da3fe5f40\x2d9dc8\x2d7d55\x2d2e25\x2d6b803fd468f5.mount: Deactivated successfully.
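The run-netns-cni\x2d… unit name above is systemd's escaping of the netns path: "/" becomes "-" in a mount unit name, so a literal "-" in the path must be hex-escaped as \x2d (and /var/run is a symlink to /run, which is why the unit starts with run-netns-). A minimal sketch of that rule for the characters seen here; the real systemd-escape also handles leading dots and other edge cases:

// Minimal systemd path-escaping: "/" -> "-", unsafe bytes -> \xNN.
package main

import (
	"fmt"
	"strings"
)

func systemdEscape(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for _, c := range []byte(path) {
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(systemdEscape("/run/netns/cni-a3fe5f40-9dc8-7d55-2e25-6b803fd468f5"))
	// run-netns-cni\x2da3fe5f40\x2d9dc8\x2d7d55\x2d2e25\x2d6b803fd468f5
}
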
Dec 13 01:58:26.517243 systemd-networkd[1608]: cali6bcd2f33a39: Link UP
Dec 13 01:58:26.517434 systemd-networkd[1608]: cali6bcd2f33a39: Gained carrier
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.481 [INFO][5072] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0 calico-kube-controllers-6bb7d9fc96- calico-system  7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7 764 0 2024-12-13 01:58:08 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bb7d9fc96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  ci-4081.2.1-a-5a9deb00aa  calico-kube-controllers-6bb7d9fc96-j42rn eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6bcd2f33a39  [] []}} ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.481 [INFO][5072] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.495 [INFO][5091] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" HandleID="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.500 [INFO][5091] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" HandleID="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f8040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-5a9deb00aa", "pod":"calico-kube-controllers-6bb7d9fc96-j42rn", "timestamp":"2024-12-13 01:58:26.495241501 +0000 UTC"}, Hostname:"ci-4081.2.1-a-5a9deb00aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.500 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.500 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.500 [INFO][5091] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-5a9deb00aa'
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.501 [INFO][5091] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.503 [INFO][5091] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.506 [INFO][5091] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.508 [INFO][5091] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.509 [INFO][5091] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.509 [INFO][5091] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.510 [INFO][5091] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.512 [INFO][5091] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.514 [INFO][5091] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.65/26] block=192.168.42.64/26 handle="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.514 [INFO][5091] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.65/26] handle="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.514 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:26.523671 containerd[1821]: 2024-12-13 01:58:26.514 [INFO][5091] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.65/26] IPv6=[] ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" HandleID="k8s-pod-network.3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.524385 containerd[1821]: 2024-12-13 01:58:26.516 [INFO][5072] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0", GenerateName:"calico-kube-controllers-6bb7d9fc96-", Namespace:"calico-system", SelfLink:"", UID:"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d9fc96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"", Pod:"calico-kube-controllers-6bb7d9fc96-j42rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bcd2f33a39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:26.524385 containerd[1821]: 2024-12-13 01:58:26.516 [INFO][5072] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.65/32] ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.524385 containerd[1821]: 2024-12-13 01:58:26.516 [INFO][5072] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bcd2f33a39 ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.524385 containerd[1821]: 2024-12-13 01:58:26.517 [INFO][5072] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:26.524385 containerd[1821]: 2024-12-13 01:58:26.517 [INFO][5072] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0", GenerateName:"calico-kube-controllers-6bb7d9fc96-", Namespace:"calico-system", SelfLink:"", UID:"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d9fc96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839", Pod:"calico-kube-controllers-6bb7d9fc96-j42rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bcd2f33a39", MAC:"46:91:78:64:bf:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:26.524385 containerd[1821]: 2024-12-13 01:58:26.522 [INFO][5072] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839" Namespace="calico-system" Pod="calico-kube-controllers-6bb7d9fc96-j42rn" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
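The ipam lines above trace Calico's block-affinity allocation path: acquire the host-wide IPAM lock, confirm this node's affinity for the block 192.168.42.64/26, claim the first free address in it (192.168.42.65), then write a handle and the updated block back to the datastore. A toy sketch of just the selection step, assuming the block's network address is already reserved; real Calico IPAM also manages affinities, handles, and datastore writes:

// Toy model of "assign 1 address from block 192.168.42.64/26":
// scan the block for the first address not already allocated.
package main

import (
	"fmt"
	"net/netip"
)

func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.42.64/26")
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.42.64"): true, // assume network address reserved
	}
	ip, ok := firstFree(block, allocated)
	fmt.Println(ip, ok) // 192.168.42.65 true, matching the log above
}
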
Dec 13 01:58:26.533440 containerd[1821]: time="2024-12-13T01:58:26.533355064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:26.533440 containerd[1821]: time="2024-12-13T01:58:26.533395366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:26.533440 containerd[1821]: time="2024-12-13T01:58:26.533413199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:26.533742 containerd[1821]: time="2024-12-13T01:58:26.533677817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:26.551809 systemd[1]: Started cri-containerd-3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839.scope - libcontainer container 3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839.
Dec 13 01:58:26.576420 containerd[1821]: time="2024-12-13T01:58:26.576396525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb7d9fc96-j42rn,Uid:7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839\""
Dec 13 01:58:26.577152 containerd[1821]: time="2024-12-13T01:58:26.577138868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 01:58:28.236413 containerd[1821]: time="2024-12-13T01:58:28.236360470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:28.236635 containerd[1821]: time="2024-12-13T01:58:28.236580386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Dec 13 01:58:28.236943 containerd[1821]: time="2024-12-13T01:58:28.236901675Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:28.237914 containerd[1821]: time="2024-12-13T01:58:28.237873713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:28.238342 containerd[1821]: time="2024-12-13T01:58:28.238297795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.661138572s"
Dec 13 01:58:28.238342 containerd[1821]: time="2024-12-13T01:58:28.238315538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Dec 13 01:58:28.241701 containerd[1821]: time="2024-12-13T01:58:28.241649463Z" level=info msg="CreateContainer within sandbox \"3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Dec 13 01:58:28.246208 containerd[1821]: time="2024-12-13T01:58:28.246161993Z" level=info msg="CreateContainer within sandbox \"3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b3ce07cc6989b422782b135697d324dd3bb14243ec9d35208a84dbc625ccd318\""
Dec 13 01:58:28.246431 containerd[1821]: time="2024-12-13T01:58:28.246404308Z" level=info msg="StartContainer for \"b3ce07cc6989b422782b135697d324dd3bb14243ec9d35208a84dbc625ccd318\""
Dec 13 01:58:28.277940 systemd[1]: Started cri-containerd-b3ce07cc6989b422782b135697d324dd3bb14243ec9d35208a84dbc625ccd318.scope - libcontainer container b3ce07cc6989b422782b135697d324dd3bb14243ec9d35208a84dbc625ccd318.
Dec 13 01:58:28.302730 containerd[1821]: time="2024-12-13T01:58:28.302673027Z" level=info msg="StartContainer for \"b3ce07cc6989b422782b135697d324dd3bb14243ec9d35208a84dbc625ccd318\" returns successfully"
Dec 13 01:58:28.348125 containerd[1821]: time="2024-12-13T01:58:28.348077745Z" level=info msg="StopPodSandbox for \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\""
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.375 [INFO][5229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.375 [INFO][5229] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" iface="eth0" netns="/var/run/netns/cni-36145940-59d6-9c00-83e7-6dc29e7827ab"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.375 [INFO][5229] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" iface="eth0" netns="/var/run/netns/cni-36145940-59d6-9c00-83e7-6dc29e7827ab"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.375 [INFO][5229] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" iface="eth0" netns="/var/run/netns/cni-36145940-59d6-9c00-83e7-6dc29e7827ab"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.375 [INFO][5229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.375 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.388 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.389 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.389 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.393 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.393 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.394 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:28.395895 containerd[1821]: 2024-12-13 01:58:28.395 [INFO][5229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:28.396264 containerd[1821]: time="2024-12-13T01:58:28.395946001Z" level=info msg="TearDown network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\" successfully"
Dec 13 01:58:28.396264 containerd[1821]: time="2024-12-13T01:58:28.395964467Z" level=info msg="StopPodSandbox for \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\" returns successfully"
Dec 13 01:58:28.396391 containerd[1821]: time="2024-12-13T01:58:28.396357151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-fjmw4,Uid:a97e1d12-a7fb-4125-b6e6-7d31835664c3,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 01:58:28.467773 systemd-networkd[1608]: cali535a22f9db8: Link UP
Dec 13 01:58:28.468575 systemd-networkd[1608]: cali535a22f9db8: Gained carrier
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.416 [INFO][5258] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0 calico-apiserver-67488db8c5- calico-apiserver  a97e1d12-a7fb-4125-b6e6-7d31835664c3 779 0 2024-12-13 01:58:07 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67488db8c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-4081.2.1-a-5a9deb00aa  calico-apiserver-67488db8c5-fjmw4 eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali535a22f9db8  [] []}} ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.416 [INFO][5258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.430 [INFO][5281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" HandleID="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.434 [INFO][5281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" HandleID="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c4000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-5a9deb00aa", "pod":"calico-apiserver-67488db8c5-fjmw4", "timestamp":"2024-12-13 01:58:28.43001195 +0000 UTC"}, Hostname:"ci-4081.2.1-a-5a9deb00aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.434 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.434 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.434 [INFO][5281] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-5a9deb00aa'
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.435 [INFO][5281] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.438 [INFO][5281] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.440 [INFO][5281] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.442 [INFO][5281] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.443 [INFO][5281] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.443 [INFO][5281] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.444 [INFO][5281] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.448 [INFO][5281] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.459 [INFO][5281] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.66/26] block=192.168.42.64/26 handle="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.459 [INFO][5281] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.66/26] handle="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.460 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:28.492326 containerd[1821]: 2024-12-13 01:58:28.460 [INFO][5281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.66/26] IPv6=[] ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" HandleID="k8s-pod-network.6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.494460 containerd[1821]: 2024-12-13 01:58:28.464 [INFO][5258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a97e1d12-a7fb-4125-b6e6-7d31835664c3", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"", Pod:"calico-apiserver-67488db8c5-fjmw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali535a22f9db8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:28.494460 containerd[1821]: 2024-12-13 01:58:28.464 [INFO][5258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.66/32] ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.494460 containerd[1821]: 2024-12-13 01:58:28.464 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali535a22f9db8 ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.494460 containerd[1821]: 2024-12-13 01:58:28.468 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.494460 containerd[1821]: 2024-12-13 01:58:28.469 [INFO][5258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a97e1d12-a7fb-4125-b6e6-7d31835664c3", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a", Pod:"calico-apiserver-67488db8c5-fjmw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali535a22f9db8", MAC:"56:ab:11:aa:b7:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:28.494460 containerd[1821]: 2024-12-13 01:58:28.487 [INFO][5258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-fjmw4" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:28.499224 kubelet[3241]: I1213 01:58:28.499174    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bb7d9fc96-j42rn" podStartSLOduration=18.837631221 podStartE2EDuration="20.499096588s" podCreationTimestamp="2024-12-13 01:58:08 +0000 UTC" firstStartedPulling="2024-12-13 01:58:26.576996318 +0000 UTC m=+41.291175963" lastFinishedPulling="2024-12-13 01:58:28.238461686 +0000 UTC m=+42.952641330" observedRunningTime="2024-12-13 01:58:28.498383663 +0000 UTC m=+43.212563357" watchObservedRunningTime="2024-12-13 01:58:28.499096588 +0000 UTC m=+43.213276261"
Dec 13 01:58:28.508515 containerd[1821]: time="2024-12-13T01:58:28.508474006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:28.508515 containerd[1821]: time="2024-12-13T01:58:28.508504230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:28.508515 containerd[1821]: time="2024-12-13T01:58:28.508511257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:28.508634 containerd[1821]: time="2024-12-13T01:58:28.508556129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:28.535858 systemd[1]: Started cri-containerd-6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a.scope - libcontainer container 6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a.
Dec 13 01:58:28.562510 containerd[1821]: time="2024-12-13T01:58:28.562486208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-fjmw4,Uid:a97e1d12-a7fb-4125-b6e6-7d31835664c3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a\""
Dec 13 01:58:28.563223 containerd[1821]: time="2024-12-13T01:58:28.563208358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 01:58:28.568688 systemd-networkd[1608]: cali6bcd2f33a39: Gained IPv6LL
Dec 13 01:58:29.247161 systemd[1]: run-netns-cni\x2d36145940\x2d59d6\x2d9c00\x2d83e7\x2d6dc29e7827ab.mount: Deactivated successfully.
Dec 13 01:58:29.348138 containerd[1821]: time="2024-12-13T01:58:29.348112967Z" level=info msg="StopPodSandbox for \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\""
Dec 13 01:58:29.348371 containerd[1821]: time="2024-12-13T01:58:29.348113176Z" level=info msg="StopPodSandbox for \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\""
Dec 13 01:58:29.348371 containerd[1821]: time="2024-12-13T01:58:29.348146690Z" level=info msg="StopPodSandbox for \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\""
Dec 13 01:58:29.348371 containerd[1821]: time="2024-12-13T01:58:29.348258578Z" level=info msg="StopPodSandbox for \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\""
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.372 [INFO][5410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.372 [INFO][5410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" iface="eth0" netns="/var/run/netns/cni-1b1c1209-c296-b2bd-d9f1-e6b690057c7f"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.372 [INFO][5410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" iface="eth0" netns="/var/run/netns/cni-1b1c1209-c296-b2bd-d9f1-e6b690057c7f"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.372 [INFO][5410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" iface="eth0" netns="/var/run/netns/cni-1b1c1209-c296-b2bd-d9f1-e6b690057c7f"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.372 [INFO][5410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.372 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.382 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.387 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.387 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.387 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.389281 containerd[1821]: 2024-12-13 01:58:29.388 [INFO][5410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:29.389619 containerd[1821]: time="2024-12-13T01:58:29.389346282Z" level=info msg="TearDown network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\" successfully"
Dec 13 01:58:29.389619 containerd[1821]: time="2024-12-13T01:58:29.389363567Z" level=info msg="StopPodSandbox for \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\" returns successfully"
Dec 13 01:58:29.389740 containerd[1821]: time="2024-12-13T01:58:29.389723924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6dmq,Uid:7ed7e9e0-19d8-475e-a8ae-40451cd7fa24,Namespace:kube-system,Attempt:1,}"
Dec 13 01:58:29.391082 systemd[1]: run-netns-cni\x2d1b1c1209\x2dc296\x2db2bd\x2dd9f1\x2de6b690057c7f.mount: Deactivated successfully.
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" iface="eth0" netns="/var/run/netns/cni-982dec72-0d29-6f27-654f-7dffd3ddf390"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" iface="eth0" netns="/var/run/netns/cni-982dec72-0d29-6f27-654f-7dffd3ddf390"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" iface="eth0" netns="/var/run/netns/cni-982dec72-0d29-6f27-654f-7dffd3ddf390"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.388 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.391 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.391 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.392 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.394172 containerd[1821]: 2024-12-13 01:58:29.393 [INFO][5409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:29.394602 containerd[1821]: time="2024-12-13T01:58:29.394247335Z" level=info msg="TearDown network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\" successfully"
Dec 13 01:58:29.394602 containerd[1821]: time="2024-12-13T01:58:29.394261051Z" level=info msg="StopPodSandbox for \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\" returns successfully"
Dec 13 01:58:29.394602 containerd[1821]: time="2024-12-13T01:58:29.394567038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-872x9,Uid:2554f98a-0bc4-4c51-ab0a-a4257428bec4,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.373 [INFO][5408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.373 [INFO][5408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" iface="eth0" netns="/var/run/netns/cni-2f8711f9-5704-eb4a-769e-043859b73cdc"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.373 [INFO][5408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" iface="eth0" netns="/var/run/netns/cni-2f8711f9-5704-eb4a-769e-043859b73cdc"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.373 [INFO][5408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" iface="eth0" netns="/var/run/netns/cni-2f8711f9-5704-eb4a-769e-043859b73cdc"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.373 [INFO][5408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.373 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.392 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.396 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.396 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.397 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.398577 containerd[1821]: 2024-12-13 01:58:29.397 [INFO][5408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:29.398851 containerd[1821]: time="2024-12-13T01:58:29.398652355Z" level=info msg="TearDown network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\" successfully"
Dec 13 01:58:29.398851 containerd[1821]: time="2024-12-13T01:58:29.398667057Z" level=info msg="StopPodSandbox for \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\" returns successfully"
Dec 13 01:58:29.398905 systemd[1]: run-netns-cni\x2d982dec72\x2d0d29\x2d6f27\x2d654f\x2d7dffd3ddf390.mount: Deactivated successfully.
Dec 13 01:58:29.399130 containerd[1821]: time="2024-12-13T01:58:29.399096135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xqxt6,Uid:d92b1d5f-d865-4f0e-9a3d-e2c1434149e2,Namespace:kube-system,Attempt:1,}"
Dec 13 01:58:29.402979 systemd[1]: run-netns-cni\x2d2f8711f9\x2d5704\x2deb4a\x2d769e\x2d043859b73cdc.mount: Deactivated successfully.
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5411] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" iface="eth0" netns="/var/run/netns/cni-7b2345b5-1526-4739-d0b6-ccc4b4637fae"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" iface="eth0" netns="/var/run/netns/cni-7b2345b5-1526-4739-d0b6-ccc4b4637fae"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" iface="eth0" netns="/var/run/netns/cni-7b2345b5-1526-4739-d0b6-ccc4b4637fae"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5411] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.370 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.383 [INFO][5466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.397 [INFO][5466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.401 [WARNING][5466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.401 [INFO][5466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.402 [INFO][5466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.404340 containerd[1821]: 2024-12-13 01:58:29.403 [INFO][5411] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:29.404610 containerd[1821]: time="2024-12-13T01:58:29.404407835Z" level=info msg="TearDown network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\" successfully"
Dec 13 01:58:29.404610 containerd[1821]: time="2024-12-13T01:58:29.404422219Z" level=info msg="StopPodSandbox for \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\" returns successfully"
Dec 13 01:58:29.404888 containerd[1821]: time="2024-12-13T01:58:29.404872136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb2fq,Uid:d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1,Namespace:calico-system,Attempt:1,}"
Dec 13 01:58:29.450908 systemd-networkd[1608]: cali163526916b0: Link UP
Dec 13 01:58:29.451042 systemd-networkd[1608]: cali163526916b0: Gained carrier
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.413 [INFO][5526] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0 coredns-76f75df574- kube-system  7ed7e9e0-19d8-475e-a8ae-40451cd7fa24 793 0 2024-12-13 01:58:00 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-4081.2.1-a-5a9deb00aa  coredns-76f75df574-z6dmq eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali163526916b0  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.413 [INFO][5526] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.428 [INFO][5603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" HandleID="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.433 [INFO][5603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" HandleID="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000423a80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-5a9deb00aa", "pod":"coredns-76f75df574-z6dmq", "timestamp":"2024-12-13 01:58:29.428472067 +0000 UTC"}, Hostname:"ci-4081.2.1-a-5a9deb00aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.433 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.433 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.433 [INFO][5603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-5a9deb00aa'
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.435 [INFO][5603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.437 [INFO][5603] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.440 [INFO][5603] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.441 [INFO][5603] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.443 [INFO][5603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.443 [INFO][5603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.443 [INFO][5603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.446 [INFO][5603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.448 [INFO][5603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.67/26] block=192.168.42.64/26 handle="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.448 [INFO][5603] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.67/26] handle="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.448 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.456127 containerd[1821]: 2024-12-13 01:58:29.448 [INFO][5603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.67/26] IPv6=[] ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" HandleID="k8s-pod-network.1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.456599 containerd[1821]: 2024-12-13 01:58:29.449 [INFO][5526] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"", Pod:"coredns-76f75df574-z6dmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali163526916b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.456599 containerd[1821]: 2024-12-13 01:58:29.449 [INFO][5526] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.67/32] ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.456599 containerd[1821]: 2024-12-13 01:58:29.449 [INFO][5526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali163526916b0 ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.456599 containerd[1821]: 2024-12-13 01:58:29.451 [INFO][5526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.456599 containerd[1821]: 2024-12-13 01:58:29.451 [INFO][5526] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec", Pod:"coredns-76f75df574-z6dmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali163526916b0", MAC:"9e:a7:ec:ad:13:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.456599 containerd[1821]: 2024-12-13 01:58:29.455 [INFO][5526] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec" Namespace="kube-system" Pod="coredns-76f75df574-z6dmq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:29.465313 systemd-networkd[1608]: cali549c1c91566: Link UP
Dec 13 01:58:29.465448 systemd-networkd[1608]: cali549c1c91566: Gained carrier
Dec 13 01:58:29.466359 containerd[1821]: time="2024-12-13T01:58:29.466306811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:29.466589 containerd[1821]: time="2024-12-13T01:58:29.466567338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:29.466589 containerd[1821]: time="2024-12-13T01:58:29.466583220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.466691 containerd[1821]: time="2024-12-13T01:58:29.466649416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.415 [INFO][5537] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0 calico-apiserver-67488db8c5- calico-apiserver  2554f98a-0bc4-4c51-ab0a-a4257428bec4 792 0 2024-12-13 01:58:07 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67488db8c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-4081.2.1-a-5a9deb00aa  calico-apiserver-67488db8c5-872x9 eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali549c1c91566  [] []}} ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.415 [INFO][5537] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.432 [INFO][5617] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" HandleID="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.436 [INFO][5617] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" HandleID="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000503e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-5a9deb00aa", "pod":"calico-apiserver-67488db8c5-872x9", "timestamp":"2024-12-13 01:58:29.43274386 +0000 UTC"}, Hostname:"ci-4081.2.1-a-5a9deb00aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.437 [INFO][5617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.448 [INFO][5617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.448 [INFO][5617] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-5a9deb00aa'
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.449 [INFO][5617] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.452 [INFO][5617] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.455 [INFO][5617] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.456 [INFO][5617] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.457 [INFO][5617] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.457 [INFO][5617] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.458 [INFO][5617] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.460 [INFO][5617] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.463 [INFO][5617] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.68/26] block=192.168.42.64/26 handle="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.463 [INFO][5617] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.68/26] handle="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.463 [INFO][5617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.470783 containerd[1821]: 2024-12-13 01:58:29.463 [INFO][5617] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.68/26] IPv6=[] ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" HandleID="k8s-pod-network.72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.471328 containerd[1821]: 2024-12-13 01:58:29.464 [INFO][5537] cni-plugin/k8s.go 386: Populated endpoint ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2554f98a-0bc4-4c51-ab0a-a4257428bec4", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"", Pod:"calico-apiserver-67488db8c5-872x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali549c1c91566", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.471328 containerd[1821]: 2024-12-13 01:58:29.464 [INFO][5537] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.68/32] ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.471328 containerd[1821]: 2024-12-13 01:58:29.464 [INFO][5537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali549c1c91566 ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.471328 containerd[1821]: 2024-12-13 01:58:29.465 [INFO][5537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.471328 containerd[1821]: 2024-12-13 01:58:29.465 [INFO][5537] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2554f98a-0bc4-4c51-ab0a-a4257428bec4", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078", Pod:"calico-apiserver-67488db8c5-872x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali549c1c91566", MAC:"8a:3f:32:39:85:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.471328 containerd[1821]: 2024-12-13 01:58:29.470 [INFO][5537] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078" Namespace="calico-apiserver" Pod="calico-apiserver-67488db8c5-872x9" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:29.476933 kubelet[3241]: I1213 01:58:29.476918    3241 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:58:29.480964 containerd[1821]: time="2024-12-13T01:58:29.480709724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:29.480964 containerd[1821]: time="2024-12-13T01:58:29.480946869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:29.480964 containerd[1821]: time="2024-12-13T01:58:29.480954191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.481094 containerd[1821]: time="2024-12-13T01:58:29.481046409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.482333 systemd-networkd[1608]: cali17e2b6cc748: Link UP
Dec 13 01:58:29.482483 systemd-networkd[1608]: cali17e2b6cc748: Gained carrier
Dec 13 01:58:29.482795 systemd[1]: Started cri-containerd-1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec.scope - libcontainer container 1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec.
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.422 [INFO][5559] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0 coredns-76f75df574- kube-system  d92b1d5f-d865-4f0e-9a3d-e2c1434149e2 794 0 2024-12-13 01:58:00 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-4081.2.1-a-5a9deb00aa  coredns-76f75df574-xqxt6 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali17e2b6cc748  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.422 [INFO][5559] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.436 [INFO][5626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" HandleID="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.440 [INFO][5626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" HandleID="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f7610), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-5a9deb00aa", "pod":"coredns-76f75df574-xqxt6", "timestamp":"2024-12-13 01:58:29.436955921 +0000 UTC"}, Hostname:"ci-4081.2.1-a-5a9deb00aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.441 [INFO][5626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.463 [INFO][5626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.463 [INFO][5626] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-5a9deb00aa'
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.464 [INFO][5626] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.467 [INFO][5626] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.470 [INFO][5626] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.471 [INFO][5626] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.472 [INFO][5626] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.472 [INFO][5626] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.474 [INFO][5626] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.476 [INFO][5626] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.480 [INFO][5626] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.69/26] block=192.168.42.64/26 handle="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.480 [INFO][5626] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.69/26] handle="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.480 [INFO][5626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.488082 containerd[1821]: 2024-12-13 01:58:29.480 [INFO][5626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.69/26] IPv6=[] ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" HandleID="k8s-pod-network.2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.488754 containerd[1821]: 2024-12-13 01:58:29.481 [INFO][5559] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"", Pod:"coredns-76f75df574-xqxt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e2b6cc748", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.488754 containerd[1821]: 2024-12-13 01:58:29.481 [INFO][5559] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.69/32] ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.488754 containerd[1821]: 2024-12-13 01:58:29.481 [INFO][5559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17e2b6cc748 ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.488754 containerd[1821]: 2024-12-13 01:58:29.482 [INFO][5559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.488754 containerd[1821]: 2024-12-13 01:58:29.482 [INFO][5559] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944", Pod:"coredns-76f75df574-xqxt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e2b6cc748", MAC:"ce:c1:00:a7:bd:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.488754 containerd[1821]: 2024-12-13 01:58:29.487 [INFO][5559] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944" Namespace="kube-system" Pod="coredns-76f75df574-xqxt6" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:29.488840 systemd[1]: Started cri-containerd-72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078.scope - libcontainer container 72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078.
Dec 13 01:58:29.498717 containerd[1821]: time="2024-12-13T01:58:29.498559765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:29.498717 containerd[1821]: time="2024-12-13T01:58:29.498591528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:29.498717 containerd[1821]: time="2024-12-13T01:58:29.498598501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.498717 containerd[1821]: time="2024-12-13T01:58:29.498678944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.501361 systemd-networkd[1608]: cali3472119922e: Link UP
Dec 13 01:58:29.501541 systemd-networkd[1608]: cali3472119922e: Gained carrier
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.425 [INFO][5578] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0 csi-node-driver- calico-system  d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1 791 0 2024-12-13 01:58:08 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  ci-4081.2.1-a-5a9deb00aa  csi-node-driver-mb2fq eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali3472119922e  [] []}} ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.425 [INFO][5578] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.439 [INFO][5640] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" HandleID="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.443 [INFO][5640] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" HandleID="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c7080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-5a9deb00aa", "pod":"csi-node-driver-mb2fq", "timestamp":"2024-12-13 01:58:29.439337429 +0000 UTC"}, Hostname:"ci-4081.2.1-a-5a9deb00aa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.443 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.480 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.480 [INFO][5640] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-5a9deb00aa'
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.482 [INFO][5640] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.485 [INFO][5640] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.488 [INFO][5640] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.489 [INFO][5640] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.491 [INFO][5640] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.491 [INFO][5640] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.492 [INFO][5640] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.494 [INFO][5640] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.498 [INFO][5640] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.70/26] block=192.168.42.64/26 handle="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.498 [INFO][5640] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.70/26] handle="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" host="ci-4081.2.1-a-5a9deb00aa"
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.498 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:29.507314 containerd[1821]: 2024-12-13 01:58:29.498 [INFO][5640] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.70/26] IPv6=[] ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" HandleID="k8s-pod-network.10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.507753 containerd[1821]: 2024-12-13 01:58:29.500 [INFO][5578] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"", Pod:"csi-node-driver-mb2fq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3472119922e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.507753 containerd[1821]: 2024-12-13 01:58:29.500 [INFO][5578] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.70/32] ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.507753 containerd[1821]: 2024-12-13 01:58:29.500 [INFO][5578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3472119922e ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.507753 containerd[1821]: 2024-12-13 01:58:29.501 [INFO][5578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.507753 containerd[1821]: 2024-12-13 01:58:29.501 [INFO][5578] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86", Pod:"csi-node-driver-mb2fq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3472119922e", MAC:"fa:05:da:b9:be:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:29.507753 containerd[1821]: 2024-12-13 01:58:29.506 [INFO][5578] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86" Namespace="calico-system" Pod="csi-node-driver-mb2fq" WorkloadEndpoint="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:29.517850 systemd[1]: Started cri-containerd-2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944.scope - libcontainer container 2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944.
Dec 13 01:58:29.518105 containerd[1821]: time="2024-12-13T01:58:29.518082925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6dmq,Uid:7ed7e9e0-19d8-475e-a8ae-40451cd7fa24,Namespace:kube-system,Attempt:1,} returns sandbox id \"1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec\""
Dec 13 01:58:29.519754 containerd[1821]: time="2024-12-13T01:58:29.519723568Z" level=info msg="CreateContainer within sandbox \"1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:58:29.522106 containerd[1821]: time="2024-12-13T01:58:29.522088072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67488db8c5-872x9,Uid:2554f98a-0bc4-4c51-ab0a-a4257428bec4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078\""
Dec 13 01:58:29.526689 containerd[1821]: time="2024-12-13T01:58:29.526669037Z" level=info msg="CreateContainer within sandbox \"1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22b9cc48afb81f2425d3e7b114cc03931f3f74284722b42d6e68687c17d32978\""
Dec 13 01:58:29.526937 containerd[1821]: time="2024-12-13T01:58:29.526925564Z" level=info msg="StartContainer for \"22b9cc48afb81f2425d3e7b114cc03931f3f74284722b42d6e68687c17d32978\""
Dec 13 01:58:29.529048 containerd[1821]: time="2024-12-13T01:58:29.528829746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:29.529048 containerd[1821]: time="2024-12-13T01:58:29.529041641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:29.529133 containerd[1821]: time="2024-12-13T01:58:29.529050244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.529133 containerd[1821]: time="2024-12-13T01:58:29.529098529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:29.551817 systemd[1]: Started cri-containerd-10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86.scope - libcontainer container 10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86.
Dec 13 01:58:29.553399 systemd[1]: Started cri-containerd-22b9cc48afb81f2425d3e7b114cc03931f3f74284722b42d6e68687c17d32978.scope - libcontainer container 22b9cc48afb81f2425d3e7b114cc03931f3f74284722b42d6e68687c17d32978.
Dec 13 01:58:29.556172 containerd[1821]: time="2024-12-13T01:58:29.556148352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xqxt6,Uid:d92b1d5f-d865-4f0e-9a3d-e2c1434149e2,Namespace:kube-system,Attempt:1,} returns sandbox id \"2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944\""
Dec 13 01:58:29.557492 containerd[1821]: time="2024-12-13T01:58:29.557477705Z" level=info msg="CreateContainer within sandbox \"2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:58:29.562195 containerd[1821]: time="2024-12-13T01:58:29.562169631Z" level=info msg="CreateContainer within sandbox \"2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e50c67a94263dd4177788721f63029fef78d2a6d5d7ba9445a731090e58eef7c\""
Dec 13 01:58:29.562467 containerd[1821]: time="2024-12-13T01:58:29.562456230Z" level=info msg="StartContainer for \"e50c67a94263dd4177788721f63029fef78d2a6d5d7ba9445a731090e58eef7c\""
Dec 13 01:58:29.562717 containerd[1821]: time="2024-12-13T01:58:29.562704351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mb2fq,Uid:d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1,Namespace:calico-system,Attempt:1,} returns sandbox id \"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86\""
Dec 13 01:58:29.564639 containerd[1821]: time="2024-12-13T01:58:29.564620003Z" level=info msg="StartContainer for \"22b9cc48afb81f2425d3e7b114cc03931f3f74284722b42d6e68687c17d32978\" returns successfully"
Dec 13 01:58:29.572163 systemd[1]: Started cri-containerd-e50c67a94263dd4177788721f63029fef78d2a6d5d7ba9445a731090e58eef7c.scope - libcontainer container e50c67a94263dd4177788721f63029fef78d2a6d5d7ba9445a731090e58eef7c.
Dec 13 01:58:29.584397 containerd[1821]: time="2024-12-13T01:58:29.584374789Z" level=info msg="StartContainer for \"e50c67a94263dd4177788721f63029fef78d2a6d5d7ba9445a731090e58eef7c\" returns successfully"
Dec 13 01:58:30.040717 systemd-networkd[1608]: cali535a22f9db8: Gained IPv6LL
Dec 13 01:58:30.235066 containerd[1821]: time="2024-12-13T01:58:30.235014233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:30.235246 containerd[1821]: time="2024-12-13T01:58:30.235226470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Dec 13 01:58:30.235543 containerd[1821]: time="2024-12-13T01:58:30.235529805Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:30.236549 containerd[1821]: time="2024-12-13T01:58:30.236536398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:30.236974 containerd[1821]: time="2024-12-13T01:58:30.236956668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.673729493s"
Dec 13 01:58:30.237030 containerd[1821]: time="2024-12-13T01:58:30.236974470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:58:30.237297 containerd[1821]: time="2024-12-13T01:58:30.237284227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Dec 13 01:58:30.237902 containerd[1821]: time="2024-12-13T01:58:30.237888886Z" level=info msg="CreateContainer within sandbox \"6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:58:30.241630 containerd[1821]: time="2024-12-13T01:58:30.241616056Z" level=info msg="CreateContainer within sandbox \"6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c74c1cd29de063a2d49b83808d063fa236a1de31749f269ad2d33600a163ab58\""
Dec 13 01:58:30.241862 containerd[1821]: time="2024-12-13T01:58:30.241849023Z" level=info msg="StartContainer for \"c74c1cd29de063a2d49b83808d063fa236a1de31749f269ad2d33600a163ab58\""
Dec 13 01:58:30.245826 systemd[1]: run-netns-cni\x2d7b2345b5\x2d1526\x2d4739\x2dd0b6\x2dccc4b4637fae.mount: Deactivated successfully.
Dec 13 01:58:30.273769 systemd[1]: Started cri-containerd-c74c1cd29de063a2d49b83808d063fa236a1de31749f269ad2d33600a163ab58.scope - libcontainer container c74c1cd29de063a2d49b83808d063fa236a1de31749f269ad2d33600a163ab58.
Dec 13 01:58:30.296373 containerd[1821]: time="2024-12-13T01:58:30.296283962Z" level=info msg="StartContainer for \"c74c1cd29de063a2d49b83808d063fa236a1de31749f269ad2d33600a163ab58\" returns successfully"
Dec 13 01:58:30.485639 kubelet[3241]: I1213 01:58:30.485601    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-z6dmq" podStartSLOduration=30.485572551 podStartE2EDuration="30.485572551s" podCreationTimestamp="2024-12-13 01:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:30.485436589 +0000 UTC m=+45.199616234" watchObservedRunningTime="2024-12-13 01:58:30.485572551 +0000 UTC m=+45.199752192"
Dec 13 01:58:30.490929 kubelet[3241]: I1213 01:58:30.490910    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67488db8c5-fjmw4" podStartSLOduration=21.816789698 podStartE2EDuration="23.490883075s" podCreationTimestamp="2024-12-13 01:58:07 +0000 UTC" firstStartedPulling="2024-12-13 01:58:28.563083231 +0000 UTC m=+43.277262878" lastFinishedPulling="2024-12-13 01:58:30.23717661 +0000 UTC m=+44.951356255" observedRunningTime="2024-12-13 01:58:30.490606027 +0000 UTC m=+45.204785671" watchObservedRunningTime="2024-12-13 01:58:30.490883075 +0000 UTC m=+45.205062717"
Dec 13 01:58:30.496197 kubelet[3241]: I1213 01:58:30.496177    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xqxt6" podStartSLOduration=30.49614311 podStartE2EDuration="30.49614311s" podCreationTimestamp="2024-12-13 01:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:30.495841418 +0000 UTC m=+45.210021064" watchObservedRunningTime="2024-12-13 01:58:30.49614311 +0000 UTC m=+45.210322752"
Dec 13 01:58:30.604369 containerd[1821]: time="2024-12-13T01:58:30.604347176Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:30.604616 containerd[1821]: time="2024-12-13T01:58:30.604591450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Dec 13 01:58:30.606768 containerd[1821]: time="2024-12-13T01:58:30.606748199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 369.446878ms"
Dec 13 01:58:30.606768 containerd[1821]: time="2024-12-13T01:58:30.606767215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 01:58:30.607109 containerd[1821]: time="2024-12-13T01:58:30.607094477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 01:58:30.607624 containerd[1821]: time="2024-12-13T01:58:30.607604141Z" level=info msg="CreateContainer within sandbox \"72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 01:58:30.611420 containerd[1821]: time="2024-12-13T01:58:30.611406900Z" level=info msg="CreateContainer within sandbox \"72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"87d99459b8bb89fadca9b58a4b53f2e1a7f25711424443a09823005aa66cd044\""
Dec 13 01:58:30.611655 containerd[1821]: time="2024-12-13T01:58:30.611640477Z" level=info msg="StartContainer for \"87d99459b8bb89fadca9b58a4b53f2e1a7f25711424443a09823005aa66cd044\""
Dec 13 01:58:30.632784 systemd[1]: Started cri-containerd-87d99459b8bb89fadca9b58a4b53f2e1a7f25711424443a09823005aa66cd044.scope - libcontainer container 87d99459b8bb89fadca9b58a4b53f2e1a7f25711424443a09823005aa66cd044.
Dec 13 01:58:30.655897 containerd[1821]: time="2024-12-13T01:58:30.655872931Z" level=info msg="StartContainer for \"87d99459b8bb89fadca9b58a4b53f2e1a7f25711424443a09823005aa66cd044\" returns successfully"
Dec 13 01:58:30.872791 systemd-networkd[1608]: cali163526916b0: Gained IPv6LL
Dec 13 01:58:31.000918 systemd-networkd[1608]: cali17e2b6cc748: Gained IPv6LL
Dec 13 01:58:31.320887 systemd-networkd[1608]: cali3472119922e: Gained IPv6LL
Dec 13 01:58:31.321862 systemd-networkd[1608]: cali549c1c91566: Gained IPv6LL
Dec 13 01:58:31.490295 kubelet[3241]: I1213 01:58:31.490269    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67488db8c5-872x9" podStartSLOduration=23.405938452 podStartE2EDuration="24.490231693s" podCreationTimestamp="2024-12-13 01:58:07 +0000 UTC" firstStartedPulling="2024-12-13 01:58:29.522659577 +0000 UTC m=+44.236839221" lastFinishedPulling="2024-12-13 01:58:30.606952817 +0000 UTC m=+45.321132462" observedRunningTime="2024-12-13 01:58:31.489989348 +0000 UTC m=+46.204169003" watchObservedRunningTime="2024-12-13 01:58:31.490231693 +0000 UTC m=+46.204411338"
Dec 13 01:58:31.818093 containerd[1821]: time="2024-12-13T01:58:31.818068138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:31.818397 containerd[1821]: time="2024-12-13T01:58:31.818225249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 01:58:31.818695 containerd[1821]: time="2024-12-13T01:58:31.818684507Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:31.819570 containerd[1821]: time="2024-12-13T01:58:31.819557772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:31.819992 containerd[1821]: time="2024-12-13T01:58:31.819980707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.212868995s"
Dec 13 01:58:31.820035 containerd[1821]: time="2024-12-13T01:58:31.819996307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 01:58:31.821038 containerd[1821]: time="2024-12-13T01:58:31.821026742Z" level=info msg="CreateContainer within sandbox \"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 01:58:31.826121 containerd[1821]: time="2024-12-13T01:58:31.826107508Z" level=info msg="CreateContainer within sandbox \"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"67f3ac94bf65120ceb96cc25ee50b236a1aabf4df474a445417c54c67ea322cb\""
Dec 13 01:58:31.826447 containerd[1821]: time="2024-12-13T01:58:31.826434761Z" level=info msg="StartContainer for \"67f3ac94bf65120ceb96cc25ee50b236a1aabf4df474a445417c54c67ea322cb\""
Dec 13 01:58:31.855810 systemd[1]: Started cri-containerd-67f3ac94bf65120ceb96cc25ee50b236a1aabf4df474a445417c54c67ea322cb.scope - libcontainer container 67f3ac94bf65120ceb96cc25ee50b236a1aabf4df474a445417c54c67ea322cb.
Dec 13 01:58:31.870509 containerd[1821]: time="2024-12-13T01:58:31.870454200Z" level=info msg="StartContainer for \"67f3ac94bf65120ceb96cc25ee50b236a1aabf4df474a445417c54c67ea322cb\" returns successfully"
Dec 13 01:58:31.871201 containerd[1821]: time="2024-12-13T01:58:31.871182395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:58:32.491034 kubelet[3241]: I1213 01:58:32.490925    3241 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:58:33.133667 containerd[1821]: time="2024-12-13T01:58:33.133642549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:33.133893 containerd[1821]: time="2024-12-13T01:58:33.133864862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 01:58:33.134210 containerd[1821]: time="2024-12-13T01:58:33.134196850Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:33.135155 containerd[1821]: time="2024-12-13T01:58:33.135143467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:58:33.135887 containerd[1821]: time="2024-12-13T01:58:33.135874120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.264668157s"
Dec 13 01:58:33.135913 containerd[1821]: time="2024-12-13T01:58:33.135890986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 01:58:33.136750 containerd[1821]: time="2024-12-13T01:58:33.136736524Z" level=info msg="CreateContainer within sandbox \"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:58:33.141174 containerd[1821]: time="2024-12-13T01:58:33.141132570Z" level=info msg="CreateContainer within sandbox \"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cfd3434ac4bb22f553c27a5fa6b2ca7c6c3511c42c1e27b147d5328d31653dd7\""
Dec 13 01:58:33.141341 containerd[1821]: time="2024-12-13T01:58:33.141326919Z" level=info msg="StartContainer for \"cfd3434ac4bb22f553c27a5fa6b2ca7c6c3511c42c1e27b147d5328d31653dd7\""
Dec 13 01:58:33.170766 systemd[1]: Started cri-containerd-cfd3434ac4bb22f553c27a5fa6b2ca7c6c3511c42c1e27b147d5328d31653dd7.scope - libcontainer container cfd3434ac4bb22f553c27a5fa6b2ca7c6c3511c42c1e27b147d5328d31653dd7.
Dec 13 01:58:33.194377 containerd[1821]: time="2024-12-13T01:58:33.194344686Z" level=info msg="StartContainer for \"cfd3434ac4bb22f553c27a5fa6b2ca7c6c3511c42c1e27b147d5328d31653dd7\" returns successfully"
Dec 13 01:58:33.389107 kubelet[3241]: I1213 01:58:33.388922    3241 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:58:33.389107 kubelet[3241]: I1213 01:58:33.389022    3241 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:58:33.520909 kubelet[3241]: I1213 01:58:33.520836    3241 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mb2fq" podStartSLOduration=21.947843668 podStartE2EDuration="25.520728641s" podCreationTimestamp="2024-12-13 01:58:08 +0000 UTC" firstStartedPulling="2024-12-13 01:58:29.563171293 +0000 UTC m=+44.277350938" lastFinishedPulling="2024-12-13 01:58:33.136056265 +0000 UTC m=+47.850235911" observedRunningTime="2024-12-13 01:58:33.520678871 +0000 UTC m=+48.234858584" watchObservedRunningTime="2024-12-13 01:58:33.520728641 +0000 UTC m=+48.234908345"
Dec 13 01:58:34.646755 kubelet[3241]: I1213 01:58:34.646696    3241 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:58:45.346122 containerd[1821]: time="2024-12-13T01:58:45.345921923Z" level=info msg="StopPodSandbox for \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\""
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.420 [WARNING][6319] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86", Pod:"csi-node-driver-mb2fq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3472119922e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.420 [INFO][6319] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.420 [INFO][6319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" iface="eth0" netns=""
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.421 [INFO][6319] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.421 [INFO][6319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.439 [INFO][6334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.439 [INFO][6334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.439 [INFO][6334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.445 [WARNING][6334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.445 [INFO][6334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.446 [INFO][6334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.448304 containerd[1821]: 2024-12-13 01:58:45.447 [INFO][6319] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.448906 containerd[1821]: time="2024-12-13T01:58:45.448343764Z" level=info msg="TearDown network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\" successfully"
Dec 13 01:58:45.448906 containerd[1821]: time="2024-12-13T01:58:45.448374848Z" level=info msg="StopPodSandbox for \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\" returns successfully"
Dec 13 01:58:45.448987 containerd[1821]: time="2024-12-13T01:58:45.448940083Z" level=info msg="RemovePodSandbox for \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\""
Dec 13 01:58:45.448987 containerd[1821]: time="2024-12-13T01:58:45.448973235Z" level=info msg="Forcibly stopping sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\""
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.480 [WARNING][6363] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3ce2b11-2f1c-4ce7-9b72-15c8cfc358a1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"10e43334a95090cd02983b0ee8f8f27570a12d86acfdef75cc63ae5c42f1ad86", Pod:"csi-node-driver-mb2fq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3472119922e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.480 [INFO][6363] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.480 [INFO][6363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" iface="eth0" netns=""
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.480 [INFO][6363] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.480 [INFO][6363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.500 [INFO][6379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.500 [INFO][6379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.500 [INFO][6379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.506 [WARNING][6379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.506 [INFO][6379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" HandleID="k8s-pod-network.9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-csi--node--driver--mb2fq-eth0"
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.507 [INFO][6379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.509513 containerd[1821]: 2024-12-13 01:58:45.508 [INFO][6363] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac"
Dec 13 01:58:45.510046 containerd[1821]: time="2024-12-13T01:58:45.509542880Z" level=info msg="TearDown network for sandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\" successfully"
Dec 13 01:58:45.511417 containerd[1821]: time="2024-12-13T01:58:45.511405397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:58:45.511454 containerd[1821]: time="2024-12-13T01:58:45.511435346Z" level=info msg="RemovePodSandbox \"9b63058ea75b613282bd614f0b9f615ea7ce046d705d97ad87eea32f33cff1ac\" returns successfully"
Dec 13 01:58:45.511707 containerd[1821]: time="2024-12-13T01:58:45.511695094Z" level=info msg="StopPodSandbox for \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\""
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.529 [WARNING][6408] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2554f98a-0bc4-4c51-ab0a-a4257428bec4", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078", Pod:"calico-apiserver-67488db8c5-872x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali549c1c91566", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.530 [INFO][6408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.530 [INFO][6408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" iface="eth0" netns=""
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.530 [INFO][6408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.530 [INFO][6408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.540 [INFO][6424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.540 [INFO][6424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.540 [INFO][6424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.544 [WARNING][6424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.544 [INFO][6424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.545 [INFO][6424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.546974 containerd[1821]: 2024-12-13 01:58:45.546 [INFO][6408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.546974 containerd[1821]: time="2024-12-13T01:58:45.546957929Z" level=info msg="TearDown network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\" successfully"
Dec 13 01:58:45.546974 containerd[1821]: time="2024-12-13T01:58:45.546974924Z" level=info msg="StopPodSandbox for \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\" returns successfully"
Dec 13 01:58:45.547300 containerd[1821]: time="2024-12-13T01:58:45.547260761Z" level=info msg="RemovePodSandbox for \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\""
Dec 13 01:58:45.547300 containerd[1821]: time="2024-12-13T01:58:45.547281736Z" level=info msg="Forcibly stopping sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\""
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.566 [WARNING][6449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2554f98a-0bc4-4c51-ab0a-a4257428bec4", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"72e7285757cca3ff0793b6a797358d03f4aeec566ce839930bdecb28c8979078", Pod:"calico-apiserver-67488db8c5-872x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali549c1c91566", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.566 [INFO][6449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.566 [INFO][6449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" iface="eth0" netns=""
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.566 [INFO][6449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.566 [INFO][6449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.578 [INFO][6461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.578 [INFO][6461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.579 [INFO][6461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.582 [WARNING][6461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.582 [INFO][6461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" HandleID="k8s-pod-network.265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--872x9-eth0"
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.583 [INFO][6461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.584216 containerd[1821]: 2024-12-13 01:58:45.583 [INFO][6449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db"
Dec 13 01:58:45.584695 containerd[1821]: time="2024-12-13T01:58:45.584249476Z" level=info msg="TearDown network for sandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\" successfully"
Dec 13 01:58:45.585750 containerd[1821]: time="2024-12-13T01:58:45.585735837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:58:45.585795 containerd[1821]: time="2024-12-13T01:58:45.585770301Z" level=info msg="RemovePodSandbox \"265cac541d4c3ee9a6a070b8bce3f2d43af0fd5dd1cad93330e20f24af0839db\" returns successfully"
Dec 13 01:58:45.586069 containerd[1821]: time="2024-12-13T01:58:45.586057298Z" level=info msg="StopPodSandbox for \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\""
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.604 [WARNING][6489] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0", GenerateName:"calico-kube-controllers-6bb7d9fc96-", Namespace:"calico-system", SelfLink:"", UID:"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d9fc96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839", Pod:"calico-kube-controllers-6bb7d9fc96-j42rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bcd2f33a39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.604 [INFO][6489] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.604 [INFO][6489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" iface="eth0" netns=""
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.604 [INFO][6489] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.604 [INFO][6489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.614 [INFO][6504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.614 [INFO][6504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.614 [INFO][6504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.617 [WARNING][6504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.617 [INFO][6504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.618 [INFO][6504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.620143 containerd[1821]: 2024-12-13 01:58:45.619 [INFO][6489] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.620143 containerd[1821]: time="2024-12-13T01:58:45.620096824Z" level=info msg="TearDown network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\" successfully"
Dec 13 01:58:45.620143 containerd[1821]: time="2024-12-13T01:58:45.620112809Z" level=info msg="StopPodSandbox for \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\" returns successfully"
Dec 13 01:58:45.620471 containerd[1821]: time="2024-12-13T01:58:45.620372022Z" level=info msg="RemovePodSandbox for \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\""
Dec 13 01:58:45.620471 containerd[1821]: time="2024-12-13T01:58:45.620386872Z" level=info msg="Forcibly stopping sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\""
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.638 [WARNING][6532] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0", GenerateName:"calico-kube-controllers-6bb7d9fc96-", Namespace:"calico-system", SelfLink:"", UID:"7fad96fe-4b31-4e48-ab5d-5f0fa08ebbe7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb7d9fc96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"3ce638fe3a112e2edf3f3c7e3f89e13f6adce4604b2ff9abff0944b9eef1c839", Pod:"calico-kube-controllers-6bb7d9fc96-j42rn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6bcd2f33a39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.638 [INFO][6532] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.638 [INFO][6532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" iface="eth0" netns=""
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.638 [INFO][6532] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.638 [INFO][6532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.649 [INFO][6545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.649 [INFO][6545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.649 [INFO][6545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.653 [WARNING][6545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.653 [INFO][6545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" HandleID="k8s-pod-network.58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--kube--controllers--6bb7d9fc96--j42rn-eth0"
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.654 [INFO][6545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.655525 containerd[1821]: 2024-12-13 01:58:45.654 [INFO][6532] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6"
Dec 13 01:58:45.655821 containerd[1821]: time="2024-12-13T01:58:45.655547116Z" level=info msg="TearDown network for sandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\" successfully"
Dec 13 01:58:45.656837 containerd[1821]: time="2024-12-13T01:58:45.656796564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:58:45.656837 containerd[1821]: time="2024-12-13T01:58:45.656824014Z" level=info msg="RemovePodSandbox \"58d0efaa9f975586546b8c1999589c8c48350b768ec0a08dfd171e8a326397a6\" returns successfully"
Dec 13 01:58:45.657127 containerd[1821]: time="2024-12-13T01:58:45.657078926Z" level=info msg="StopPodSandbox for \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\""
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.675 [WARNING][6571] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a97e1d12-a7fb-4125-b6e6-7d31835664c3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a", Pod:"calico-apiserver-67488db8c5-fjmw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali535a22f9db8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.675 [INFO][6571] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.675 [INFO][6571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" iface="eth0" netns=""
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.675 [INFO][6571] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.675 [INFO][6571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.686 [INFO][6584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.686 [INFO][6584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.686 [INFO][6584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.689 [WARNING][6584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.689 [INFO][6584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.690 [INFO][6584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.691843 containerd[1821]: 2024-12-13 01:58:45.691 [INFO][6571] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.692129 containerd[1821]: time="2024-12-13T01:58:45.691870364Z" level=info msg="TearDown network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\" successfully"
Dec 13 01:58:45.692129 containerd[1821]: time="2024-12-13T01:58:45.691887917Z" level=info msg="StopPodSandbox for \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\" returns successfully"
Dec 13 01:58:45.692162 containerd[1821]: time="2024-12-13T01:58:45.692128503Z" level=info msg="RemovePodSandbox for \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\""
Dec 13 01:58:45.692162 containerd[1821]: time="2024-12-13T01:58:45.692144443Z" level=info msg="Forcibly stopping sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\""
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.710 [WARNING][6610] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0", GenerateName:"calico-apiserver-67488db8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a97e1d12-a7fb-4125-b6e6-7d31835664c3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67488db8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"6cb96debda6362194da1f09f8d1cab9335417e6313d36d3c293da943688e4a3a", Pod:"calico-apiserver-67488db8c5-fjmw4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali535a22f9db8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.710 [INFO][6610] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.710 [INFO][6610] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" iface="eth0" netns=""
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.710 [INFO][6610] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.710 [INFO][6610] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.720 [INFO][6623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.720 [INFO][6623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.720 [INFO][6623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.724 [WARNING][6623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.724 [INFO][6623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" HandleID="k8s-pod-network.d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-calico--apiserver--67488db8c5--fjmw4-eth0"
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.725 [INFO][6623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.726557 containerd[1821]: 2024-12-13 01:58:45.725 [INFO][6610] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128"
Dec 13 01:58:45.726557 containerd[1821]: time="2024-12-13T01:58:45.726550846Z" level=info msg="TearDown network for sandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\" successfully"
Dec 13 01:58:45.727945 containerd[1821]: time="2024-12-13T01:58:45.727904004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:58:45.727945 containerd[1821]: time="2024-12-13T01:58:45.727932499Z" level=info msg="RemovePodSandbox \"d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128\" returns successfully"
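
The cni-plugin/ipam lines above trace one Calico CNI DEL end to end: the runtime hands the plugin CNI_CONTAINERID, Calico finds it does not match the ContainerID stored on the WorkloadEndpoint (so the WEP is kept), skips namespace cleanup because no netns name was passed, and releases the IPAM handle under the host-wide lock before teardown completes. A minimal sketch of how a runtime drives that DEL through the CNI environment contract; the plugin path, netconf JSON, and cniVersion below are illustrative assumptions, not values read from this host.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// Sketch: invoke a CNI plugin for DEL the way a container runtime does.
// The CNI spec passes the operation through environment variables and the
// network config on stdin; CNI_CONTAINERID is the value Calico compares
// against the WorkloadEndpoint's stored ContainerID in the log above.
func cniDel(containerID, netns string) error {
	netconf := []byte(`{"cniVersion":"0.4.0","name":"k8s-pod-network","type":"calico","ipam":{"type":"calico-ipam"}}`) // illustrative config
	cmd := exec.Command("/opt/cni/bin/calico")                                                                        // assumed plugin path
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=DEL",
		"CNI_CONTAINERID="+containerID,
		"CNI_NETNS="+netns,
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = bytes.NewReader(netconf)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("CNI DEL failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// An empty CNI_NETNS reproduces the "CleanUpNamespace called with no
	// netns name, ignoring" path seen in the log above.
	if err := cniDel("d99556010bbe8576b05e6b28d6090b83f283d0c21905f99aa385549cb2639128", ""); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
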
Dec 13 01:58:45.728247 containerd[1821]: time="2024-12-13T01:58:45.728209448Z" level=info msg="StopPodSandbox for \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\""
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.749 [WARNING][6650] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec", Pod:"coredns-76f75df574-z6dmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali163526916b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.749 [INFO][6650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.749 [INFO][6650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" iface="eth0" netns=""
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.749 [INFO][6650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.749 [INFO][6650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.761 [INFO][6662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.761 [INFO][6662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.761 [INFO][6662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.765 [WARNING][6662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.765 [INFO][6662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.766 [INFO][6662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.767966 containerd[1821]: 2024-12-13 01:58:45.767 [INFO][6650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.768317 containerd[1821]: time="2024-12-13T01:58:45.768004029Z" level=info msg="TearDown network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\" successfully"
Dec 13 01:58:45.768317 containerd[1821]: time="2024-12-13T01:58:45.768030974Z" level=info msg="StopPodSandbox for \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\" returns successfully"
Dec 13 01:58:45.768360 containerd[1821]: time="2024-12-13T01:58:45.768338097Z" level=info msg="RemovePodSandbox for \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\""
Dec 13 01:58:45.768381 containerd[1821]: time="2024-12-13T01:58:45.768363134Z" level=info msg="Forcibly stopping sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\""
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.791 [WARNING][6687] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ed7e9e0-19d8-475e-a8ae-40451cd7fa24", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"1c44ae2cfa2bee380994f8e60dc33ac45e24f957d52e5e1480c9bbab2ea4adec", Pod:"coredns-76f75df574-z6dmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali163526916b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.791 [INFO][6687] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.791 [INFO][6687] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" iface="eth0" netns=""
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.791 [INFO][6687] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.791 [INFO][6687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.804 [INFO][6700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.805 [INFO][6700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.805 [INFO][6700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.809 [WARNING][6700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.809 [INFO][6700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" HandleID="k8s-pod-network.a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--z6dmq-eth0"
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.810 [INFO][6700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.812063 containerd[1821]: 2024-12-13 01:58:45.811 [INFO][6687] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610"
Dec 13 01:58:45.812063 containerd[1821]: time="2024-12-13T01:58:45.812047047Z" level=info msg="TearDown network for sandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\" successfully"
Dec 13 01:58:45.813528 containerd[1821]: time="2024-12-13T01:58:45.813485219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:58:45.813528 containerd[1821]: time="2024-12-13T01:58:45.813513312Z" level=info msg="RemovePodSandbox \"a1fd8dc590c360a9406a43ee70be13a35247df82035fea90bc036e2d5d598610\" returns successfully"
Dec 13 01:58:45.813805 containerd[1821]: time="2024-12-13T01:58:45.813765706Z" level=info msg="StopPodSandbox for \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\""
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.831 [WARNING][6728] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944", Pod:"coredns-76f75df574-xqxt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e2b6cc748", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.831 [INFO][6728] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.831 [INFO][6728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" iface="eth0" netns=""
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.831 [INFO][6728] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.831 [INFO][6728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.841 [INFO][6741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.841 [INFO][6741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.841 [INFO][6741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.845 [WARNING][6741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.845 [INFO][6741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.846 [INFO][6741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.847800 containerd[1821]: 2024-12-13 01:58:45.847 [INFO][6728] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.848099 containerd[1821]: time="2024-12-13T01:58:45.847821760Z" level=info msg="TearDown network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\" successfully"
Dec 13 01:58:45.848099 containerd[1821]: time="2024-12-13T01:58:45.847838550Z" level=info msg="StopPodSandbox for \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\" returns successfully"
Dec 13 01:58:45.848099 containerd[1821]: time="2024-12-13T01:58:45.848081452Z" level=info msg="RemovePodSandbox for \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\""
Dec 13 01:58:45.848149 containerd[1821]: time="2024-12-13T01:58:45.848098938Z" level=info msg="Forcibly stopping sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\""
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.866 [WARNING][6770] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d92b1d5f-d865-4f0e-9a3d-e2c1434149e2", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-5a9deb00aa", ContainerID:"2fe7881684efe8cbd224dea1510f3beeb86cfb3273741f6d2ce2046efee60944", Pod:"coredns-76f75df574-xqxt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17e2b6cc748", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.866 [INFO][6770] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.866 [INFO][6770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" iface="eth0" netns=""
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.866 [INFO][6770] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.866 [INFO][6770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.876 [INFO][6783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.876 [INFO][6783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.876 [INFO][6783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.880 [WARNING][6783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.880 [INFO][6783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" HandleID="k8s-pod-network.cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662" Workload="ci--4081.2.1--a--5a9deb00aa-k8s-coredns--76f75df574--xqxt6-eth0"
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.881 [INFO][6783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:58:45.882671 containerd[1821]: 2024-12-13 01:58:45.881 [INFO][6770] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662"
Dec 13 01:58:45.882671 containerd[1821]: time="2024-12-13T01:58:45.882636413Z" level=info msg="TearDown network for sandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\" successfully"
Dec 13 01:58:45.883925 containerd[1821]: time="2024-12-13T01:58:45.883884393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:58:45.883925 containerd[1821]: time="2024-12-13T01:58:45.883912008Z" level=info msg="RemovePodSandbox \"cc6174eb8c5a28f176b9f563c213136d7b73e06eef87971fc64b12446d234662\" returns successfully"
Dec 13 01:58:47.168696 sshd[2860]: Connection reset by 218.92.0.204 port 26580 [preauth]
Dec 13 01:58:47.170963 systemd[1]: sshd@9-147.28.180.91:22-218.92.0.204:26580.service: Deactivated successfully.
Dec 13 01:58:51.636516 kubelet[3241]: I1213 01:58:51.636434    3241 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 02:00:07.188955 systemd[1]: Started sshd@10-147.28.180.91:22-45.148.10.240:41024.service - OpenSSH per-connection server daemon (45.148.10.240:41024).
Dec 13 02:00:07.332996 sshd[6981]: Connection closed by 45.148.10.240 port 41024
Dec 13 02:00:07.334680 systemd[1]: sshd@10-147.28.180.91:22-45.148.10.240:41024.service: Deactivated successfully.
Dec 13 02:00:16.083366 update_engine[1808]: I20241213 02:00:16.083140  1808 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 02:00:16.083366 update_engine[1808]: I20241213 02:00:16.083240  1808 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 02:00:16.084429 update_engine[1808]: I20241213 02:00:16.083644  1808 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 02:00:16.084740 update_engine[1808]: I20241213 02:00:16.084641  1808 omaha_request_params.cc:62] Current group set to stable
Dec 13 02:00:16.084937 update_engine[1808]: I20241213 02:00:16.084885  1808 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 02:00:16.084937 update_engine[1808]: I20241213 02:00:16.084916  1808 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 02:00:16.085121 update_engine[1808]: I20241213 02:00:16.084954  1808 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 02:00:16.085121 update_engine[1808]: I20241213 02:00:16.085024  1808 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 02:00:16.085281 update_engine[1808]: I20241213 02:00:16.085171  1808 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 02:00:16.085281 update_engine[1808]: I20241213 02:00:16.085201  1808 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Dec 13 02:00:16.085281 update_engine[1808]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Dec 13 02:00:16.085281 update_engine[1808]:     <os version="Chateau" platform="CoreOS" sp="4081.2.1_x86_64"></os>
Dec 13 02:00:16.085281 update_engine[1808]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.2.1" track="stable" bootid="{03212a14-e8d2-4fd9-bd24-bdca2a50ff38}" oem="packet" oemversion="0.2.2-r2" alephversion="4081.2.1" machineid="37457d91e9984c408370d165a1051cb6" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Dec 13 02:00:16.085281 update_engine[1808]:         <ping active="1"></ping>
Dec 13 02:00:16.085281 update_engine[1808]:         <updatecheck></updatecheck>
Dec 13 02:00:16.085281 update_engine[1808]:         <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event>
Dec 13 02:00:16.085281 update_engine[1808]:     </app>
Dec 13 02:00:16.085281 update_engine[1808]: </request>
Dec 13 02:00:16.085281 update_engine[1808]: I20241213 02:00:16.085217  1808 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:00:16.086202 locksmithd[1856]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 02:00:16.088399 update_engine[1808]: I20241213 02:00:16.088361  1808 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:00:16.088557 update_engine[1808]: I20241213 02:00:16.088521  1808 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:00:16.089380 update_engine[1808]: E20241213 02:00:16.089339  1808 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:00:16.089380 update_engine[1808]: I20241213 02:00:16.089370  1808 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 02:00:26.031605 update_engine[1808]: I20241213 02:00:26.031435  1808 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:00:26.032632 update_engine[1808]: I20241213 02:00:26.032058  1808 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:00:26.032632 update_engine[1808]: I20241213 02:00:26.032562  1808 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:00:26.033500 update_engine[1808]: E20241213 02:00:26.033383  1808 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:00:26.033711 update_engine[1808]: I20241213 02:00:26.033526  1808 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 02:00:36.031821 update_engine[1808]: I20241213 02:00:36.031667  1808 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:00:36.032819 update_engine[1808]: I20241213 02:00:36.032196  1808 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:00:36.032819 update_engine[1808]: I20241213 02:00:36.032743  1808 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:00:36.033649 update_engine[1808]: E20241213 02:00:36.033519  1808 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:00:36.033841 update_engine[1808]: I20241213 02:00:36.033694  1808 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 02:00:46.031022 update_engine[1808]: I20241213 02:00:46.030856  1808 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:00:46.032110 update_engine[1808]: I20241213 02:00:46.031394  1808 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:00:46.032110 update_engine[1808]: I20241213 02:00:46.031942  1808 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:00:46.032837 update_engine[1808]: E20241213 02:00:46.032720  1808 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:00:46.033056 update_engine[1808]: I20241213 02:00:46.032860  1808 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 02:00:46.033056 update_engine[1808]: I20241213 02:00:46.032891  1808 omaha_request_action.cc:617] Omaha request response:
Dec 13 02:00:46.033273 update_engine[1808]: E20241213 02:00:46.033051  1808 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 02:00:46.033273 update_engine[1808]: I20241213 02:00:46.033102  1808 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 02:00:46.033273 update_engine[1808]: I20241213 02:00:46.033120  1808 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 02:00:46.033273 update_engine[1808]: I20241213 02:00:46.033135  1808 update_attempter.cc:306] Processing Done.
Dec 13 02:00:46.033273 update_engine[1808]: E20241213 02:00:46.033165  1808 update_attempter.cc:619] Update failed.
Dec 13 02:00:46.033273 update_engine[1808]: I20241213 02:00:46.033181  1808 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 02:00:46.033273 update_engine[1808]: I20241213 02:00:46.033196  1808 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 02:00:46.033273 update_engine[1808]: I20241213 02:00:46.033213  1808 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 02:00:46.034079 update_engine[1808]: I20241213 02:00:46.033367  1808 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 02:00:46.034079 update_engine[1808]: I20241213 02:00:46.033429  1808 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 02:00:46.034079 update_engine[1808]: I20241213 02:00:46.033448  1808 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Dec 13 02:00:46.034079 update_engine[1808]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Dec 13 02:00:46.034079 update_engine[1808]:     <os version="Chateau" platform="CoreOS" sp="4081.2.1_x86_64"></os>
Dec 13 02:00:46.034079 update_engine[1808]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.2.1" track="stable" bootid="{03212a14-e8d2-4fd9-bd24-bdca2a50ff38}" oem="packet" oemversion="0.2.2-r2" alephversion="4081.2.1" machineid="37457d91e9984c408370d165a1051cb6" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Dec 13 02:00:46.034079 update_engine[1808]:         <event eventtype="3" eventresult="0" errorcode="268437456"></event>
Dec 13 02:00:46.034079 update_engine[1808]:     </app>
Dec 13 02:00:46.034079 update_engine[1808]: </request>
Dec 13 02:00:46.034079 update_engine[1808]: I20241213 02:00:46.033464  1808 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:00:46.034079 update_engine[1808]: I20241213 02:00:46.033886  1808 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:00:46.035169 locksmithd[1856]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.034286  1808 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:00:46.035924 update_engine[1808]: E20241213 02:00:46.034849  1808 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.034960  1808 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.034982  1808 omaha_request_action.cc:617] Omaha request response:
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.035001  1808 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.035015  1808 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.035030  1808 update_attempter.cc:306] Processing Done.
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.035046  1808 update_attempter.cc:310] Error event sent.
Dec 13 02:00:46.035924 update_engine[1808]: I20241213 02:00:46.035071  1808 update_check_scheduler.cc:74] Next update check in 46m26s
Dec 13 02:00:46.036780 locksmithd[1856]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
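
The update_engine block above is one complete Omaha update-check cycle: the client posts the XML request it logged, every transfer fails with "Could not resolve host: disabled" because the configured update server on this host is the literal string "disabled" (the conventional way to switch Flatcar updates off, e.g. SERVER=disabled in /etc/flatcar/update.conf), the failure is reported back through a second request as error code 268437456, and the next check is scheduled 46m26s out. A minimal sketch of posting an equivalent Omaha 3.0 request; the endpoint URL is a placeholder assumption, and the body simply mirrors the request logged above.

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Request body mirrors the one update_engine logged above.
	body := `<?xml version="1.0" encoding="UTF-8"?>
<request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
    <os version="Chateau" platform="CoreOS" sp="4081.2.1_x86_64"></os>
    <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.2.1" track="stable" board="amd64-usr" delta_okay="false">
        <ping active="1"></ping>
        <updatecheck></updatecheck>
    </app>
</request>`

	// "https://example.invalid/v1/update/" stands in for the configured
	// server; on this host that value is "disabled", so the fetcher cannot
	// even resolve a hostname, which is exactly the error in the log.
	resp, err := http.Post("https://example.invalid/v1/update/", "text/xml", strings.NewReader(body))
	if err != nil {
		fmt.Println("omaha request failed:", err) // analogous to "No HTTP response, retry N"
		return
	}
	defer resp.Body.Close()
	fmt.Println("omaha response status:", resp.Status)
}
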
Dec 13 02:02:57.580485 systemd[1]: Started sshd@11-147.28.180.91:22-218.92.0.222:50208.service - OpenSSH per-connection server daemon (218.92.0.222:50208).
Dec 13 02:03:03.285838 sshd[7369]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:05.489008 sshd[7362]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:06.712485 sshd[7391]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:09.070971 sshd[7362]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:10.266430 sshd[7393]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:12.703964 sshd[7362]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:13.084993 systemd[1]: Started sshd@12-147.28.180.91:22-218.92.0.222:54038.service - OpenSSH per-connection server daemon (218.92.0.222:54038).
Dec 13 02:03:16.881923 sshd[7362]: Connection reset by authenticating user root 218.92.0.222 port 50208 [preauth]
Dec 13 02:03:16.885412 systemd[1]: sshd@11-147.28.180.91:22-218.92.0.222:50208.service: Deactivated successfully.
Dec 13 02:03:17.779991 sshd[7423]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:19.195279 systemd[1]: Started sshd@13-147.28.180.91:22-218.92.0.232:18258.service - OpenSSH per-connection server daemon (218.92.0.232:18258).
Dec 13 02:03:19.511427 sshd[7421]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:20.297439 sshd[7429]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:22.243944 sshd[7421]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:24.582810 sshd[7430]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:03:26.745742 sshd[7427]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:03:27.518438 sshd[7431]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:03:29.424964 sshd[7427]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:03:29.715484 sshd[7432]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:31.898685 sshd[7421]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:32.077501 sshd[7421]: Received disconnect from 218.92.0.222 port 54038:11:  [preauth]
Dec 13 02:03:32.077501 sshd[7421]: Disconnected from authenticating user root 218.92.0.222 port 54038 [preauth]
Dec 13 02:03:32.081235 systemd[1]: sshd@12-147.28.180.91:22-218.92.0.222:54038.service: Deactivated successfully.
Dec 13 02:03:33.350899 systemd[1]: Started sshd@14-147.28.180.91:22-218.92.0.222:58932.service - OpenSSH per-connection server daemon (218.92.0.222:58932).
Dec 13 02:03:36.750001 sshd[7460]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:03:38.892999 sshd[7427]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:03:39.461049 sshd[7427]: Received disconnect from 218.92.0.232 port 18258:11:  [preauth]
Dec 13 02:03:39.461049 sshd[7427]: Disconnected from authenticating user root 218.92.0.232 port 18258 [preauth]
Dec 13 02:03:39.464467 systemd[1]: sshd@13-147.28.180.91:22-218.92.0.232:18258.service: Deactivated successfully.
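
The sshd traffic from 218.92.0.222 and 218.92.0.232 interleaved through this window is a routine root password-guessing run: each pam_unix(sshd:auth) authentication failure is followed by "PAM: Permission denied", then a preauth disconnect, and the client immediately reconnects from a fresh source port. A small sketch for tallying such attempts per source address from a journal dump, keyed on the rhost= field present in every failure line; the regex targets exactly the format shown here, and piping journalctl output in is an assumed usage.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches "pam_unix(sshd:auth): authentication failure; ... rhost=<ip>"
// lines in the format logged above and captures the source address.
var failRe = regexp.MustCompile(`pam_unix\(sshd:auth\): authentication failure;.*rhost=(\S+)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin) // e.g. feed it a journalctl dump of the sshd lines
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := failRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ip, n := range counts {
		fmt.Printf("%s\t%d failed attempts\n", ip, n)
	}
}
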
Dec 13 02:03:40.256187 systemd[1]: Started sshd@15-147.28.180.91:22-218.92.0.232:28232.service - OpenSSH per-connection server daemon (218.92.0.232:28232).
Dec 13 02:03:42.859466 systemd[1]: Started sshd@16-147.28.180.91:22-147.75.109.163:36710.service - OpenSSH per-connection server daemon (147.75.109.163:36710).
Dec 13 02:03:42.893844 sshd[7524]: Accepted publickey for core from 147.75.109.163 port 36710 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:42.894490 sshd[7524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:42.897132 systemd-logind[1803]: New session 12 of user core.
Dec 13 02:03:42.919015 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 02:03:43.037487 sshd[7524]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:43.039320 systemd[1]: sshd@16-147.28.180.91:22-147.75.109.163:36710.service: Deactivated successfully.
Dec 13 02:03:43.040349 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:03:43.041186 systemd-logind[1803]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:03:43.041809 systemd-logind[1803]: Removed session 12.
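
By contrast, the 147.75.109.163 connections authenticate as core with a public key that sshd logs by its SHA256 fingerprint, and each one walks the full logind lifecycle: new session N, session-N.scope started, session closed, scope deactivated, session removed. The logged fingerprint can be recomputed from an authorized_keys entry to confirm which key is in use; a short sketch using golang.org/x/crypto/ssh, where the authorized_keys path is an assumption.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed path; point it at the authorized_keys entry whose
	// fingerprint you want to compare against the "Accepted publickey" line.
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}
	pub, comment, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		log.Fatal(err)
	}
	// FingerprintSHA256 yields the same "SHA256:..." form that sshd logs.
	fmt.Println(ssh.FingerprintSHA256(pub), comment)
}
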
Dec 13 02:03:44.397204 sshd[7522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:03:45.089586 sshd[7554]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:47.031831 sshd[7498]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:03:47.332783 sshd[7438]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:47.756893 sshd[7557]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:03:48.074946 systemd[1]: Started sshd@17-147.28.180.91:22-147.75.109.163:52326.service - OpenSSH per-connection server daemon (147.75.109.163:52326).
Dec 13 02:03:48.103062 sshd[7560]: Accepted publickey for core from 147.75.109.163 port 52326 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:48.103809 sshd[7560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:48.106683 systemd-logind[1803]: New session 13 of user core.
Dec 13 02:03:48.121893 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 02:03:48.173512 sshd[7558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.222  user=root
Dec 13 02:03:48.210260 sshd[7560]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:48.211818 systemd[1]: sshd@17-147.28.180.91:22-147.75.109.163:52326.service: Deactivated successfully.
Dec 13 02:03:48.212750 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:03:48.213457 systemd-logind[1803]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:03:48.214097 systemd-logind[1803]: Removed session 13.
Dec 13 02:03:49.743970 sshd[7498]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:03:50.159937 sshd[7438]: PAM: Permission denied for root from 218.92.0.222
Dec 13 02:03:53.235876 systemd[1]: Started sshd@18-147.28.180.91:22-147.75.109.163:52336.service - OpenSSH per-connection server daemon (147.75.109.163:52336).
Dec 13 02:03:53.259908 sshd[7590]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:03:53.270858 sshd[7595]: Accepted publickey for core from 147.75.109.163 port 52336 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:53.271582 sshd[7595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:53.274050 systemd-logind[1803]: New session 14 of user core.
Dec 13 02:03:53.290069 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 02:03:53.385686 sshd[7595]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:53.398461 systemd[1]: sshd@18-147.28.180.91:22-147.75.109.163:52336.service: Deactivated successfully.
Dec 13 02:03:53.399277 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:03:53.400044 systemd-logind[1803]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:03:53.400713 systemd[1]: Started sshd@19-147.28.180.91:22-147.75.109.163:52352.service - OpenSSH per-connection server daemon (147.75.109.163:52352).
Dec 13 02:03:53.401234 systemd-logind[1803]: Removed session 14.
Dec 13 02:03:53.434030 sshd[7623]: Accepted publickey for core from 147.75.109.163 port 52352 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:53.434907 sshd[7623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:53.438009 systemd-logind[1803]: New session 15 of user core.
Dec 13 02:03:53.446861 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 02:03:53.612670 sshd[7623]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:53.623862 systemd[1]: sshd@19-147.28.180.91:22-147.75.109.163:52352.service: Deactivated successfully.
Dec 13 02:03:53.624844 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:03:53.625574 systemd-logind[1803]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:03:53.626382 systemd[1]: Started sshd@20-147.28.180.91:22-147.75.109.163:52368.service - OpenSSH per-connection server daemon (147.75.109.163:52368).
Dec 13 02:03:53.627026 systemd-logind[1803]: Removed session 15.
Dec 13 02:03:53.658091 sshd[7647]: Accepted publickey for core from 147.75.109.163 port 52368 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:53.658869 sshd[7647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:53.661570 systemd-logind[1803]: New session 16 of user core.
Dec 13 02:03:53.671772 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 02:03:53.818662 sshd[7647]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:53.820513 systemd[1]: sshd@20-147.28.180.91:22-147.75.109.163:52368.service: Deactivated successfully.
Dec 13 02:03:53.821586 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:03:53.822415 systemd-logind[1803]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:03:53.823234 systemd-logind[1803]: Removed session 16.
Dec 13 02:03:54.929950 sshd[7498]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:03:55.093244 sshd[7498]: Received disconnect from 218.92.0.232 port 28232:11:  [preauth]
Dec 13 02:03:55.093244 sshd[7498]: Disconnected from authenticating user root 218.92.0.232 port 28232 [preauth]
Dec 13 02:03:55.096855 systemd[1]: sshd@15-147.28.180.91:22-218.92.0.232:28232.service: Deactivated successfully.
Dec 13 02:03:55.265665 systemd[1]: Started sshd@21-147.28.180.91:22-218.92.0.232:23274.service - OpenSSH per-connection server daemon (218.92.0.232:23274).
Dec 13 02:03:57.778563 sshd[7678]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:03:58.845796 systemd[1]: Started sshd@22-147.28.180.91:22-147.75.109.163:54914.service - OpenSSH per-connection server daemon (147.75.109.163:54914).
Dec 13 02:03:58.873926 sshd[7680]: Accepted publickey for core from 147.75.109.163 port 54914 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:58.874682 sshd[7680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:58.877087 systemd-logind[1803]: New session 17 of user core.
Dec 13 02:03:58.877603 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 02:03:58.966074 sshd[7680]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:58.988970 systemd[1]: sshd@22-147.28.180.91:22-147.75.109.163:54914.service: Deactivated successfully.
Dec 13 02:03:58.990125 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:03:58.991121 systemd-logind[1803]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:03:58.992069 systemd[1]: Started sshd@23-147.28.180.91:22-147.75.109.163:54920.service - OpenSSH per-connection server daemon (147.75.109.163:54920).
Dec 13 02:03:58.992917 systemd-logind[1803]: Removed session 17.
Dec 13 02:03:59.024758 sshd[7705]: Accepted publickey for core from 147.75.109.163 port 54920 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:59.025466 sshd[7705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:59.028202 systemd-logind[1803]: New session 18 of user core.
Dec 13 02:03:59.036917 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 02:03:59.200766 sshd[7705]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:59.214008 systemd[1]: sshd@23-147.28.180.91:22-147.75.109.163:54920.service: Deactivated successfully.
Dec 13 02:03:59.215234 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:03:59.216227 systemd-logind[1803]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:03:59.217334 systemd[1]: Started sshd@24-147.28.180.91:22-147.75.109.163:54922.service - OpenSSH per-connection server daemon (147.75.109.163:54922).
Dec 13 02:03:59.218250 systemd-logind[1803]: Removed session 18.
Dec 13 02:03:59.281034 sshd[7731]: Accepted publickey for core from 147.75.109.163 port 54922 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:03:59.282159 sshd[7731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:03:59.286034 systemd-logind[1803]: New session 19 of user core.
Dec 13 02:03:59.294859 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 02:03:59.805093 sshd[7676]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:04:00.128353 sshd[7755]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:04:00.428328 sshd[7731]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:00.439405 systemd[1]: sshd@24-147.28.180.91:22-147.75.109.163:54922.service: Deactivated successfully.
Dec 13 02:04:00.440322 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:04:00.441137 systemd-logind[1803]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:04:00.441869 systemd[1]: Started sshd@25-147.28.180.91:22-147.75.109.163:54932.service - OpenSSH per-connection server daemon (147.75.109.163:54932).
Dec 13 02:04:00.442372 systemd-logind[1803]: Removed session 19.
Dec 13 02:04:00.474748 sshd[7764]: Accepted publickey for core from 147.75.109.163 port 54932 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:04:00.475560 sshd[7764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:04:00.478567 systemd-logind[1803]: New session 20 of user core.
Dec 13 02:04:00.495892 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 02:04:00.734909 sshd[7764]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:00.754847 systemd[1]: sshd@25-147.28.180.91:22-147.75.109.163:54932.service: Deactivated successfully.
Dec 13 02:04:00.756857 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:04:00.758647 systemd-logind[1803]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:04:00.760386 systemd[1]: Started sshd@26-147.28.180.91:22-147.75.109.163:54946.service - OpenSSH per-connection server daemon (147.75.109.163:54946).
Dec 13 02:04:00.761891 systemd-logind[1803]: Removed session 20.
Dec 13 02:04:00.828588 sshd[7791]: Accepted publickey for core from 147.75.109.163 port 54946 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:04:00.831217 sshd[7791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:04:00.839130 systemd-logind[1803]: New session 21 of user core.
Dec 13 02:04:00.852075 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 02:04:00.982581 sshd[7791]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:00.984256 systemd[1]: sshd@26-147.28.180.91:22-147.75.109.163:54946.service: Deactivated successfully.
Dec 13 02:04:00.985175 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:04:00.985950 systemd-logind[1803]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:04:00.986493 systemd-logind[1803]: Removed session 21.
Dec 13 02:04:02.095190 sshd[7676]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:04:02.425195 sshd[7818]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.232  user=root
Dec 13 02:04:04.135944 sshd[7676]: PAM: Permission denied for root from 218.92.0.232
Dec 13 02:04:04.300009 sshd[7676]: Received disconnect from 218.92.0.232 port 23274:11:  [preauth]
Dec 13 02:04:04.300009 sshd[7676]: Disconnected from authenticating user root 218.92.0.232 port 23274 [preauth]
Dec 13 02:04:04.303529 systemd[1]: sshd@21-147.28.180.91:22-218.92.0.232:23274.service: Deactivated successfully.
Dec 13 02:04:06.004317 systemd[1]: Started sshd@27-147.28.180.91:22-147.75.109.163:54960.service - OpenSSH per-connection server daemon (147.75.109.163:54960).
Dec 13 02:04:06.034952 sshd[7845]: Accepted publickey for core from 147.75.109.163 port 54960 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:04:06.035818 sshd[7845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:04:06.038640 systemd-logind[1803]: New session 22 of user core.
Dec 13 02:04:06.051861 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 02:04:06.138195 sshd[7845]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:06.139782 systemd[1]: sshd@27-147.28.180.91:22-147.75.109.163:54960.service: Deactivated successfully.
Dec 13 02:04:06.140718 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:04:06.141462 systemd-logind[1803]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:04:06.142118 systemd-logind[1803]: Removed session 22.
Dec 13 02:04:11.150598 systemd[1]: Started sshd@28-147.28.180.91:22-147.75.109.163:56090.service - OpenSSH per-connection server daemon (147.75.109.163:56090).
Dec 13 02:04:11.183176 sshd[7897]: Accepted publickey for core from 147.75.109.163 port 56090 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:04:11.183899 sshd[7897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:04:11.186456 systemd-logind[1803]: New session 23 of user core.
Dec 13 02:04:11.204780 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 02:04:11.292981 sshd[7897]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:11.294713 systemd[1]: sshd@28-147.28.180.91:22-147.75.109.163:56090.service: Deactivated successfully.
Dec 13 02:04:11.295793 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 02:04:11.296594 systemd-logind[1803]: Session 23 logged out. Waiting for processes to exit.
Dec 13 02:04:11.297302 systemd-logind[1803]: Removed session 23.
Dec 13 02:04:16.314679 systemd[1]: Started sshd@29-147.28.180.91:22-147.75.109.163:52894.service - OpenSSH per-connection server daemon (147.75.109.163:52894).
Dec 13 02:04:16.345430 sshd[7923]: Accepted publickey for core from 147.75.109.163 port 52894 ssh2: RSA SHA256:2oWIz7bHWycO9stGCzOz9TlufWjMlk3Pw44o8T4kFZ0
Dec 13 02:04:16.346145 sshd[7923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:04:16.348685 systemd-logind[1803]: New session 24 of user core.
Dec 13 02:04:16.359900 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 02:04:16.446327 sshd[7923]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:16.448049 systemd[1]: sshd@29-147.28.180.91:22-147.75.109.163:52894.service: Deactivated successfully.
Dec 13 02:04:16.449022 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 02:04:16.449756 systemd-logind[1803]: Session 24 logged out. Waiting for processes to exit.
Dec 13 02:04:16.450267 systemd-logind[1803]: Removed session 24.