Dec 13 03:36:19.563235 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 03:36:19.563248 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:36:19.563255 kernel: BIOS-provided physical RAM map:
Dec 13 03:36:19.563259 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 03:36:19.563262 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 03:36:19.563266 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 03:36:19.563271 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 03:36:19.563275 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 03:36:19.563279 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000008266efff] usable
Dec 13 03:36:19.563282 kernel: BIOS-e820: [mem 0x000000008266f000-0x000000008266ffff] ACPI NVS
Dec 13 03:36:19.563287 kernel: BIOS-e820: [mem 0x0000000082670000-0x0000000082670fff] reserved
Dec 13 03:36:19.563291 kernel: BIOS-e820: [mem 0x0000000082671000-0x000000008afccfff] usable
Dec 13 03:36:19.563295 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 03:36:19.563299 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 03:36:19.563304 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 03:36:19.563309 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 03:36:19.563313 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 03:36:19.563317 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 03:36:19.563321 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 03:36:19.563326 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 03:36:19.563330 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 03:36:19.563334 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 03:36:19.563338 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 03:36:19.563342 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 03:36:19.563347 kernel: NX (Execute Disable) protection: active
Dec 13 03:36:19.563354 kernel: SMBIOS 3.2.1 present.
Dec 13 03:36:19.563359 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 03:36:19.563363 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 03:36:19.563367 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 03:36:19.563372 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 03:36:19.563376 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 03:36:19.563381 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 03:36:19.563385 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 03:36:19.563390 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 03:36:19.563394 kernel: Using GB pages for direct mapping
Dec 13 03:36:19.563414 kernel: ACPI: Early table checksum verification disabled
Dec 13 03:36:19.563419 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 03:36:19.563423 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 03:36:19.563428 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Dec 13 03:36:19.563432 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 03:36:19.563438 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 03:36:19.563443 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Dec 13 03:36:19.563448 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Dec 13 03:36:19.563453 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 03:36:19.563457 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 03:36:19.563462 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 03:36:19.563467 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 03:36:19.563471 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 03:36:19.563476 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 03:36:19.563480 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:36:19.563486 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 03:36:19.563491 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 03:36:19.563495 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:36:19.563500 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:36:19.563504 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 03:36:19.563509 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 03:36:19.563514 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:36:19.563518 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:36:19.563523 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 03:36:19.563528 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Dec 13 03:36:19.563533 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 03:36:19.563537 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 03:36:19.563542 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 03:36:19.563546 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Dec 13 03:36:19.563551 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 03:36:19.563555 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 03:36:19.563560 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 03:36:19.563566 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 03:36:19.563570 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 03:36:19.563575 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 03:36:19.563580 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 03:36:19.563584 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 03:36:19.563589 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 03:36:19.563593 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 03:36:19.563598 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 03:36:19.563603 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 03:36:19.563608 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 03:36:19.563612 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 03:36:19.563617 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 03:36:19.563622 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 03:36:19.563627 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 03:36:19.563631 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 03:36:19.563636 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 03:36:19.563640 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 03:36:19.563645 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 03:36:19.563650 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 03:36:19.563654 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 03:36:19.563659 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 03:36:19.563664 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 03:36:19.563668 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 03:36:19.563673 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 03:36:19.563677 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 03:36:19.563682 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 03:36:19.563687 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 03:36:19.563692 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 03:36:19.563696 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 03:36:19.563701 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 03:36:19.563705 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 03:36:19.563710 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 03:36:19.563714 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 03:36:19.563719 kernel: No NUMA configuration found
Dec 13 03:36:19.563724 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 03:36:19.563729 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 03:36:19.563734 kernel: Zone ranges:
Dec 13 03:36:19.563738 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 03:36:19.563743 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 03:36:19.563747 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Dec 13 03:36:19.563752 kernel: Movable zone start for each node
Dec 13 03:36:19.563756 kernel: Early memory node ranges
Dec 13 03:36:19.563761 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 03:36:19.563766 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 03:36:19.563770 kernel: node 0: [mem 0x0000000040400000-0x000000008266efff]
Dec 13 03:36:19.563775 kernel: node 0: [mem 0x0000000082671000-0x000000008afccfff]
Dec 13 03:36:19.563780 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 03:36:19.563785 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 03:36:19.563789 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 03:36:19.563794 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 03:36:19.563798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 03:36:19.563806 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 03:36:19.563812 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 03:36:19.563817 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 03:36:19.563822 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 03:36:19.563827 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 03:36:19.563832 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 03:36:19.563837 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 03:36:19.563842 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 03:36:19.563847 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 03:36:19.563852 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 03:36:19.563857 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 03:36:19.563862 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 03:36:19.563867 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 03:36:19.563872 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 03:36:19.563877 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 03:36:19.563882 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 03:36:19.563886 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 03:36:19.563891 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 03:36:19.563896 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 03:36:19.563901 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 03:36:19.563907 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 03:36:19.563911 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 03:36:19.563916 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 03:36:19.563921 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 03:36:19.563926 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 03:36:19.563931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 03:36:19.563936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 03:36:19.563941 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 03:36:19.563946 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 03:36:19.563951 kernel: TSC deadline timer available
Dec 13 03:36:19.563956 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 03:36:19.563961 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 03:36:19.563966 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 03:36:19.563971 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 03:36:19.563976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 03:36:19.563981 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 03:36:19.563986 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 03:36:19.563990 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 03:36:19.563996 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Dec 13 03:36:19.564001 kernel: Policy zone: Normal
Dec 13 03:36:19.564006 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:36:19.564012 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 03:36:19.564016 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 03:36:19.564021 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 03:36:19.564026 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 03:36:19.564032 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 730116K reserved, 0K cma-reserved)
Dec 13 03:36:19.564037 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 03:36:19.564042 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 03:36:19.564047 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 03:36:19.564052 kernel: rcu: Hierarchical RCU implementation.
Dec 13 03:36:19.564057 kernel: rcu: RCU event tracing is enabled.
Dec 13 03:36:19.564062 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 03:36:19.564067 kernel: Rude variant of Tasks RCU enabled.
Dec 13 03:36:19.564072 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 03:36:19.564078 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 03:36:19.564083 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 03:36:19.564088 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 03:36:19.564092 kernel: random: crng init done
Dec 13 03:36:19.564097 kernel: Console: colour dummy device 80x25
Dec 13 03:36:19.564102 kernel: printk: console [tty0] enabled
Dec 13 03:36:19.564107 kernel: printk: console [ttyS1] enabled
Dec 13 03:36:19.564112 kernel: ACPI: Core revision 20210730
Dec 13 03:36:19.564117 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 03:36:19.564122 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 03:36:19.564127 kernel: DMAR: Host address width 39
Dec 13 03:36:19.564132 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 03:36:19.564137 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 03:36:19.564142 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 03:36:19.564147 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Dec 13 03:36:19.564152 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 03:36:19.564157 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 03:36:19.564162 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 03:36:19.564167 kernel: x2apic enabled
Dec 13 03:36:19.564172 kernel: Switched APIC routing to cluster x2apic.
Dec 13 03:36:19.564177 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 03:36:19.564182 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 03:36:19.564187 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 03:36:19.564192 kernel: process: using mwait in idle threads
Dec 13 03:36:19.564197 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 03:36:19.564202 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 03:36:19.564207 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 03:36:19.564211 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 03:36:19.564217 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 03:36:19.564222 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 03:36:19.564227 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 03:36:19.564232 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 03:36:19.564236 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 03:36:19.564241 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 03:36:19.564246 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 03:36:19.564251 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 03:36:19.564256 kernel: TAA: Mitigation: TSX disabled
Dec 13 03:36:19.564261 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 03:36:19.564265 kernel: SRBDS: Mitigation: Microcode
Dec 13 03:36:19.564271 kernel: GDS: Vulnerable: No microcode
Dec 13 03:36:19.564276 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 03:36:19.564281 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 03:36:19.564286 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 03:36:19.564290 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 03:36:19.564295 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 03:36:19.564300 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 03:36:19.564305 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 03:36:19.564310 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 03:36:19.564314 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 03:36:19.564319 kernel: Freeing SMP alternatives memory: 32K
Dec 13 03:36:19.564325 kernel: pid_max: default: 32768 minimum: 301
Dec 13 03:36:19.564329 kernel: LSM: Security Framework initializing
Dec 13 03:36:19.564334 kernel: SELinux: Initializing.
Dec 13 03:36:19.564339 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 03:36:19.564344 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 03:36:19.564349 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 03:36:19.564375 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 03:36:19.564380 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 03:36:19.564385 kernel: ... version: 4
Dec 13 03:36:19.564390 kernel: ... bit width: 48
Dec 13 03:36:19.564395 kernel: ... generic registers: 4
Dec 13 03:36:19.564401 kernel: ... value mask: 0000ffffffffffff
Dec 13 03:36:19.564406 kernel: ... max period: 00007fffffffffff
Dec 13 03:36:19.564411 kernel: ... fixed-purpose events: 3
Dec 13 03:36:19.564416 kernel: ... event mask: 000000070000000f
Dec 13 03:36:19.564421 kernel: signal: max sigframe size: 2032
Dec 13 03:36:19.564426 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 03:36:19.564431 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 03:36:19.564436 kernel: smp: Bringing up secondary CPUs ...
Dec 13 03:36:19.564441 kernel: x86: Booting SMP configuration:
Dec 13 03:36:19.564446 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 03:36:19.564452 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 03:36:19.564457 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 03:36:19.564462 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 03:36:19.564467 kernel: smpboot: Max logical packages: 1
Dec 13 03:36:19.564472 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 03:36:19.564477 kernel: devtmpfs: initialized
Dec 13 03:36:19.564481 kernel: x86/mm: Memory block size: 128MB
Dec 13 03:36:19.564487 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8266f000-0x8266ffff] (4096 bytes)
Dec 13 03:36:19.564492 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 03:36:19.564497 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 03:36:19.564502 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 03:36:19.564507 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 03:36:19.564512 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 03:36:19.564517 kernel: audit: initializing netlink subsys (disabled)
Dec 13 03:36:19.564522 kernel: audit: type=2000 audit(1734060974.041:1): state=initialized audit_enabled=0 res=1
Dec 13 03:36:19.564527 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 03:36:19.564532 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 03:36:19.564538 kernel: cpuidle: using governor menu
Dec 13 03:36:19.564543 kernel: ACPI: bus type PCI registered
Dec 13 03:36:19.564548 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 03:36:19.564553 kernel: dca service started, version 1.12.1
Dec 13 03:36:19.564558 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 03:36:19.564563 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 03:36:19.564568 kernel: PCI: Using configuration type 1 for base access
Dec 13 03:36:19.564573 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 03:36:19.564578 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 03:36:19.564583 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 03:36:19.564588 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 03:36:19.564593 kernel: ACPI: Added _OSI(Module Device)
Dec 13 03:36:19.564598 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 03:36:19.564603 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 03:36:19.564608 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 03:36:19.564613 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 03:36:19.564618 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 03:36:19.564623 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 03:36:19.564629 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 03:36:19.564634 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:36:19.564639 kernel: ACPI: SSDT 0xFFFF9C3E00219900 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 03:36:19.564644 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 03:36:19.564649 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:36:19.564654 kernel: ACPI: SSDT 0xFFFF9C3E01AE5000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 03:36:19.564659 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:36:19.564664 kernel: ACPI: SSDT 0xFFFF9C3E01A5B000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 03:36:19.564669 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:36:19.564674 kernel: ACPI: SSDT 0xFFFF9C3E01B4D800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 03:36:19.564679 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:36:19.564684 kernel: ACPI: SSDT 0xFFFF9C3E0014B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 03:36:19.564689 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:36:19.564694 kernel: ACPI: SSDT 0xFFFF9C3E01AE1400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 03:36:19.564699 kernel: ACPI: Interpreter enabled
Dec 13 03:36:19.564704 kernel: ACPI: PM: (supports S0 S5)
Dec 13 03:36:19.564709 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 03:36:19.564714 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 03:36:19.564720 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 03:36:19.564725 kernel: HEST: Table parsing has been initialized.
Dec 13 03:36:19.564730 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 03:36:19.564735 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 03:36:19.564740 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 03:36:19.564745 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 03:36:19.564750 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 03:36:19.564755 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 03:36:19.564760 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 03:36:19.564764 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 03:36:19.564770 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 03:36:19.564775 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 03:36:19.564780 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 03:36:19.564785 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 03:36:19.564790 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 03:36:19.564795 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 03:36:19.564800 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 03:36:19.564864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 03:36:19.564915 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 03:36:19.564958 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 03:36:19.564965 kernel: PCI host bridge to bus 0000:00
Dec 13 03:36:19.565010 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 03:36:19.565049 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 03:36:19.565087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 03:36:19.565125 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 03:36:19.565164 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 03:36:19.565202 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 03:36:19.565254 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 03:36:19.565305 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 03:36:19.565350 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.565401 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 03:36:19.565447 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 03:36:19.565494 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 03:36:19.565539 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 03:36:19.565587 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 03:36:19.565630 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 03:36:19.565675 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 03:36:19.565723 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 03:36:19.565767 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 03:36:19.565809 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 03:36:19.565856 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 03:36:19.565899 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:36:19.565948 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 03:36:19.565992 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:36:19.566038 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 03:36:19.566082 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 03:36:19.566124 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 03:36:19.566170 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 03:36:19.566212 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 03:36:19.566255 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 03:36:19.566303 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 03:36:19.566347 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 03:36:19.566391 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 03:36:19.566437 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 03:36:19.566481 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 03:36:19.566526 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 03:36:19.566574 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Dec 13 03:36:19.566619 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Dec 13 03:36:19.566662 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Dec 13 03:36:19.566704 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 03:36:19.566746 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 03:36:19.566794 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 03:36:19.566837 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.566885 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 03:36:19.566930 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.566979 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 03:36:19.567023 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.567070 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 03:36:19.567113 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.567163 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 03:36:19.567209 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.567257 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 03:36:19.567301 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:36:19.567349 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 03:36:19.567401 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 03:36:19.567444 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 03:36:19.567487 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 03:36:19.567535 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 03:36:19.567598 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 03:36:19.567649 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 03:36:19.567693 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 03:36:19.567737 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 03:36:19.567781 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 03:36:19.567825 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 03:36:19.567867 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 03:36:19.567916 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 03:36:19.567962 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 03:36:19.568006 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Dec 13 03:36:19.568049 kernel: pci 0000:01:00.1: PME# supported from D3cold
Dec 13 03:36:19.568093 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 03:36:19.568137 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 03:36:19.568240 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 03:36:19.568285 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 03:36:19.568329 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 03:36:19.568376 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 03:36:19.568426 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Dec 13 03:36:19.568471 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Dec 13 03:36:19.568516 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Dec 13 03:36:19.568561 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Dec 13 03:36:19.568605 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Dec 13 03:36:19.568650 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.568695 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 03:36:19.568738 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 03:36:19.568781 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 03:36:19.568829 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 03:36:19.568875 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 03:36:19.568919 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Dec 13 03:36:19.568964 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Dec 13 03:36:19.569009 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Dec 13 03:36:19.569054 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 03:36:19.569098 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 03:36:19.569141 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 03:36:19.569184 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 03:36:19.569227 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 03:36:19.569275 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 03:36:19.569321 kernel: pci 0000:06:00.0: enabling Extended Tags
Dec 13 03:36:19.569371 kernel: pci 0000:06:00.0: supports D1 D2
Dec 13 03:36:19.569416 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 03:36:19.569459 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 03:36:19.569503 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 03:36:19.569546 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 03:36:19.569595 kernel: pci_bus 0000:07: extended config space not accessible
Dec 13 03:36:19.569647 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 03:36:19.569696 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Dec 13 03:36:19.569744 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Dec 13 03:36:19.569789 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Dec 13 03:36:19.569836 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 03:36:19.569882 kernel: pci 0000:07:00.0: supports D1 D2
Dec 13 03:36:19.569930 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 03:36:19.569976 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 03:36:19.570023 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 03:36:19.570068 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 03:36:19.570076 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 13 03:36:19.570081 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 13 03:36:19.570087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 13 03:36:19.570092 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 13 03:36:19.570097 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 13 03:36:19.570103 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 13 03:36:19.570108 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 13 03:36:19.570115 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 13 03:36:19.570120 kernel: iommu: Default domain type: Translated
Dec 13 03:36:19.570126 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 03:36:19.570170 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Dec 13 03:36:19.570218 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 03:36:19.570264 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Dec 13 03:36:19.570272 kernel: vgaarb: loaded
Dec 13 03:36:19.570279 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 03:36:19.570285 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 03:36:19.570291 kernel: PTP clock support registered
Dec 13 03:36:19.570296 kernel: PCI: Using ACPI for IRQ routing
Dec 13 03:36:19.570301 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 03:36:19.570307 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Dec 13 03:36:19.570312 kernel: e820: reserve RAM buffer [mem 0x8266f000-0x83ffffff]
Dec 13 03:36:19.570317 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Dec 13 03:36:19.570322 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Dec 13 03:36:19.570327 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Dec 13 03:36:19.570333 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Dec 13 03:36:19.570339 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 03:36:19.570344 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 03:36:19.570349 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 03:36:19.570357 kernel: pnp: PnP ACPI init
Dec 13 03:36:19.570402 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Dec 13 03:36:19.570445 kernel: pnp 00:02: [dma 0 disabled]
Dec 13 03:36:19.570489 kernel: pnp 00:03: [dma 0 disabled]
Dec 13 03:36:19.570535 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Dec 13 03:36:19.570575 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Dec 13 03:36:19.570617 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Dec 13 03:36:19.570659 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Dec 13 03:36:19.570698 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Dec 13 03:36:19.570737 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Dec 13 03:36:19.570778 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Dec 13 03:36:19.570816 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec 13 03:36:19.570854 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec 13 03:36:19.570892 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec 13 03:36:19.570931 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec 13 03:36:19.570973 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Dec 13 03:36:19.571011 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Dec 13 03:36:19.571052 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Dec 13 03:36:19.571090 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Dec 13 03:36:19.571129 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Dec 13 03:36:19.571167 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Dec 13 03:36:19.571206 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Dec 13 03:36:19.571249 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Dec 13 03:36:19.571257 kernel: pnp: PnP ACPI: found 10 devices
Dec 13 03:36:19.571264 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 03:36:19.571269 kernel: NET: Registered PF_INET protocol family
Dec 13 03:36:19.571275 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 03:36:19.571280 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 03:36:19.571286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 03:36:19.571291 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 03:36:19.571296 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 03:36:19.571302 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Dec 13 03:36:19.571307 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 03:36:19.571314 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 03:36:19.571319 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 03:36:19.571324 kernel: NET: Registered PF_XDP protocol family
Dec 13 03:36:19.571371 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Dec 13 03:36:19.571415 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Dec 13 03:36:19.571458 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Dec 13 03:36:19.571504 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 03:36:19.571549 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 03:36:19.571596 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 03:36:19.571640 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 03:36:19.571684 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 03:36:19.571728 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 03:36:19.571771 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 03:36:19.571814 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 03:36:19.571858 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 03:36:19.571902 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 03:36:19.571945 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 03:36:19.571988 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 03:36:19.572031 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 03:36:19.572075 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 03:36:19.572117 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 03:36:19.572165 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 03:36:19.572209 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 03:36:19.572253 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 03:36:19.572297 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 03:36:19.572339 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 03:36:19.572386 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 03:36:19.572425 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 03:36:19.572464 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 03:36:19.572501 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 03:36:19.572540 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 03:36:19.572578 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Dec 13 03:36:19.572616 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Dec 13 03:36:19.572661 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Dec 13 03:36:19.572702 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 03:36:19.572749 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Dec 13 03:36:19.572790 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Dec 13 03:36:19.572833 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 03:36:19.572874 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Dec 13 03:36:19.572917 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Dec 13 03:36:19.572958 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 03:36:19.573000 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Dec 13 03:36:19.573042 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 03:36:19.573051 kernel: PCI: CLS 64 bytes, default 64
Dec 13 03:36:19.573057 kernel: DMAR: No ATSR found
Dec 13 03:36:19.573062 kernel: DMAR: No SATC found
Dec 13 03:36:19.573067 kernel: DMAR: dmar0: Using Queued invalidation
Dec 13 03:36:19.573111 kernel: pci 0000:00:00.0: Adding to iommu group 0
Dec 13 03:36:19.573156 kernel: pci 0000:00:01.0: Adding to iommu group 1
Dec 13 03:36:19.573200 kernel: pci 0000:00:08.0: Adding to iommu group 2
Dec 13 03:36:19.573243 kernel: pci 0000:00:12.0: Adding to iommu group 3
Dec 13 03:36:19.573289 kernel: pci 0000:00:14.0: Adding to iommu group 4
Dec 13 03:36:19.573331 kernel: pci 0000:00:14.2: Adding to iommu group 4
Dec 13 03:36:19.573377 kernel: pci 0000:00:15.0: Adding to iommu group 5
Dec 13 03:36:19.573419 kernel: pci 0000:00:15.1: Adding to iommu group 5
Dec 13 03:36:19.573462 kernel: pci 0000:00:16.0: Adding to iommu group 6
Dec 13 03:36:19.573504 kernel: pci 0000:00:16.1: Adding to iommu group 6
Dec 13 03:36:19.573547 kernel: pci 0000:00:16.4: Adding to iommu group 6
Dec 13 03:36:19.573590 kernel: pci 0000:00:17.0: Adding to iommu group 7
Dec 13 03:36:19.573633 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Dec 13 03:36:19.573680 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Dec 13 03:36:19.573723 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Dec 13 03:36:19.573767 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Dec 13 03:36:19.573809 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Dec 13 03:36:19.573853 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Dec 13 03:36:19.573895 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Dec 13 03:36:19.573938 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Dec 13 03:36:19.573981 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Dec 13 03:36:19.574027 kernel: pci 0000:01:00.0: Adding to iommu group 1
Dec 13 03:36:19.574074 kernel: pci 0000:01:00.1: Adding to iommu group 1
Dec 13 03:36:19.574118 kernel: pci 0000:03:00.0: Adding to iommu group 15
Dec 13 03:36:19.574163 kernel: pci 0000:04:00.0: Adding to iommu group 16
Dec 13 03:36:19.574208 kernel: pci 0000:06:00.0: Adding to iommu group 17
Dec 13 03:36:19.574255 kernel: pci 0000:07:00.0: Adding to iommu group 17
Dec 13 03:36:19.574262 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Dec 13 03:36:19.574268 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 03:36:19.574275 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Dec 13 03:36:19.574281 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Dec 13 03:36:19.574286 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Dec 13 03:36:19.574291 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Dec 13 03:36:19.574297 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Dec 13 03:36:19.574342 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Dec 13 03:36:19.574353 kernel: Initialise system trusted keyrings
Dec 13 03:36:19.574358 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Dec 13 03:36:19.574365 kernel: Key type asymmetric registered
Dec 13 03:36:19.574370 kernel: Asymmetric key parser 'x509' registered
Dec 13 03:36:19.574375 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 03:36:19.574381 kernel: io scheduler mq-deadline registered
Dec 13 03:36:19.574386 kernel: io scheduler kyber registered
Dec 13 03:36:19.574391 kernel: io scheduler bfq registered
Dec 13 03:36:19.574436 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Dec 13 03:36:19.574480 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Dec 13 03:36:19.574523 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Dec 13 03:36:19.574569 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Dec 13 03:36:19.574612 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Dec 13 03:36:19.574657 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Dec 13 03:36:19.574704 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Dec 13 03:36:19.574712 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Dec 13 03:36:19.574718 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Dec 13 03:36:19.574723 kernel: pstore: Registered erst as persistent store backend
Dec 13 03:36:19.574730 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 03:36:19.574735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 03:36:19.574741 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 03:36:19.574746 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 03:36:19.574751 kernel: hpet_acpi_add: no address or irqs in _CRS
Dec 13 03:36:19.574797 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Dec 13 03:36:19.574805 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 03:36:19.574844 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Dec 13 03:36:19.574886 kernel: rtc_cmos rtc_cmos: registered as rtc0
Dec 13 03:36:19.574926 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T03:36:18 UTC (1734060978)
Dec 13 03:36:19.574965 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Dec 13 03:36:19.574972 kernel: fail to initialize ptp_kvm
Dec 13 03:36:19.574978 kernel: intel_pstate: Intel P-state driver initializing
Dec 13 03:36:19.574983 kernel: intel_pstate: Disabling energy efficiency optimization
Dec 13 03:36:19.574989 kernel: intel_pstate: HWP enabled
Dec 13 03:36:19.574994 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Dec 13 03:36:19.575000 kernel: vesafb: scrolling: redraw
Dec 13 03:36:19.575006 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Dec 13 03:36:19.575012 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000c3f29b8f, using 768k, total 768k
Dec 13 03:36:19.575017 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 03:36:19.575022 kernel: fb0: VESA VGA frame buffer device
Dec 13 03:36:19.575028 kernel: NET: Registered PF_INET6 protocol family
Dec 13 03:36:19.575033 kernel: Segment Routing with IPv6
Dec 13 03:36:19.575038 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 03:36:19.575044 kernel: NET: Registered PF_PACKET protocol family
Dec 13 03:36:19.575049 kernel: Key type dns_resolver registered
Dec 13 03:36:19.575055 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Dec 13 03:36:19.575060 kernel: microcode: Microcode Update Driver: v2.2.
Dec 13 03:36:19.575066 kernel: IPI shorthand broadcast: enabled
Dec 13 03:36:19.575071 kernel: sched_clock: Marking stable (1679641949, 1339853755)->(4463999439, -1444503735)
Dec 13 03:36:19.575076 kernel: registered taskstats version 1
Dec 13 03:36:19.575082 kernel: Loading compiled-in X.509 certificates
Dec 13 03:36:19.575087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 03:36:19.575092 kernel: Key type .fscrypt registered
Dec 13 03:36:19.575098 kernel: Key type fscrypt-provisioning registered
Dec 13 03:36:19.575104 kernel: pstore: Using crash dump compression: deflate
Dec 13 03:36:19.575109 kernel: ima: Allocated hash algorithm: sha1
Dec 13 03:36:19.575115 kernel: ima: No architecture policies found
Dec 13 03:36:19.575120 kernel: clk: Disabling unused clocks
Dec 13 03:36:19.575125 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 03:36:19.575131 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 03:36:19.575136 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 03:36:19.575142 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 03:36:19.575147 kernel: Run /init as init process
Dec 13 03:36:19.575153 kernel: with arguments:
Dec 13 03:36:19.575159 kernel: /init
Dec 13 03:36:19.575164 kernel: with environment:
Dec 13 03:36:19.575169 kernel: HOME=/
Dec 13 03:36:19.575174 kernel: TERM=linux
Dec 13 03:36:19.575179 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 03:36:19.575186 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:36:19.575193 systemd[1]: Detected architecture x86-64.
Dec 13 03:36:19.575199 systemd[1]: Running in initrd.
Dec 13 03:36:19.575205 systemd[1]: No hostname configured, using default hostname.
Dec 13 03:36:19.575210 systemd[1]: Hostname set to .
Dec 13 03:36:19.575215 systemd[1]: Initializing machine ID from random generator.
Dec 13 03:36:19.575221 systemd[1]: Queued start job for default target initrd.target.
Dec 13 03:36:19.575227 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 03:36:19.575232 systemd[1]: Reached target cryptsetup.target.
Dec 13 03:36:19.575238 systemd[1]: Reached target paths.target.
Dec 13 03:36:19.575244 systemd[1]: Reached target slices.target.
Dec 13 03:36:19.575249 systemd[1]: Reached target swap.target.
Dec 13 03:36:19.575255 systemd[1]: Reached target timers.target.
Dec 13 03:36:19.575260 systemd[1]: Listening on iscsid.socket.
Dec 13 03:36:19.575266 systemd[1]: Listening on iscsiuio.socket.
Dec 13 03:36:19.575272 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 03:36:19.575277 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 03:36:19.575283 systemd[1]: Listening on systemd-journald.socket.
Dec 13 03:36:19.575289 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Dec 13 03:36:19.575295 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Dec 13 03:36:19.575300 kernel: clocksource: Switched to clocksource tsc
Dec 13 03:36:19.575305 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 03:36:19.575311 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 03:36:19.575317 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 03:36:19.575322 systemd[1]: Reached target sockets.target.
Dec 13 03:36:19.575328 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 03:36:19.575334 systemd[1]: Finished network-cleanup.service.
Dec 13 03:36:19.575339 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 03:36:19.575345 systemd[1]: Starting systemd-journald.service...
Dec 13 03:36:19.575352 systemd[1]: Starting systemd-modules-load.service...
Dec 13 03:36:19.575360 systemd-journald[267]: Journal started
Dec 13 03:36:19.575387 systemd-journald[267]: Runtime Journal (/run/log/journal/7993894e29ec4af7924ec1da20819f5c) is 8.0M, max 640.1M, 632.1M free.
Dec 13 03:36:19.576492 systemd-modules-load[268]: Inserted module 'overlay'
Dec 13 03:36:19.582000 audit: BPF prog-id=6 op=LOAD
Dec 13 03:36:19.600404 kernel: audit: type=1334 audit(1734060979.582:2): prog-id=6 op=LOAD
Dec 13 03:36:19.600438 systemd[1]: Starting systemd-resolved.service...
Dec 13 03:36:19.649407 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 03:36:19.649422 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 03:36:19.682357 kernel: Bridge firewalling registered
Dec 13 03:36:19.682387 systemd[1]: Started systemd-journald.service.
Dec 13 03:36:19.697228 systemd-modules-load[268]: Inserted module 'br_netfilter'
Dec 13 03:36:19.745996 kernel: audit: type=1130 audit(1734060979.705:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.704725 systemd-resolved[270]: Positive Trust Anchors:
Dec 13 03:36:19.809398 kernel: SCSI subsystem initialized
Dec 13 03:36:19.809409 kernel: audit: type=1130 audit(1734060979.757:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.704731 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 03:36:19.925432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 03:36:19.925445 kernel: audit: type=1130 audit(1734060979.832:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.925452 kernel: device-mapper: uevent: version 1.0.3
Dec 13 03:36:19.925459 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 03:36:19.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.704750 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 03:36:19.998610 kernel: audit: type=1130 audit(1734060979.933:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.705630 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 03:36:20.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.706305 systemd-resolved[270]: Defaulting to hostname 'linux'.
Dec 13 03:36:20.107104 kernel: audit: type=1130 audit(1734060980.007:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.107116 kernel: audit: type=1130 audit(1734060980.060:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:19.758514 systemd[1]: Started systemd-resolved.service.
Dec 13 03:36:19.832520 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 03:36:19.926283 systemd-modules-load[268]: Inserted module 'dm_multipath'
Dec 13 03:36:19.933669 systemd[1]: Finished systemd-modules-load.service.
Dec 13 03:36:20.007705 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 03:36:20.060656 systemd[1]: Reached target nss-lookup.target.
Dec 13 03:36:20.115978 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 03:36:20.136936 systemd[1]: Starting systemd-sysctl.service...
Dec 13 03:36:20.137226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 03:36:20.140085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 03:36:20.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.140951 systemd[1]: Finished systemd-sysctl.service.
Dec 13 03:36:20.189559 kernel: audit: type=1130 audit(1734060980.138:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.202708 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 03:36:20.269391 kernel: audit: type=1130 audit(1734060980.202:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.260980 systemd[1]: Starting dracut-cmdline.service...
Dec 13 03:36:20.284459 dracut-cmdline[292]: dracut-dracut-053
Dec 13 03:36:20.284459 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Dec 13 03:36:20.284459 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:36:20.352447 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 03:36:20.352460 kernel: iscsi: registered transport (tcp)
Dec 13 03:36:20.407904 kernel: iscsi: registered transport (qla4xxx)
Dec 13 03:36:20.407923 kernel: QLogic iSCSI HBA Driver
Dec 13 03:36:20.424305 systemd[1]: Finished dracut-cmdline.service.
Dec 13 03:36:20.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:36:20.424856 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 03:36:20.481427 kernel: raid6: avx2x4 gen() 45660 MB/s Dec 13 03:36:20.516389 kernel: raid6: avx2x4 xor() 21780 MB/s Dec 13 03:36:20.551436 kernel: raid6: avx2x2 gen() 53646 MB/s Dec 13 03:36:20.586426 kernel: raid6: avx2x2 xor() 32113 MB/s Dec 13 03:36:20.621435 kernel: raid6: avx2x1 gen() 45092 MB/s Dec 13 03:36:20.655424 kernel: raid6: avx2x1 xor() 27824 MB/s Dec 13 03:36:20.689432 kernel: raid6: sse2x4 gen() 21301 MB/s Dec 13 03:36:20.723391 kernel: raid6: sse2x4 xor() 11984 MB/s Dec 13 03:36:20.757388 kernel: raid6: sse2x2 gen() 21681 MB/s Dec 13 03:36:20.791425 kernel: raid6: sse2x2 xor() 13417 MB/s Dec 13 03:36:20.825395 kernel: raid6: sse2x1 gen() 18292 MB/s Dec 13 03:36:20.877360 kernel: raid6: sse2x1 xor() 8915 MB/s Dec 13 03:36:20.877375 kernel: raid6: using algorithm avx2x2 gen() 53646 MB/s Dec 13 03:36:20.877383 kernel: raid6: .... xor() 32113 MB/s, rmw enabled Dec 13 03:36:20.895608 kernel: raid6: using avx2x2 recovery algorithm Dec 13 03:36:20.942369 kernel: xor: automatically using best checksumming function avx Dec 13 03:36:21.021386 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 03:36:21.026335 systemd[1]: Finished dracut-pre-udev.service. Dec 13 03:36:21.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:21.034000 audit: BPF prog-id=7 op=LOAD Dec 13 03:36:21.034000 audit: BPF prog-id=8 op=LOAD Dec 13 03:36:21.035371 systemd[1]: Starting systemd-udevd.service... Dec 13 03:36:21.043749 systemd-udevd[473]: Using default interface naming scheme 'v252'. Dec 13 03:36:21.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:21.048436 systemd[1]: Started systemd-udevd.service. Dec 13 03:36:21.088501 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Dec 13 03:36:21.065450 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 03:36:21.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:21.091764 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 03:36:21.105035 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:36:21.154788 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:36:21.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:21.182367 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 03:36:21.183364 kernel: libata version 3.00 loaded. Dec 13 03:36:21.219017 kernel: ACPI: bus type USB registered Dec 13 03:36:21.219070 kernel: usbcore: registered new interface driver usbfs Dec 13 03:36:21.237010 kernel: usbcore: registered new interface driver hub Dec 13 03:36:21.237032 kernel: usbcore: registered new device driver usb Dec 13 03:36:21.255358 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 03:36:21.288312 kernel: AES CTR mode by8 optimization enabled Dec 13 03:36:21.323642 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 03:36:21.323662 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
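The raid6 lines above record the kernel benchmarking each SIMD implementation and keeping the one with the fastest gen() throughput (here avx2x2 at 53646 MB/s, whose 32113 MB/s xor() also enables read-modify-write). A minimal sketch of that selection, reusing the throughputs printed in this log — illustrative Python only, not the kernel's actual C self-test:

```python
# Illustrative: mimic how the raid6 self-test above picks an algorithm --
# benchmark every candidate and keep the fastest gen() rate.
# The MB/s figures below are the ones printed in this boot log.
bench = {
    "avx2x4": {"gen": 45660, "xor": 21780},
    "avx2x2": {"gen": 53646, "xor": 32113},
    "avx2x1": {"gen": 45092, "xor": 27824},
    "sse2x4": {"gen": 21301, "xor": 11984},
    "sse2x2": {"gen": 21681, "xor": 13417},
    "sse2x1": {"gen": 18292, "xor": 8915},
}

# Pick the variant with the highest gen() throughput.
best = max(bench, key=lambda name: bench[name]["gen"])
print(f"raid6: using algorithm {best} gen() {bench[best]['gen']} MB/s")
# -> raid6: using algorithm avx2x2 gen() 53646 MB/s, matching the log line
```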
Dec 13 03:36:21.325960 kernel: ahci 0000:00:17.0: version 3.0 Dec 13 03:36:21.355616 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 03:36:21.683875 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Dec 13 03:36:21.948653 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Dec 13 03:36:21.948761 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Dec 13 03:36:21.948809 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Dec 13 03:36:21.948858 kernel: scsi host0: ahci Dec 13 03:36:21.948917 kernel: scsi host1: ahci Dec 13 03:36:21.948969 kernel: scsi host2: ahci Dec 13 03:36:21.949018 kernel: scsi host3: ahci Dec 13 03:36:21.949071 kernel: scsi host4: ahci Dec 13 03:36:21.949119 kernel: scsi host5: ahci Dec 13 03:36:21.949168 kernel: scsi host6: ahci Dec 13 03:36:21.949218 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Dec 13 03:36:21.949226 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Dec 13 03:36:21.949233 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Dec 13 03:36:21.949239 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Dec 13 03:36:21.949245 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Dec 13 03:36:21.949251 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Dec 13 03:36:21.949257 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Dec 13 03:36:21.949264 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 03:36:21.949313 kernel: pps pps0: new PPS source ptp0 Dec 13 03:36:21.949372 kernel: igb 0000:03:00.0: added PHC on eth0 Dec 13 03:36:21.949426 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 03:36:21.949473 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:54 Dec 13 03:36:21.949521 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Dec 13 03:36:21.949568 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 03:36:21.949615 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Dec 13 03:36:21.949660 kernel: pps pps1: new PPS source ptp1 Dec 13 03:36:21.949712 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 03:36:21.949760 kernel: igb 0000:04:00.0: added PHC on eth1 Dec 13 03:36:21.949809 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Dec 13 03:36:21.949855 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 03:36:21.949902 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Dec 13 03:36:21.949947 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:55 Dec 13 03:36:21.949994 kernel: hub 1-0:1.0: USB hub found Dec 13 03:36:21.950055 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Dec 13 03:36:21.950103 kernel: hub 1-0:1.0: 16 ports detected Dec 13 03:36:21.950153 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Dec 13 03:36:21.950201 kernel: hub 2-0:1.0: USB hub found Dec 13 03:36:21.950256 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 03:36:21.950303 kernel: hub 2-0:1.0: 10 ports detected Dec 13 03:36:21.950356 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 03:36:21.950364 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 03:36:21.950396 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 03:36:21.950402 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 03:36:21.950425 kernel: ata7: SATA link down (SStatus 0 SControl 300) Dec 13 03:36:21.950431 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 03:36:21.950437 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 03:36:21.950443 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 03:36:21.950450 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 03:36:21.950456 kernel: ata2.00: Features: NCQ-prio Dec 13 03:36:21.950462 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 03:36:21.950469 kernel: ata1.00: Features: NCQ-prio Dec 13 03:36:21.950475 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 03:36:21.950481 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 03:36:21.950530 kernel: ata2.00: configured for UDMA/133 Dec 13 03:36:21.950537 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Dec 13 03:36:21.950551 kernel: ata1.00: configured for UDMA/133 Dec 13 03:36:21.950557 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 03:36:21.950606 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 03:36:21.950620 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Dec 13 03:36:22.572327 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 03:36:22.572411 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 03:36:22.572470 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Dec 13 03:36:22.572524 kernel: hub 1-14:1.0: USB hub found Dec 13 03:36:22.572587 kernel: hub 1-14:1.0: 4 ports detected Dec 13 03:36:22.572645 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Dec 13 03:36:22.572696 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:36:22.572704 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 03:36:22.572753 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:36:22.572761 kernel: port_module: 9 callbacks suppressed Dec 13 03:36:22.572767 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Dec 13 03:36:22.572816 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 03:36:22.754054 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 03:36:22.754222 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Dec 13 03:36:22.754317 kernel: sd 1:0:0:0: [sdb] Write Protect is off Dec 13 03:36:22.754421 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Dec 13 03:36:22.754496 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 03:36:22.754606 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:36:22.754619 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 
03:36:22.754631 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Dec 13 03:36:22.754716 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 03:36:22.754816 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 03:36:22.754880 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 03:36:22.754937 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Dec 13 03:36:22.754994 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 03:36:22.755063 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Dec 13 03:36:22.755181 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:36:22.755191 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 03:36:22.755249 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 03:36:22.755256 kernel: GPT:9289727 != 937703087 Dec 13 03:36:22.755264 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 03:36:22.755271 kernel: GPT:9289727 != 937703087 Dec 13 03:36:22.755277 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 03:36:22.755283 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:36:22.755290 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 03:36:22.755296 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:36:22.755302 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 03:36:22.773359 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Dec 13 03:36:22.782889 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 03:36:22.860488 kernel: usbcore: registered new interface driver usbhid Dec 13 03:36:22.860512 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (525) Dec 13 03:36:22.860526 kernel: usbhid: USB HID core driver Dec 13 03:36:22.860535 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Dec 13 03:36:22.849982 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 03:36:22.896488 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Dec 13 03:36:22.884618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 03:36:22.912363 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 03:36:23.029395 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Dec 13 03:36:23.029480 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Dec 13 03:36:23.029489 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Dec 13 03:36:23.021674 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:36:23.038921 systemd[1]: Starting disk-uuid.service... Dec 13 03:36:23.098610 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:36:23.098620 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:36:23.098627 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:36:23.098692 disk-uuid[687]: Primary Header is updated. Dec 13 03:36:23.098692 disk-uuid[687]: Secondary Entries is updated. Dec 13 03:36:23.098692 disk-uuid[687]: Secondary Header is updated. 
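The sd and GPT messages above are internally consistent: 937703088 512-byte logical sectors works out to the dual 480 GB / 447 GiB figure the kernel prints, and the GPT complaint means the backup (alternate) header sits at LBA 9289727 instead of at the last LBA 937703087 — typically the sign of an image written from a smaller source onto a larger disk, which GNU Parted or similar tools can correct by relocating the backup header. A quick check of that arithmetic (illustrative Python):

```python
# Sanity-check the capacity figures printed in the log above.
sectors, sector_size = 937_703_088, 512
size_bytes = sectors * sector_size
print(size_bytes)                    # 480103981056 bytes
print(round(size_bytes / 1e9, 1))    # ~480.1 -> the kernel's decimal "480 GB"
print(round(size_bytes / 2**30, 1))  # ~447.1 -> the kernel's binary "447 GiB"

# The GPT backup header belongs on the last LBA of the disk:
print(sectors - 1)  # 937703087, the value in "GPT:9289727 != 937703087"
```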
Dec 13 03:36:23.148390 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:36:23.148401 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:36:23.148408 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:36:24.135172 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:36:24.154321 disk-uuid[688]: The operation has completed successfully. Dec 13 03:36:24.163571 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:36:24.192523 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 03:36:24.289095 kernel: audit: type=1130 audit(1734060984.199:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.289113 kernel: audit: type=1131 audit(1734060984.199:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.192568 systemd[1]: Finished disk-uuid.service. Dec 13 03:36:24.319394 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 03:36:24.202814 systemd[1]: Starting verity-setup.service... Dec 13 03:36:24.353287 systemd[1]: Found device dev-mapper-usr.device. Dec 13 03:36:24.362341 systemd[1]: Mounting sysusr-usr.mount... Dec 13 03:36:24.381202 systemd[1]: Finished verity-setup.service. Dec 13 03:36:24.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.436367 kernel: audit: type=1130 audit(1734060984.388:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.493048 systemd[1]: Mounted sysusr-usr.mount. Dec 13 03:36:24.508556 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 03:36:24.500649 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 03:36:24.501037 systemd[1]: Starting ignition-setup.service... Dec 13 03:36:24.592515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:36:24.592530 kernel: BTRFS info (device sda6): using free space tree Dec 13 03:36:24.592540 kernel: BTRFS info (device sda6): has skinny extents Dec 13 03:36:24.592547 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 03:36:24.533793 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 03:36:24.600804 systemd[1]: Finished ignition-setup.service. Dec 13 03:36:24.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.609767 systemd[1]: Finished parse-ip-for-networkd.service. 
Dec 13 03:36:24.717075 kernel: audit: type=1130 audit(1734060984.609:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.717091 kernel: audit: type=1130 audit(1734060984.667:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.668046 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 03:36:24.748185 kernel: audit: type=1334 audit(1734060984.724:24): prog-id=9 op=LOAD Dec 13 03:36:24.724000 audit: BPF prog-id=9 op=LOAD Dec 13 03:36:24.726146 systemd[1]: Starting systemd-networkd.service... Dec 13 03:36:24.762497 systemd-networkd[875]: lo: Link UP Dec 13 03:36:24.762500 systemd-networkd[875]: lo: Gained carrier Dec 13 03:36:24.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.809550 ignition[868]: Ignition 2.14.0 Dec 13 03:36:24.842592 kernel: audit: type=1130 audit(1734060984.776:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.762805 systemd-networkd[875]: Enumeration completed Dec 13 03:36:24.809555 ignition[868]: Stage: fetch-offline Dec 13 03:36:24.762882 systemd[1]: Started systemd-networkd.service. Dec 13 03:36:24.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.809582 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:36:25.000225 kernel: audit: type=1130 audit(1734060984.867:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:25.000242 kernel: audit: type=1130 audit(1734060984.926:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:25.000250 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 03:36:24.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.763460 systemd-networkd[875]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:36:24.809595 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:36:25.025390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Dec 13 03:36:24.777479 systemd[1]: Reached target network.target. 
Dec 13 03:36:24.818045 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:36:25.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.822042 unknown[868]: fetched base config from "system" Dec 13 03:36:25.082437 iscsid[900]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:36:25.082437 iscsid[900]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 03:36:25.082437 iscsid[900]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 03:36:25.082437 iscsid[900]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 03:36:25.082437 iscsid[900]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 03:36:25.082437 iscsid[900]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:36:25.082437 iscsid[900]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 03:36:25.236452 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 03:36:25.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.818109 ignition[868]: parsed url from cmdline: "" Dec 13 03:36:24.822046 unknown[868]: fetched user config from "system" Dec 13 03:36:25.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:24.818111 ignition[868]: no config URL provided Dec 13 03:36:24.836064 systemd[1]: Starting iscsiuio.service... Dec 13 03:36:24.818114 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:36:24.849639 systemd[1]: Started iscsiuio.service. Dec 13 03:36:24.818137 ignition[868]: parsing config with SHA512: c99a41a7893d7c7986c3b23875cdb0a701d285159a528b556753dc3979e31eb178e1d800acda2bf6f418f20ec13651803822d83221db4e57011429c0d8634d1f Dec 13 03:36:24.867625 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 03:36:24.822324 ignition[868]: fetch-offline: fetch-offline passed Dec 13 03:36:24.926733 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 03:36:24.822326 ignition[868]: POST message to Packet Timeline Dec 13 03:36:24.947301 systemd[1]: Starting ignition-kargs.service... Dec 13 03:36:24.822330 ignition[868]: POST Status error: resource requires networking Dec 13 03:36:25.001008 systemd-networkd[875]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:36:24.822371 ignition[868]: Ignition finished successfully Dec 13 03:36:25.014955 systemd[1]: Starting iscsid.service... Dec 13 03:36:25.004748 ignition[889]: Ignition 2.14.0 Dec 13 03:36:25.042670 systemd[1]: Started iscsid.service. Dec 13 03:36:25.004751 ignition[889]: Stage: kargs Dec 13 03:36:25.056888 systemd[1]: Starting dracut-initqueue.service...
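The iscsid warning above spells out the IQN naming convention it expects in /etc/iscsi/initiatorname.iscsi. As a side illustration, here is a hypothetical helper (not part of this boot, names are my own) that builds a line in exactly that format and reproduces the log message's own example:

```python
# Hypothetical helper: build an /etc/iscsi/initiatorname.iscsi entry of the
# form iscsid describes above: iqn.<yyyy-mm>.<reversed domain>[:identifier].
def initiator_name(year_month: str, domain: str, identifier: str = "") -> str:
    reversed_domain = ".".join(reversed(domain.split(".")))
    iqn = f"iqn.{year_month}.{reversed_domain}"
    if identifier:
        iqn += f":{identifier}"
    return f"InitiatorName={iqn}"

# Reproduces the example given in the log message itself:
print(initiator_name("2001-04", "redhat.com", "fc6"))
# -> InitiatorName=iqn.2001-04.com.redhat:fc6
```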
Dec 13 03:36:25.004808 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:36:25.075518 systemd[1]: Finished dracut-initqueue.service. Dec 13 03:36:25.004817 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:36:25.090426 systemd[1]: Reached target remote-fs-pre.target. Dec 13 03:36:25.006136 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:36:25.136545 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:36:25.008203 ignition[889]: kargs: kargs passed Dec 13 03:36:25.157755 systemd[1]: Reached target remote-fs.target. Dec 13 03:36:25.008207 ignition[889]: POST message to Packet Timeline Dec 13 03:36:25.176795 systemd[1]: Starting dracut-pre-mount.service... Dec 13 03:36:25.008217 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:36:25.214542 systemd[1]: Finished dracut-pre-mount.service. Dec 13 03:36:25.013512 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57884->[::1]:53: read: connection refused Dec 13 03:36:25.226588 systemd-networkd[875]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:36:25.213865 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 03:36:25.254751 systemd-networkd[875]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:36:25.214247 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47875->[::1]:53: read: connection refused Dec 13 03:36:25.283644 systemd-networkd[875]: enp1s0f1np1: Link UP Dec 13 03:36:25.283890 systemd-networkd[875]: enp1s0f1np1: Gained carrier Dec 13 03:36:25.305897 systemd-networkd[875]: enp1s0f0np0: Link UP Dec 13 03:36:25.306432 systemd-networkd[875]: eno2: Link UP Dec 13 03:36:25.306923 systemd-networkd[875]: eno1: Link UP Dec 13 03:36:25.615590 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 03:36:25.617133 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45660->[::1]:53: read: connection refused Dec 13 03:36:26.057182 systemd-networkd[875]: enp1s0f0np0: Gained carrier Dec 13 03:36:26.065610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Dec 13 03:36:26.093677 systemd-networkd[875]: enp1s0f0np0: DHCPv4 address 147.75.202.71/31, gateway 147.75.202.70 acquired from 145.40.83.140 Dec 13 03:36:26.417684 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 03:36:26.418827 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39263->[::1]:53: read: connection refused Dec 13 03:36:27.121945 systemd-networkd[875]: enp1s0f1np1: Gained IPv6LL Dec 13 03:36:27.122922 systemd-networkd[875]: enp1s0f0np0: Gained IPv6LL Dec 13 03:36:28.020751 ignition[889]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 03:36:28.021966 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43524->[::1]:53: read: connection refused Dec 13 03:36:31.225418 ignition[889]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 03:36:32.168389 ignition[889]: GET 
result: OK Dec 13 03:36:32.570039 ignition[889]: Ignition finished successfully Dec 13 03:36:32.574910 systemd[1]: Finished ignition-kargs.service. Dec 13 03:36:32.663220 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 03:36:32.663236 kernel: audit: type=1130 audit(1734060992.585:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:32.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:32.595201 ignition[923]: Ignition 2.14.0 Dec 13 03:36:32.587640 systemd[1]: Starting ignition-disks.service... Dec 13 03:36:32.595204 ignition[923]: Stage: disks Dec 13 03:36:32.595273 ignition[923]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:36:32.595282 ignition[923]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:36:32.596763 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:36:32.598358 ignition[923]: disks: disks passed Dec 13 03:36:32.598361 ignition[923]: POST message to Packet Timeline Dec 13 03:36:32.598372 ignition[923]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:36:33.337205 ignition[923]: GET result: OK Dec 13 03:36:33.657911 ignition[923]: Ignition finished successfully Dec 13 03:36:33.659246 systemd[1]: Finished ignition-disks.service. Dec 13 03:36:33.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:33.673828 systemd[1]: Reached target initrd-root-device.target. Dec 13 03:36:33.752656 kernel: audit: type=1130 audit(1734060993.673:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:33.737607 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:36:33.737639 systemd[1]: Reached target local-fs.target. Dec 13 03:36:33.761595 systemd[1]: Reached target sysinit.target. Dec 13 03:36:33.775562 systemd[1]: Reached target basic.target. Dec 13 03:36:33.776169 systemd[1]: Starting systemd-fsck-root.service... Dec 13 03:36:33.803191 systemd-fsck[939]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 03:36:33.821006 systemd[1]: Finished systemd-fsck-root.service. Dec 13 03:36:33.912551 kernel: audit: type=1130 audit(1734060993.829:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:33.912640 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 03:36:33.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:33.834760 systemd[1]: Mounting sysroot.mount... Dec 13 03:36:33.920013 systemd[1]: Mounted sysroot.mount. Dec 13 03:36:33.933635 systemd[1]: Reached target initrd-root-fs.target. Dec 13 03:36:33.941288 systemd[1]: Mounting sysroot-usr.mount... 
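The repeated "GET https://metadata.packet.net/metadata" attempts earlier in the log fail while the initrd's DNS is still unreachable (lookups against [::1]:53), and the gaps between attempts roughly double — about 0.2 s, 0.4 s, 0.8 s, 1.6 s, then 3.2 s — until networking comes up and attempt #6 returns OK. That cadence is a classic exponential-backoff retry loop; the sketch below is an assumed shape in Python, not Ignition's real (Go) implementation:

```python
import time
import urllib.request

# Minimal retry-with-exponential-backoff sketch matching the cadence seen in
# the log: delays roughly double between metadata fetch attempts.
def fetch_with_backoff(url: str, first_delay: float = 0.2, attempts: int = 10) -> bytes:
    delay = first_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()      # the log's "GET result: OK" case
        except OSError as err:          # e.g. DNS not up yet, as in the log
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= 2                  # back off before the next attempt
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")

# Usage (would block until the metadata service is reachable):
# body = fetch_with_backoff("https://metadata.packet.net/metadata")
```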
Dec 13 03:36:33.963345 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 03:36:33.969924 systemd[1]: Starting flatcar-static-network.service... Dec 13 03:36:33.991575 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 03:36:33.991608 systemd[1]: Reached target ignition-diskful.target. Dec 13 03:36:34.011785 systemd[1]: Mounted sysroot-usr.mount. Dec 13 03:36:34.035039 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:36:34.111456 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (952) Dec 13 03:36:34.111491 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:36:34.047398 systemd[1]: Starting initrd-setup-root.service... Dec 13 03:36:34.181338 kernel: BTRFS info (device sda6): using free space tree Dec 13 03:36:34.181357 kernel: BTRFS info (device sda6): has skinny extents Dec 13 03:36:34.181366 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 03:36:34.116736 systemd[1]: Finished initrd-setup-root.service. Dec 13 03:36:34.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:34.243564 coreos-metadata[947]: Dec 13 03:36:34.120 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:36:34.264611 kernel: audit: type=1130 audit(1734060994.190:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:34.264625 coreos-metadata[946]: Dec 13 03:36:34.120 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:36:34.284596 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 03:36:34.191648 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 03:36:34.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:34.336579 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory Dec 13 03:36:34.369596 kernel: audit: type=1130 audit(1734060994.302:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:34.251952 systemd[1]: Starting ignition-mount.service... Dec 13 03:36:34.377622 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 03:36:34.271941 systemd[1]: Starting sysroot-boot.service... Dec 13 03:36:34.396607 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 03:36:34.291831 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. 
Dec 13 03:36:34.416579 ignition[1021]: INFO : Ignition 2.14.0 Dec 13 03:36:34.416579 ignition[1021]: INFO : Stage: mount Dec 13 03:36:34.416579 ignition[1021]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:36:34.416579 ignition[1021]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:36:34.416579 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:36:34.416579 ignition[1021]: INFO : mount: mount passed Dec 13 03:36:34.416579 ignition[1021]: INFO : POST message to Packet Timeline Dec 13 03:36:34.416579 ignition[1021]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:36:34.291876 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 03:36:34.295407 systemd[1]: Finished sysroot-boot.service. Dec 13 03:36:34.955322 coreos-metadata[947]: Dec 13 03:36:34.955 INFO Fetch successful Dec 13 03:36:35.031203 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 03:36:35.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.031259 systemd[1]: Finished flatcar-static-network.service. Dec 13 03:36:35.153504 kernel: audit: type=1130 audit(1734060995.039:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.153518 kernel: audit: type=1131 audit(1734060995.039:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.112112 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 03:36:35.228580 kernel: audit: type=1130 audit(1734060995.162:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.228616 coreos-metadata[946]: Dec 13 03:36:35.085 INFO Fetch successful Dec 13 03:36:35.228616 coreos-metadata[946]: Dec 13 03:36:35.111 INFO wrote hostname ci-3510.3.6-a-ab200a80e9 to /sysroot/etc/hostname Dec 13 03:36:35.492570 ignition[1021]: INFO : GET result: OK Dec 13 03:36:35.877476 ignition[1021]: INFO : Ignition finished successfully Dec 13 03:36:35.880067 systemd[1]: Finished ignition-mount.service. Dec 13 03:36:35.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.897463 systemd[1]: Starting ignition-files.service... 
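Each Ignition stage above logs "parsing config with SHA512: …", printing the same 0131bd… digest for base.ign in every stage, so a config file on disk can be matched against what actually ran. A one-off check along those lines — assuming, as this sketch does, that the logged digest is simply the SHA-512 of the raw file bytes:

```python
import hashlib

# Compute the digest Ignition logs as "parsing config with SHA512: ..." for a
# config file on disk; the path is the one named in this log.
def config_sha512(path: str = "/usr/lib/ignition/base.d/base.ign") -> str:
    with open(path, "rb") as fh:
        return hashlib.sha512(fh.read()).hexdigest()

# Matches the boot above if it prints a digest starting 0131bd505bfe1b12...
```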
Dec 13 03:36:35.966579 kernel: audit: type=1130 audit(1734060995.895:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:35.961314 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:36:36.014448 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1036) Dec 13 03:36:36.014459 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:36:36.048724 kernel: BTRFS info (device sda6): using free space tree Dec 13 03:36:36.048740 kernel: BTRFS info (device sda6): has skinny extents Dec 13 03:36:36.098356 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 03:36:36.099552 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 03:36:36.116520 ignition[1055]: INFO : Ignition 2.14.0 Dec 13 03:36:36.116520 ignition[1055]: INFO : Stage: files Dec 13 03:36:36.116520 ignition[1055]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:36:36.116520 ignition[1055]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:36:36.116520 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:36:36.116520 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Dec 13 03:36:36.120083 unknown[1055]: wrote ssh authorized keys file for user: core Dec 13 03:36:36.194625 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 03:36:36.194625 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 03:36:36.194625 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 03:36:36.194625 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 03:36:36.194625 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 03:36:36.194625 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:36:36.194625 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 03:36:36.194625 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 03:36:36.301604 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:36:36.301604 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:36:36.301604 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 03:36:36.839762 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 03:36:36.905944 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:36:36.905944 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 
03:36:36.953661 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1075) Dec 13 03:36:36.941348 systemd[1]: mnt-oem4191968568.mount: Deactivated successfully. Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 03:36:36.962608 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4191968568" Dec 13 03:36:36.962608 ignition[1055]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4191968568": device or resource busy Dec 13 03:36:37.225717 ignition[1055]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4191968568", trying btrfs: device or resource busy Dec 13 03:36:37.225717 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4191968568" Dec 13 03:36:37.225717 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4191968568" Dec 13 03:36:37.225717 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem4191968568" Dec 13 03:36:37.225717 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: 
op(b): op(e): [finished] unmounting "/mnt/oem4191968568" Dec 13 03:36:37.225717 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 03:36:37.225717 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:36:37.225717 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 03:36:37.415138 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Dec 13 03:36:38.278525 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 03:36:38.278525 ignition[1055]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:36:38.278525 ignition[1055]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:36:38.278525 ignition[1055]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Dec 13 03:36:38.278525 ignition[1055]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Dec 13 03:36:38.278525 ignition[1055]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Dec 13 03:36:38.278525 ignition[1055]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:36:38.379669 ignition[1055]: INFO : files: files passed Dec 13 03:36:38.379669 ignition[1055]: INFO : POST message to Packet Timeline Dec 13 03:36:38.379669 ignition[1055]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:36:39.204074 ignition[1055]: INFO : GET result: OK Dec 13 03:36:39.590589 ignition[1055]: INFO : Ignition finished successfully Dec 13 03:36:39.593396 systemd[1]: Finished ignition-files.service. 
Dec 13 03:36:39.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.614125 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 03:36:39.685602 kernel: audit: type=1130 audit(1734060999.608:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.675607 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 03:36:39.709672 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 03:36:39.777460 kernel: audit: type=1130 audit(1734060999.719:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.675930 systemd[1]: Starting ignition-quench.service... Dec 13 03:36:39.898547 kernel: audit: type=1130 audit(1734060999.785:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.898635 kernel: audit: type=1131 audit(1734060999.785:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.692715 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 03:36:39.719751 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 03:36:39.719814 systemd[1]: Finished ignition-quench.service. Dec 13 03:36:40.055473 kernel: audit: type=1130 audit(1734060999.941:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.055562 kernel: audit: type=1131 audit(1734060999.941:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:36:39.785624 systemd[1]: Reached target ignition-complete.target. Dec 13 03:36:39.907939 systemd[1]: Starting initrd-parse-etc.service... Dec 13 03:36:39.929143 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 03:36:40.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.929189 systemd[1]: Finished initrd-parse-etc.service. Dec 13 03:36:40.175536 kernel: audit: type=1130 audit(1734061000.103:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:39.941644 systemd[1]: Reached target initrd-fs.target. Dec 13 03:36:40.063572 systemd[1]: Reached target initrd.target. Dec 13 03:36:40.063629 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 03:36:40.063981 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 03:36:40.085691 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 03:36:40.311398 kernel: audit: type=1131 audit(1734061000.242:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.104157 systemd[1]: Starting initrd-cleanup.service... Dec 13 03:36:40.179518 systemd[1]: Stopped target nss-lookup.target. Dec 13 03:36:40.190751 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 03:36:40.206003 systemd[1]: Stopped target timers.target. Dec 13 03:36:40.226052 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 03:36:40.226435 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 03:36:40.243238 systemd[1]: Stopped target initrd.target. Dec 13 03:36:40.318659 systemd[1]: Stopped target basic.target. Dec 13 03:36:40.332617 systemd[1]: Stopped target ignition-complete.target. Dec 13 03:36:40.339676 systemd[1]: Stopped target ignition-diskful.target. Dec 13 03:36:40.363043 systemd[1]: Stopped target initrd-root-device.target. Dec 13 03:36:40.378944 systemd[1]: Stopped target remote-fs.target. Dec 13 03:36:40.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.396940 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 03:36:40.583604 kernel: audit: type=1131 audit(1734061000.495:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.412072 systemd[1]: Stopped target sysinit.target. Dec 13 03:36:40.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.428074 systemd[1]: Stopped target local-fs.target. 
Dec 13 03:36:40.669603 kernel: audit: type=1131 audit(1734061000.593:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.444054 systemd[1]: Stopped target local-fs-pre.target. Dec 13 03:36:40.462045 systemd[1]: Stopped target swap.target. Dec 13 03:36:40.478825 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 03:36:40.479189 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 03:36:40.496168 systemd[1]: Stopped target cryptsetup.target. Dec 13 03:36:40.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.574634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 03:36:40.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.574717 systemd[1]: Stopped dracut-initqueue.service. Dec 13 03:36:40.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.593726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 03:36:40.801576 ignition[1102]: INFO : Ignition 2.14.0 Dec 13 03:36:40.801576 ignition[1102]: INFO : Stage: umount Dec 13 03:36:40.801576 ignition[1102]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:36:40.801576 ignition[1102]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:36:40.801576 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:36:40.801576 ignition[1102]: INFO : umount: umount passed Dec 13 03:36:40.801576 ignition[1102]: INFO : POST message to Packet Timeline Dec 13 03:36:40.801576 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:36:40.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:36:40.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.593797 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 03:36:40.953679 iscsid[900]: iscsid shutting down. Dec 13 03:36:40.662801 systemd[1]: Stopped target paths.target. Dec 13 03:36:40.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.677606 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 03:36:40.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:40.682598 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 03:36:40.684679 systemd[1]: Stopped target slices.target. Dec 13 03:36:40.706745 systemd[1]: Stopped target sockets.target. Dec 13 03:36:40.724795 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 03:36:40.724932 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 03:36:40.742980 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 03:36:40.743201 systemd[1]: Stopped ignition-files.service. Dec 13 03:36:40.760043 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 03:36:40.760414 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 03:36:40.779082 systemd[1]: Stopping ignition-mount.service... Dec 13 03:36:40.791561 systemd[1]: Stopping iscsid.service... Dec 13 03:36:40.808526 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 03:36:40.808626 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 03:36:40.824472 systemd[1]: Stopping sysroot-boot.service... Dec 13 03:36:40.838513 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 03:36:40.838723 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 03:36:40.858976 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 03:36:40.859306 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 03:36:40.892798 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 03:36:40.893112 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 03:36:40.893158 systemd[1]: Stopped iscsid.service. Dec 13 03:36:40.904756 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 03:36:40.904802 systemd[1]: Stopped sysroot-boot.service. Dec 13 03:36:40.920853 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 03:36:40.920922 systemd[1]: Closed iscsid.socket. Dec 13 03:36:40.944679 systemd[1]: Stopping iscsiuio.service... Dec 13 03:36:40.961087 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 03:36:40.961321 systemd[1]: Stopped iscsiuio.service. Dec 13 03:36:40.975145 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 03:36:40.975371 systemd[1]: Finished initrd-cleanup.service. Dec 13 03:36:40.991865 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 03:36:40.991965 systemd[1]: Closed iscsiuio.socket. 
Dec 13 03:36:41.687342 ignition[1102]: INFO : GET result: OK Dec 13 03:36:42.030588 ignition[1102]: INFO : Ignition finished successfully Dec 13 03:36:42.032818 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 03:36:42.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.033002 systemd[1]: Stopped ignition-mount.service. Dec 13 03:36:42.049945 systemd[1]: Stopped target network.target. Dec 13 03:36:42.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.065565 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 03:36:42.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.065793 systemd[1]: Stopped ignition-disks.service. Dec 13 03:36:42.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.080770 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 03:36:42.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.080891 systemd[1]: Stopped ignition-kargs.service. Dec 13 03:36:42.095694 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 03:36:42.095828 systemd[1]: Stopped ignition-setup.service. Dec 13 03:36:42.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.111704 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 03:36:42.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.190000 audit: BPF prog-id=6 op=UNLOAD Dec 13 03:36:42.111852 systemd[1]: Stopped initrd-setup-root.service. Dec 13 03:36:42.127134 systemd[1]: Stopping systemd-networkd.service... Dec 13 03:36:42.137489 systemd-networkd[875]: enp1s0f0np0: DHCPv6 lease lost Dec 13 03:36:42.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.141809 systemd[1]: Stopping systemd-resolved.service... Dec 13 03:36:42.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.150496 systemd-networkd[875]: enp1s0f1np1: DHCPv6 lease lost Dec 13 03:36:42.262000 audit: BPF prog-id=9 op=UNLOAD Dec 13 03:36:42.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:36:42.157167 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 03:36:42.157442 systemd[1]: Stopped systemd-resolved.service. Dec 13 03:36:42.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.175095 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 03:36:42.175333 systemd[1]: Stopped systemd-networkd.service. Dec 13 03:36:42.190142 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 03:36:42.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.190228 systemd[1]: Closed systemd-networkd.socket. Dec 13 03:36:42.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.209125 systemd[1]: Stopping network-cleanup.service... Dec 13 03:36:42.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.222583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 03:36:42.222739 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 03:36:42.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.238734 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:36:42.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.238876 systemd[1]: Stopped systemd-sysctl.service. Dec 13 03:36:42.255034 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 03:36:42.255182 systemd[1]: Stopped systemd-modules-load.service. Dec 13 03:36:42.270900 systemd[1]: Stopping systemd-udevd.service... Dec 13 03:36:42.289376 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 03:36:42.290411 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 03:36:42.290469 systemd[1]: Stopped systemd-udevd.service. Dec 13 03:36:42.294726 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 03:36:42.294751 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 03:36:42.315588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 03:36:42.315616 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 03:36:42.331539 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 03:36:42.331585 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 03:36:42.346667 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 03:36:42.346746 systemd[1]: Stopped dracut-cmdline.service. 
Dec 13 03:36:42.362462 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 03:36:42.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:42.362489 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 03:36:42.377901 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 03:36:42.392445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 03:36:42.392500 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 03:36:42.411365 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 03:36:42.411561 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 03:36:42.553305 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 03:36:42.553599 systemd[1]: Stopped network-cleanup.service. Dec 13 03:36:42.567978 systemd[1]: Reached target initrd-switch-root.target. Dec 13 03:36:42.589437 systemd[1]: Starting initrd-switch-root.service... Dec 13 03:36:42.609990 systemd[1]: Switching root. Dec 13 03:36:42.658626 systemd-journald[267]: Journal stopped Dec 13 03:36:46.650602 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Dec 13 03:36:46.650617 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 03:36:46.650625 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 03:36:46.650631 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 03:36:46.650636 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 03:36:46.650641 kernel: SELinux: policy capability open_perms=1 Dec 13 03:36:46.650647 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 03:36:46.650653 kernel: SELinux: policy capability always_check_network=0 Dec 13 03:36:46.650658 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 03:36:46.650664 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 03:36:46.650670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 03:36:46.650675 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 03:36:46.650680 systemd[1]: Successfully loaded SELinux policy in 305.017ms. Dec 13 03:36:46.650687 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.971ms. Dec 13 03:36:46.650695 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:36:46.650701 systemd[1]: Detected architecture x86-64. Dec 13 03:36:46.650707 systemd[1]: Detected first boot. Dec 13 03:36:46.650713 systemd[1]: Hostname set to . Dec 13 03:36:46.650719 systemd[1]: Initializing machine ID from random generator. Dec 13 03:36:46.650725 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 03:36:46.650731 systemd[1]: Populated /etc with preset unit settings. Dec 13 03:36:46.650738 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:36:46.650745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 13 03:36:46.650752 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:36:46.650758 kernel: kauditd_printk_skb: 49 callbacks suppressed Dec 13 03:36:46.650763 kernel: audit: type=1334 audit(1734061004.965:92): prog-id=12 op=LOAD Dec 13 03:36:46.650769 kernel: audit: type=1334 audit(1734061004.965:93): prog-id=3 op=UNLOAD Dec 13 03:36:46.650775 kernel: audit: type=1334 audit(1734061005.010:94): prog-id=13 op=LOAD Dec 13 03:36:46.650781 kernel: audit: type=1334 audit(1734061005.055:95): prog-id=14 op=LOAD Dec 13 03:36:46.650786 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 03:36:46.650792 kernel: audit: type=1334 audit(1734061005.055:96): prog-id=4 op=UNLOAD Dec 13 03:36:46.650798 systemd[1]: Stopped initrd-switch-root.service. Dec 13 03:36:46.650804 kernel: audit: type=1334 audit(1734061005.055:97): prog-id=5 op=UNLOAD Dec 13 03:36:46.650810 kernel: audit: type=1131 audit(1734061005.055:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.650815 kernel: audit: type=1130 audit(1734061005.222:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.650822 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 03:36:46.650829 kernel: audit: type=1131 audit(1734061005.222:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.650835 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 03:36:46.650841 kernel: audit: type=1334 audit(1734061005.365:101): prog-id=12 op=UNLOAD Dec 13 03:36:46.650847 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 03:36:46.650855 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 03:36:46.650861 systemd[1]: Created slice system-getty.slice. Dec 13 03:36:46.650868 systemd[1]: Created slice system-modprobe.slice. Dec 13 03:36:46.650875 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 03:36:46.650881 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 03:36:46.650888 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 03:36:46.650894 systemd[1]: Created slice user.slice. Dec 13 03:36:46.650900 systemd[1]: Started systemd-ask-password-console.path. Dec 13 03:36:46.650906 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 03:36:46.650913 systemd[1]: Set up automount boot.automount. Dec 13 03:36:46.650920 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 03:36:46.650927 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 03:36:46.650933 systemd[1]: Stopped target initrd-fs.target. Dec 13 03:36:46.650939 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 03:36:46.650946 systemd[1]: Reached target integritysetup.target. Dec 13 03:36:46.650952 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:36:46.650958 systemd[1]: Reached target remote-fs.target. 
Dec 13 03:36:46.650965 systemd[1]: Reached target slices.target. Dec 13 03:36:46.650971 systemd[1]: Reached target swap.target. Dec 13 03:36:46.650978 systemd[1]: Reached target torcx.target. Dec 13 03:36:46.650984 systemd[1]: Reached target veritysetup.target. Dec 13 03:36:46.650991 systemd[1]: Listening on systemd-coredump.socket. Dec 13 03:36:46.650997 systemd[1]: Listening on systemd-initctl.socket. Dec 13 03:36:46.651003 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:36:46.651010 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 03:36:46.651017 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:36:46.651023 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 03:36:46.651030 systemd[1]: Mounting dev-hugepages.mount... Dec 13 03:36:46.651036 systemd[1]: Mounting dev-mqueue.mount... Dec 13 03:36:46.651043 systemd[1]: Mounting media.mount... Dec 13 03:36:46.651049 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:36:46.651056 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 03:36:46.651062 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 03:36:46.651070 systemd[1]: Mounting tmp.mount... Dec 13 03:36:46.651076 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 03:36:46.651083 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:36:46.651089 systemd[1]: Starting kmod-static-nodes.service... Dec 13 03:36:46.651096 systemd[1]: Starting modprobe@configfs.service... Dec 13 03:36:46.651102 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:36:46.651108 systemd[1]: Starting modprobe@drm.service... Dec 13 03:36:46.651115 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:36:46.651121 systemd[1]: Starting modprobe@fuse.service... Dec 13 03:36:46.651128 kernel: fuse: init (API version 7.34) Dec 13 03:36:46.651134 systemd[1]: Starting modprobe@loop.service... Dec 13 03:36:46.651141 kernel: loop: module loaded Dec 13 03:36:46.651147 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 03:36:46.651153 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 03:36:46.651160 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 03:36:46.651166 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 03:36:46.651173 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 03:36:46.651180 systemd[1]: Stopped systemd-journald.service. Dec 13 03:36:46.651187 systemd[1]: Starting systemd-journald.service... Dec 13 03:36:46.651194 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:36:46.651202 systemd-journald[1251]: Journal started Dec 13 03:36:46.651227 systemd-journald[1251]: Runtime Journal (/run/log/journal/eb8d5b478caf40bfb5488f7cbd060d72) is 8.0M, max 640.1M, 632.1M free. 
Dec 13 03:36:43.048000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 03:36:43.350000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:36:43.352000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:36:43.352000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:36:43.352000 audit: BPF prog-id=10 op=LOAD Dec 13 03:36:43.352000 audit: BPF prog-id=10 op=UNLOAD Dec 13 03:36:43.352000 audit: BPF prog-id=11 op=LOAD Dec 13 03:36:43.352000 audit: BPF prog-id=11 op=UNLOAD Dec 13 03:36:43.423000 audit[1142]: AVC avc: denied { associate } for pid=1142 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 03:36:43.423000 audit[1142]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1125 pid=1142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:36:43.423000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:36:43.450000 audit[1142]: AVC avc: denied { associate } for pid=1142 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 03:36:43.450000 audit[1142]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=1125 pid=1142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:36:43.450000 audit: CWD cwd="/" Dec 13 03:36:43.450000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:43.450000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:43.450000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:36:44.965000 audit: BPF prog-id=12 op=LOAD Dec 13 03:36:44.965000 audit: BPF prog-id=3 op=UNLOAD Dec 13 03:36:45.010000 audit: BPF prog-id=13 op=LOAD Dec 13 03:36:45.055000 audit: BPF prog-id=14 
op=LOAD Dec 13 03:36:45.055000 audit: BPF prog-id=4 op=UNLOAD Dec 13 03:36:45.055000 audit: BPF prog-id=5 op=UNLOAD Dec 13 03:36:45.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:45.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:45.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:45.365000 audit: BPF prog-id=12 op=UNLOAD Dec 13 03:36:46.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.623000 audit: BPF prog-id=15 op=LOAD Dec 13 03:36:46.623000 audit: BPF prog-id=16 op=LOAD Dec 13 03:36:46.623000 audit: BPF prog-id=17 op=LOAD Dec 13 03:36:46.623000 audit: BPF prog-id=13 op=UNLOAD Dec 13 03:36:46.623000 audit: BPF prog-id=14 op=UNLOAD Dec 13 03:36:46.648000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 03:36:46.648000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff29600f90 a2=4000 a3=7fff2960102c items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:36:46.648000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 03:36:44.964833 systemd[1]: Queued start job for default target multi-user.target. Dec 13 03:36:43.422164 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:36:45.056778 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 03:36:43.422720 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:36:43.422735 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:36:43.422758 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 03:36:43.422766 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 03:36:43.422786 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 03:36:43.422796 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 03:36:43.422933 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 03:36:43.422959 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:36:43.422968 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:36:43.423816 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 03:36:43.423841 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 03:36:43.423855 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 03:36:43.423866 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 03:36:43.423878 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 03:36:43.423887 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 03:36:44.616686 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:36:44.616829 
/usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:36:44.616882 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:36:44.616978 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:36:44.617007 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 03:36:44.617041 /usr/lib/systemd/system-generators/torcx-generator[1142]: time="2024-12-13T03:36:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 03:36:46.682570 systemd[1]: Starting systemd-network-generator.service... Dec 13 03:36:46.704395 systemd[1]: Starting systemd-remount-fs.service... Dec 13 03:36:46.726404 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:36:46.758951 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 03:36:46.758972 systemd[1]: Stopped verity-setup.service. Dec 13 03:36:46.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.793400 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:36:46.808558 systemd[1]: Started systemd-journald.service. Dec 13 03:36:46.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.816913 systemd[1]: Mounted dev-hugepages.mount. Dec 13 03:36:46.824646 systemd[1]: Mounted dev-mqueue.mount. Dec 13 03:36:46.831635 systemd[1]: Mounted media.mount. Dec 13 03:36:46.838618 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 03:36:46.847610 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 03:36:46.856610 systemd[1]: Mounted tmp.mount. Dec 13 03:36:46.863664 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 03:36:46.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.871721 systemd[1]: Finished kmod-static-nodes.service. 
Dec 13 03:36:46.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.880705 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 03:36:46.880813 systemd[1]: Finished modprobe@configfs.service. Dec 13 03:36:46.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.890791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:36:46.890926 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:36:46.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.900909 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:36:46.901099 systemd[1]: Finished modprobe@drm.service. Dec 13 03:36:46.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.910189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:36:46.910514 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:36:46.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.919243 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 03:36:46.919576 systemd[1]: Finished modprobe@fuse.service. Dec 13 03:36:46.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.929217 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 03:36:46.929580 systemd[1]: Finished modprobe@loop.service. Dec 13 03:36:46.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.939262 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:36:46.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.948182 systemd[1]: Finished systemd-network-generator.service. Dec 13 03:36:46.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.957176 systemd[1]: Finished systemd-remount-fs.service. Dec 13 03:36:46.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.966166 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:36:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:46.975741 systemd[1]: Reached target network-pre.target. Dec 13 03:36:46.987185 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 03:36:46.997110 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 03:36:47.005569 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 03:36:47.006571 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 03:36:47.014051 systemd[1]: Starting systemd-journal-flush.service... Dec 13 03:36:47.017801 systemd-journald[1251]: Time spent on flushing to /var/log/journal/eb8d5b478caf40bfb5488f7cbd060d72 is 15.050ms for 1586 entries. Dec 13 03:36:47.017801 systemd-journald[1251]: System Journal (/var/log/journal/eb8d5b478caf40bfb5488f7cbd060d72) is 8.0M, max 195.6M, 187.6M free. Dec 13 03:36:47.052015 systemd-journald[1251]: Received client request to flush runtime journal. Dec 13 03:36:47.030457 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:36:47.031070 systemd[1]: Starting systemd-random-seed.service... Dec 13 03:36:47.041477 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:36:47.042011 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:36:47.048987 systemd[1]: Starting systemd-sysusers.service... Dec 13 03:36:47.055966 systemd[1]: Starting systemd-udev-settle.service... Dec 13 03:36:47.063558 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 03:36:47.072536 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 03:36:47.081587 systemd[1]: Finished systemd-journal-flush.service. 
Dec 13 03:36:47.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.089572 systemd[1]: Finished systemd-random-seed.service. Dec 13 03:36:47.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.097575 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:36:47.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.105564 systemd[1]: Finished systemd-sysusers.service. Dec 13 03:36:47.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.114514 systemd[1]: Reached target first-boot-complete.target. Dec 13 03:36:47.122688 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 03:36:47.309641 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 03:36:47.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.319000 audit: BPF prog-id=18 op=LOAD Dec 13 03:36:47.319000 audit: BPF prog-id=19 op=LOAD Dec 13 03:36:47.319000 audit: BPF prog-id=7 op=UNLOAD Dec 13 03:36:47.319000 audit: BPF prog-id=8 op=UNLOAD Dec 13 03:36:47.320690 systemd[1]: Starting systemd-udevd.service... Dec 13 03:36:47.332225 systemd-udevd[1268]: Using default interface naming scheme 'v252'. Dec 13 03:36:47.349349 systemd[1]: Started systemd-udevd.service. Dec 13 03:36:47.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.359792 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Dec 13 03:36:47.359000 audit: BPF prog-id=20 op=LOAD Dec 13 03:36:47.361024 systemd[1]: Starting systemd-networkd.service... Dec 13 03:36:47.381000 audit: BPF prog-id=21 op=LOAD Dec 13 03:36:47.396362 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 03:36:47.396420 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 03:36:47.396441 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1274) Dec 13 03:36:47.418196 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 03:36:47.431000 audit: BPF prog-id=22 op=LOAD Dec 13 03:36:47.431000 audit: BPF prog-id=23 op=LOAD Dec 13 03:36:47.433931 systemd[1]: Starting systemd-userdbd.service... 
Dec 13 03:36:47.449398 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 03:36:47.449451 kernel: ACPI: button: Power Button [PWRF] Dec 13 03:36:47.463360 kernel: IPMI message handler: version 39.2 Dec 13 03:36:47.400000 audit[1342]: AVC avc: denied { confidentiality } for pid=1342 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:36:47.493648 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:36:47.501520 systemd[1]: Started systemd-userdbd.service. Dec 13 03:36:47.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.520361 kernel: ipmi device interface Dec 13 03:36:47.552448 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 03:36:47.571912 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 03:36:47.572099 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Dec 13 03:36:47.400000 audit[1342]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559d48217eb0 a1=4d98c a2=7fd519d16bc5 a3=5 items=42 ppid=1268 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:36:47.400000 audit: CWD cwd="/" Dec 13 03:36:47.400000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=1 name=(null) inode=17555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=2 name=(null) inode=17555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=3 name=(null) inode=17556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=4 name=(null) inode=17555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=5 name=(null) inode=17557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=6 name=(null) inode=17555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=7 name=(null) inode=17558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=8 name=(null) inode=17558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=9 name=(null) inode=17559 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=10 name=(null) inode=17558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=11 name=(null) inode=17560 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=12 name=(null) inode=17558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=13 name=(null) inode=17561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=14 name=(null) inode=17558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=15 name=(null) inode=17562 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=16 name=(null) inode=17558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=17 name=(null) inode=17563 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=18 name=(null) inode=17555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=19 name=(null) inode=17564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=20 name=(null) inode=17564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=21 name=(null) inode=17565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=22 name=(null) inode=17564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=23 name=(null) inode=17566 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=24 name=(null) inode=17564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=25 name=(null) inode=17567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=26 name=(null) inode=17564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=27 name=(null) inode=17568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=28 name=(null) inode=17564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=29 name=(null) inode=17569 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=30 name=(null) inode=17555 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=31 name=(null) inode=17570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=32 name=(null) inode=17570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=33 name=(null) inode=17571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=34 name=(null) inode=17570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=35 name=(null) inode=17572 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=36 name=(null) inode=17570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=37 name=(null) inode=17573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=38 name=(null) inode=17570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=39 name=(null) inode=17574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=40 name=(null) inode=17570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:36:47.400000 audit: PATH item=41 name=(null) inode=17575 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
03:36:47.400000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 03:36:47.579360 kernel: ipmi_si: IPMI System Interface driver Dec 13 03:36:47.579389 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 03:36:47.610122 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 03:36:47.610230 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 03:36:47.643295 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 03:36:47.643310 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 03:36:47.643322 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 03:36:47.643334 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 03:36:47.801739 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 03:36:47.801850 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Dec 13 03:36:47.801933 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Dec 13 03:36:47.801992 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 03:36:47.802050 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 03:36:47.802061 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 03:36:47.806682 systemd-networkd[1307]: bond0: netdev ready Dec 13 03:36:47.808820 systemd-networkd[1307]: lo: Link UP Dec 13 03:36:47.808823 systemd-networkd[1307]: lo: Gained carrier Dec 13 03:36:47.809294 systemd-networkd[1307]: Enumeration completed Dec 13 03:36:47.809383 systemd[1]: Started systemd-networkd.service. Dec 13 03:36:47.809582 systemd-networkd[1307]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 03:36:47.817899 systemd-networkd[1307]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:d5:7d.network. Dec 13 03:36:47.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:47.859969 kernel: intel_rapl_common: Found RAPL domain package Dec 13 03:36:47.859999 kernel: intel_rapl_common: Found RAPL domain core Dec 13 03:36:47.860012 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Dec 13 03:36:47.860094 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 03:36:47.910357 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Dec 13 03:36:48.037357 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 03:36:48.055359 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 03:36:48.057594 systemd[1]: Finished systemd-udev-settle.service. Dec 13 03:36:48.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.066124 systemd[1]: Starting lvm2-activation-early.service... Dec 13 03:36:48.081139 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:36:48.123759 systemd[1]: Finished lvm2-activation-early.service. 
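The bond0 enumeration above is driven by the networkd files named in the log. Only the path /etc/systemd/network/05-bond0.network appears in the log itself; a minimal sketch of what such a pair could contain follows (the .netdev file and all contents are assumptions, with 802.3ad implied by the LACP warnings later in the boot):

  # /etc/systemd/network/05-bond0.netdev (hypothetical contents)
  [NetDev]
  Name=bond0
  Kind=bond

  [Bond]
  Mode=802.3ad

  # /etc/systemd/network/05-bond0.network (hypothetical contents)
  [Match]
  Name=bond0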
Dec 13 03:36:48.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.131490 systemd[1]: Reached target cryptsetup.target. Dec 13 03:36:48.140032 systemd[1]: Starting lvm2-activation.service... Dec 13 03:36:48.142158 lvm[1372]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:36:48.177786 systemd[1]: Finished lvm2-activation.service. Dec 13 03:36:48.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.185559 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:36:48.193458 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 03:36:48.193473 systemd[1]: Reached target local-fs.target. Dec 13 03:36:48.201444 systemd[1]: Reached target machines.target. Dec 13 03:36:48.210036 systemd[1]: Starting ldconfig.service... Dec 13 03:36:48.216969 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:36:48.216990 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:36:48.217586 systemd[1]: Starting systemd-boot-update.service... Dec 13 03:36:48.224881 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 03:36:48.235129 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 03:36:48.236050 systemd[1]: Starting systemd-sysext.service... Dec 13 03:36:48.236237 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1374 (bootctl) Dec 13 03:36:48.236856 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 03:36:48.249188 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 03:36:48.257007 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 03:36:48.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.263342 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 03:36:48.263444 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 03:36:48.298356 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 03:36:48.381436 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 03:36:48.386924 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 03:36:48.388443 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 03:36:48.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.414366 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Dec 13 03:36:48.414444 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 03:36:48.415112 systemd-networkd[1307]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:d5:7c.network. 
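Each physical port is matched by a per-MAC file (10-1c:34:da:42:d5:7c.network above, and its sibling ending in :7d for the other port). A sketch of the enslaving half, assuming the usual Bond= binding; only the filename is in the log:

  # /etc/systemd/network/10-1c:34:da:42:d5:7c.network (hypothetical contents)
  [Match]
  MACAddress=1c:34:da:42:d5:7c

  [Network]
  Bond=bond0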
Dec 13 03:36:48.447106 systemd-fsck[1385]: fsck.fat 4.2 (2021-01-31) Dec 13 03:36:48.447106 systemd-fsck[1385]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 03:36:48.448085 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 03:36:48.455357 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:36:48.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.466268 systemd[1]: Mounting boot.mount... Dec 13 03:36:48.483101 systemd[1]: Mounted boot.mount. Dec 13 03:36:48.511383 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 03:36:48.541338 (sd-sysext)[1387]: Using extensions 'kubernetes'. Dec 13 03:36:48.541572 (sd-sysext)[1387]: Merged extensions into '/usr'. Dec 13 03:36:48.542249 systemd[1]: Finished systemd-boot-update.service. Dec 13 03:36:48.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.563042 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:36:48.563737 systemd[1]: Mounting usr-share-oem.mount... Dec 13 03:36:48.565359 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 03:36:48.565466 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:36:48.586364 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Dec 13 03:36:48.601575 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:36:48.602252 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:36:48.606357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 03:36:48.620715 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:36:48.626053 systemd-networkd[1307]: bond0: Link UP Dec 13 03:36:48.626262 systemd-networkd[1307]: enp1s0f1np1: Link UP Dec 13 03:36:48.626402 systemd-networkd[1307]: enp1s0f1np1: Gained carrier Dec 13 03:36:48.627496 systemd-networkd[1307]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:d5:7c.network. Dec 13 03:36:48.631907 systemd[1]: Starting modprobe@loop.service... Dec 13 03:36:48.638457 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:36:48.638540 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:36:48.638621 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:36:48.640402 systemd[1]: Mounted usr-share-oem.mount. Dec 13 03:36:48.647580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:36:48.647646 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:36:48.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:36:48.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.656610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:36:48.656671 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:36:48.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.664643 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:36:48.664702 systemd[1]: Finished modprobe@loop.service. Dec 13 03:36:48.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.679702 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:36:48.679790 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:36:48.680379 systemd[1]: Finished systemd-sysext.service. Dec 13 03:36:48.685200 ldconfig[1373]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 03:36:48.685414 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.700563 systemd[1]: Finished ldconfig.service. Dec 13 03:36:48.705393 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:36:48.720045 systemd[1]: Starting ensure-sysext.service... Dec 13 03:36:48.726356 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.742939 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 03:36:48.746359 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.754552 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 03:36:48.758303 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 03:36:48.762486 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 03:36:48.762796 systemd[1]: Reloading. 
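The systemd-tmpfiles warnings above are benign: two packaged fragments declare the same path and the later line is ignored. For reference, the line format being parsed is the standard tmpfiles.d(5) one, for example:

  # Type Path      Mode UID  GID  Age Argument
  d      /run/lock 0755 root root -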
Dec 13 03:36:48.766354 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.786368 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.787518 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-12-13T03:36:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:36:48.787538 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-12-13T03:36:48Z" level=info msg="torcx already run" Dec 13 03:36:48.805362 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.824398 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.842362 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.844006 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:36:48.844015 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:36:48.855095 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:36:48.861397 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.879394 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.897363 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.896000 audit: BPF prog-id=24 op=LOAD Dec 13 03:36:48.896000 audit: BPF prog-id=25 op=LOAD Dec 13 03:36:48.896000 audit: BPF prog-id=18 op=UNLOAD Dec 13 03:36:48.896000 audit: BPF prog-id=19 op=UNLOAD Dec 13 03:36:48.914357 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.913000 audit: BPF prog-id=26 op=LOAD Dec 13 03:36:48.913000 audit: BPF prog-id=15 op=UNLOAD Dec 13 03:36:48.913000 audit: BPF prog-id=27 op=LOAD Dec 13 03:36:48.913000 audit: BPF prog-id=28 op=LOAD Dec 13 03:36:48.913000 audit: BPF prog-id=16 op=UNLOAD Dec 13 03:36:48.913000 audit: BPF prog-id=17 op=UNLOAD Dec 13 03:36:48.913000 audit: BPF prog-id=29 op=LOAD Dec 13 03:36:48.913000 audit: BPF prog-id=20 op=UNLOAD Dec 13 03:36:48.930000 audit: BPF prog-id=30 op=LOAD Dec 13 03:36:48.930000 audit: BPF prog-id=21 op=UNLOAD Dec 13 03:36:48.930000 audit: BPF prog-id=31 op=LOAD Dec 13 03:36:48.931355 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.930000 audit: BPF prog-id=32 op=LOAD Dec 13 03:36:48.930000 audit: BPF prog-id=22 op=UNLOAD Dec 13 03:36:48.930000 audit: BPF prog-id=23 op=UNLOAD Dec 13 03:36:48.932264 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 03:36:48.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:36:48.949357 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.950144 systemd-networkd[1307]: enp1s0f0np0: Link UP Dec 13 03:36:48.950320 systemd-networkd[1307]: bond0: Gained carrier Dec 13 03:36:48.950415 systemd-networkd[1307]: enp1s0f0np0: Gained carrier Dec 13 03:36:48.950461 systemd[1]: Starting audit-rules.service... Dec 13 03:36:48.966391 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Dec 13 03:36:48.966422 kernel: bond0: (slave enp1s0f1np1): link status definitely down, disabling slave Dec 13 03:36:48.966435 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:36:48.998063 systemd[1]: Starting clean-ca-certificates.service... Dec 13 03:36:49.000000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 03:36:49.000000 audit[1491]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8185e770 a2=420 a3=0 items=0 ppid=1475 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:36:49.000000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 03:36:49.002298 augenrules[1491]: No rules Dec 13 03:36:49.017362 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Dec 13 03:36:49.017387 kernel: bond0: active interface up! Dec 13 03:36:49.029068 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 03:36:49.039399 systemd[1]: Starting systemd-resolved.service... Dec 13 03:36:49.042638 systemd-networkd[1307]: enp1s0f1np1: Link DOWN Dec 13 03:36:49.042641 systemd-networkd[1307]: enp1s0f1np1: Lost carrier Dec 13 03:36:49.047486 systemd[1]: Starting systemd-timesyncd.service... Dec 13 03:36:49.054964 systemd[1]: Starting systemd-update-utmp.service... Dec 13 03:36:49.061704 systemd[1]: Finished audit-rules.service. Dec 13 03:36:49.068569 systemd[1]: Finished clean-ca-certificates.service. Dec 13 03:36:49.076545 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 03:36:49.089592 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.090311 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:36:49.097995 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:36:49.104980 systemd[1]: Starting modprobe@loop.service... Dec 13 03:36:49.111467 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.111553 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:36:49.112260 systemd[1]: Starting systemd-update-done.service... Dec 13 03:36:49.119004 systemd-resolved[1497]: Positive Trust Anchors: Dec 13 03:36:49.119011 systemd-resolved[1497]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 03:36:49.119030 systemd-resolved[1497]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 03:36:49.119404 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:36:49.120142 systemd[1]: Started systemd-timesyncd.service. Dec 13 03:36:49.123157 systemd-resolved[1497]: Using system hostname 'ci-3510.3.6-a-ab200a80e9'. Dec 13 03:36:49.128783 systemd[1]: Finished systemd-update-utmp.service. Dec 13 03:36:49.137654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:36:49.137718 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:36:49.145635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:36:49.145698 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:36:49.153647 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:36:49.153707 systemd[1]: Finished modprobe@loop.service. Dec 13 03:36:49.160111 systemd[1]: Finished systemd-update-done.service. Dec 13 03:36:49.169742 systemd[1]: Reached target time-set.target. Dec 13 03:36:49.177581 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.178276 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:36:49.190981 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:36:49.194407 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 03:36:49.197864 systemd-networkd[1307]: enp1s0f1np1: Link UP Dec 13 03:36:49.198025 systemd-networkd[1307]: enp1s0f1np1: Gained carrier Dec 13 03:36:49.200978 systemd[1]: Starting modprobe@loop.service... Dec 13 03:36:49.207479 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.207562 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:36:49.207635 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:36:49.208132 systemd[1]: Started systemd-resolved.service. Dec 13 03:36:49.216709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:36:49.216770 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:36:49.224645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:36:49.224704 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:36:49.232635 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:36:49.232694 systemd[1]: Finished modprobe@loop.service. Dec 13 03:36:49.250026 systemd[1]: Reached target network.target. Dec 13 03:36:49.252383 kernel: bond0: (slave enp1s0f1np1): link status up, enabling it in 200 ms Dec 13 03:36:49.252409 kernel: bond0: (slave enp1s0f1np1): invalid new link 3 on slave Dec 13 03:36:49.274516 systemd[1]: Reached target nss-lookup.target. 
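The positive trust anchor logged above is the root zone's KSK-2017 DS record that systemd-resolved ships built in. An equivalent anchor could be supplied explicitly using the same DS syntax; the file placement below is an assumption, while the record itself is taken from the log:

  # /etc/dnssec-trust-anchors.d/root.positive (hypothetical file)
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d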
Dec 13 03:36:49.282595 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.283178 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:36:49.290933 systemd[1]: Starting modprobe@drm.service... Dec 13 03:36:49.297935 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:36:49.304929 systemd[1]: Starting modprobe@loop.service... Dec 13 03:36:49.311467 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.311546 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:36:49.312139 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 03:36:49.320441 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:36:49.322132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:36:49.322192 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:36:49.330641 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:36:49.330699 systemd[1]: Finished modprobe@drm.service. Dec 13 03:36:49.338626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:36:49.338683 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:36:49.346625 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:36:49.346681 systemd[1]: Finished modprobe@loop.service. Dec 13 03:36:49.354708 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:36:49.354771 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:36:49.354792 systemd[1]: Reached target sysinit.target. Dec 13 03:36:49.363461 systemd[1]: Started motdgen.path. Dec 13 03:36:49.370438 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 03:36:49.380491 systemd[1]: Started logrotate.timer. Dec 13 03:36:49.387465 systemd[1]: Started mdadm.timer. Dec 13 03:36:49.394424 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 03:36:49.402426 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 03:36:49.402439 systemd[1]: Reached target paths.target. Dec 13 03:36:49.409423 systemd[1]: Reached target timers.target. Dec 13 03:36:49.416547 systemd[1]: Listening on dbus.socket. Dec 13 03:36:49.425938 systemd[1]: Starting docker.socket... Dec 13 03:36:49.433762 systemd[1]: Listening on sshd.socket. Dec 13 03:36:49.440488 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:36:49.440512 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.440524 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:36:49.440812 systemd[1]: Finished ensure-sysext.service. Dec 13 03:36:49.449525 systemd[1]: Listening on docker.socket. Dec 13 03:36:49.456858 systemd[1]: Reached target sockets.target. Dec 13 03:36:49.473431 systemd[1]: Reached target basic.target. 
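The modprobe@dm_mod, modprobe@efi_pstore and modprobe@loop units above are instances of systemd's templated one-shot module loader; each SERVICE_START/SERVICE_STOP pair is one invocation. A sketch of the template's essential directives, close to the stock unit but not the exact file shipped in this image:

  # modprobe@.service (sketch)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no

  [Service]
  Type=oneshot
  ExecStart=-/sbin/modprobe -abq %i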
Dec 13 03:36:49.477354 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 03:36:49.483459 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.483473 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:36:49.483921 systemd[1]: Starting containerd.service... Dec 13 03:36:49.490854 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 03:36:49.499929 systemd[1]: Starting coreos-metadata.service... Dec 13 03:36:49.506929 systemd[1]: Starting dbus.service... Dec 13 03:36:49.513075 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 03:36:49.518151 jq[1523]: false Dec 13 03:36:49.520056 systemd[1]: Starting extend-filesystems.service... Dec 13 03:36:49.521222 coreos-metadata[1516]: Dec 13 03:36:49.521 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:36:49.527462 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 03:36:49.527639 dbus-daemon[1522]: [system] SELinux support is enabled Dec 13 03:36:49.528076 systemd[1]: Starting motdgen.service... Dec 13 03:36:49.528333 extend-filesystems[1524]: Found loop1 Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda1 Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda2 Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda3 Dec 13 03:36:49.548547 extend-filesystems[1524]: Found usr Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda4 Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda6 Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda7 Dec 13 03:36:49.548547 extend-filesystems[1524]: Found sda9 Dec 13 03:36:49.548547 extend-filesystems[1524]: Checking size of /dev/sda9 Dec 13 03:36:49.548547 extend-filesystems[1524]: Resized partition /dev/sda9 Dec 13 03:36:49.672392 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Dec 13 03:36:49.672442 coreos-metadata[1519]: Dec 13 03:36:49.531 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:36:49.535185 systemd[1]: Starting prepare-helm.service... Dec 13 03:36:49.672605 extend-filesystems[1540]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 03:36:49.565214 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 03:36:49.584047 systemd[1]: Starting sshd-keygen.service... Dec 13 03:36:49.602967 systemd[1]: Starting systemd-logind.service... Dec 13 03:36:49.619429 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:36:49.620016 systemd[1]: Starting tcsd.service... 
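coreos-metadata resolves the instance's SSH keys and attributes from the Packet (Equinix Metal) endpoint logged above. The same document can be fetched by hand from within the instance once DNS is up (the flag is standard curl, only the URL comes from the log):

  curl -s https://metadata.packet.net/metadata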
Dec 13 03:36:49.687803 update_engine[1553]: I1213 03:36:49.682519 1553 main.cc:92] Flatcar Update Engine starting Dec 13 03:36:49.687803 update_engine[1553]: I1213 03:36:49.685914 1553 update_check_scheduler.cc:74] Next update check in 6m1s Dec 13 03:36:49.624619 systemd-logind[1551]: Watching system buttons on /dev/input/event3 (Power Button) Dec 13 03:36:49.688055 jq[1554]: true Dec 13 03:36:49.624629 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 03:36:49.624639 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Dec 13 03:36:49.624793 systemd-logind[1551]: New seat seat0. Dec 13 03:36:49.632809 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 03:36:49.633236 systemd[1]: Starting update-engine.service... Dec 13 03:36:49.647954 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 03:36:49.664811 systemd[1]: Started dbus.service. Dec 13 03:36:49.681212 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 03:36:49.681297 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 03:36:49.681455 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 03:36:49.681527 systemd[1]: Finished motdgen.service. Dec 13 03:36:49.694995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 03:36:49.695079 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 03:36:49.706216 jq[1558]: true Dec 13 03:36:49.706493 dbus-daemon[1522]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 03:36:49.707597 tar[1556]: linux-amd64/helm Dec 13 03:36:49.712072 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Dec 13 03:36:49.712202 systemd[1]: Condition check resulted in tcsd.service being skipped. Dec 13 03:36:49.712299 systemd[1]: Started systemd-logind.service. Dec 13 03:36:49.715621 env[1559]: time="2024-12-13T03:36:49.715597886Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 03:36:49.724035 env[1559]: time="2024-12-13T03:36:49.723989722Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 03:36:49.724398 env[1559]: time="2024-12-13T03:36:49.724375648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:36:49.725028 env[1559]: time="2024-12-13T03:36:49.724971219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:36:49.725028 env[1559]: time="2024-12-13T03:36:49.724985374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:36:49.726756 systemd[1]: Started update-engine.service. Dec 13 03:36:49.726848 env[1559]: time="2024-12-13T03:36:49.726754682Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:36:49.726848 env[1559]: time="2024-12-13T03:36:49.726766240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 03:36:49.726848 env[1559]: time="2024-12-13T03:36:49.726773802Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 03:36:49.726848 env[1559]: time="2024-12-13T03:36:49.726779031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 03:36:49.726848 env[1559]: time="2024-12-13T03:36:49.726832107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:36:49.727012 env[1559]: time="2024-12-13T03:36:49.726965223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:36:49.727088 env[1559]: time="2024-12-13T03:36:49.727042308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:36:49.727088 env[1559]: time="2024-12-13T03:36:49.727052423Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 03:36:49.728923 env[1559]: time="2024-12-13T03:36:49.728881547Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 03:36:49.728923 env[1559]: time="2024-12-13T03:36:49.728891905Z" level=info msg="metadata content store policy set" policy=shared Dec 13 03:36:49.736276 systemd[1]: Started locksmithd.service. Dec 13 03:36:49.737941 bash[1586]: Updated "/home/core/.ssh/authorized_keys" Dec 13 03:36:49.739962 env[1559]: time="2024-12-13T03:36:49.739934792Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 03:36:49.739962 env[1559]: time="2024-12-13T03:36:49.739953812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 03:36:49.740025 env[1559]: time="2024-12-13T03:36:49.739964494Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 03:36:49.740025 env[1559]: time="2024-12-13T03:36:49.739986988Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740025 env[1559]: time="2024-12-13T03:36:49.740002487Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740025 env[1559]: time="2024-12-13T03:36:49.740013498Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740025 env[1559]: time="2024-12-13T03:36:49.740022722Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740104 env[1559]: time="2024-12-13T03:36:49.740032181Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 03:36:49.740104 env[1559]: time="2024-12-13T03:36:49.740042596Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740104 env[1559]: time="2024-12-13T03:36:49.740052325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740104 env[1559]: time="2024-12-13T03:36:49.740061309Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740104 env[1559]: time="2024-12-13T03:36:49.740071144Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 03:36:49.740176 env[1559]: time="2024-12-13T03:36:49.740129421Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 03:36:49.740193 env[1559]: time="2024-12-13T03:36:49.740173855Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 03:36:49.740322 env[1559]: time="2024-12-13T03:36:49.740314664Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 03:36:49.740344 env[1559]: time="2024-12-13T03:36:49.740330552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740344 env[1559]: time="2024-12-13T03:36:49.740339121Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 03:36:49.740386 env[1559]: time="2024-12-13T03:36:49.740370862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740386 env[1559]: time="2024-12-13T03:36:49.740379566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740417 env[1559]: time="2024-12-13T03:36:49.740386647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740417 env[1559]: time="2024-12-13T03:36:49.740393975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740417 env[1559]: time="2024-12-13T03:36:49.740409819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740482 env[1559]: time="2024-12-13T03:36:49.740417569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740482 env[1559]: time="2024-12-13T03:36:49.740423980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740482 env[1559]: time="2024-12-13T03:36:49.740429974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740482 env[1559]: time="2024-12-13T03:36:49.740437124Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 03:36:49.740546 env[1559]: time="2024-12-13T03:36:49.740498232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740546 env[1559]: time="2024-12-13T03:36:49.740507376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 03:36:49.740546 env[1559]: time="2024-12-13T03:36:49.740513882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740546 env[1559]: time="2024-12-13T03:36:49.740519803Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 03:36:49.740546 env[1559]: time="2024-12-13T03:36:49.740527383Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 03:36:49.740546 env[1559]: time="2024-12-13T03:36:49.740533150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 03:36:49.740546 env[1559]: time="2024-12-13T03:36:49.740543458Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 03:36:49.740648 env[1559]: time="2024-12-13T03:36:49.740564599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 03:36:49.740709 env[1559]: time="2024-12-13T03:36:49.740683049Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.740717545Z" level=info msg="Connect containerd service" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.740737182Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 
13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741002341Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741125102Z" level=info msg="Start subscribing containerd event" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741143838Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741174364Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741175697Z" level=info msg="Start recovering state" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741197230Z" level=info msg="containerd successfully booted in 0.025927s" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741215727Z" level=info msg="Start event monitor" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741236440Z" level=info msg="Start snapshots syncer" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741248184Z" level=info msg="Start cni network conf syncer for default" Dec 13 03:36:49.742230 env[1559]: time="2024-12-13T03:36:49.741254486Z" level=info msg="Start streaming server" Dec 13 03:36:49.743505 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 03:36:49.743627 systemd[1]: Reached target system-config.target. Dec 13 03:36:49.751452 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 03:36:49.751551 systemd[1]: Reached target user-config.target. Dec 13 03:36:49.761015 systemd[1]: Started containerd.service. Dec 13 03:36:49.767607 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 03:36:49.796858 locksmithd[1593]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 03:36:49.967295 tar[1556]: linux-amd64/LICENSE Dec 13 03:36:49.967369 tar[1556]: linux-amd64/README.md Dec 13 03:36:49.969912 systemd[1]: Finished prepare-helm.service. Dec 13 03:36:50.048381 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Dec 13 03:36:50.077527 extend-filesystems[1540]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 03:36:50.077527 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 56 Dec 13 03:36:50.077527 extend-filesystems[1540]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Dec 13 03:36:50.117443 extend-filesystems[1524]: Resized filesystem in /dev/sda9 Dec 13 03:36:50.117443 extend-filesystems[1524]: Found sdb Dec 13 03:36:50.078024 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 03:36:50.078101 systemd[1]: Finished extend-filesystems.service. Dec 13 03:36:50.289456 systemd-networkd[1307]: bond0: Gained IPv6LL Dec 13 03:36:50.334182 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 03:36:50.346124 systemd[1]: Finished sshd-keygen.service. Dec 13 03:36:50.354419 systemd[1]: Starting issuegen.service... Dec 13 03:36:50.361680 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 03:36:50.361781 systemd[1]: Finished issuegen.service. 
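The CRI plugin dump above shows runc driven through io.containerd.runc.v2 with SystemdCgroup:true. Expressed as containerd 1.6 TOML, that corresponds roughly to the following fragment; this is a sketch, since the host's actual config file is not shown in the log:

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true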
Dec 13 03:36:50.369186 systemd[1]: Starting systemd-user-sessions.service... Dec 13 03:36:50.377662 systemd[1]: Finished systemd-user-sessions.service. Dec 13 03:36:50.386082 systemd[1]: Started getty@tty1.service. Dec 13 03:36:50.393035 systemd[1]: Started serial-getty@ttyS1.service. Dec 13 03:36:50.401566 systemd[1]: Reached target getty.target. Dec 13 03:36:50.546631 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 03:36:50.556672 systemd[1]: Reached target network-online.target. Dec 13 03:36:50.565225 systemd[1]: Starting kubelet.service... Dec 13 03:36:51.208430 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Dec 13 03:36:51.259503 systemd[1]: Started kubelet.service. Dec 13 03:36:51.800899 kubelet[1623]: E1213 03:36:51.800846 1623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:36:51.802093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:36:51.802161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 03:36:54.750121 coreos-metadata[1516]: Dec 13 03:36:54.750 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Dec 13 03:36:54.750935 coreos-metadata[1519]: Dec 13 03:36:54.750 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Dec 13 03:36:55.417056 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 03:36:55.422536 login[1616]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 03:36:55.424846 systemd-logind[1551]: New session 1 of user core. Dec 13 03:36:55.425344 systemd[1]: Created slice user-500.slice. Dec 13 03:36:55.425958 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 03:36:55.427117 systemd-logind[1551]: New session 2 of user core. Dec 13 03:36:55.431618 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 03:36:55.432280 systemd[1]: Starting user@500.service... Dec 13 03:36:55.434223 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:36:55.516781 systemd[1645]: Queued start job for default target default.target. Dec 13 03:36:55.517025 systemd[1645]: Reached target paths.target. Dec 13 03:36:55.517036 systemd[1645]: Reached target sockets.target. Dec 13 03:36:55.517044 systemd[1645]: Reached target timers.target. Dec 13 03:36:55.517051 systemd[1645]: Reached target basic.target. Dec 13 03:36:55.517070 systemd[1645]: Reached target default.target. Dec 13 03:36:55.517085 systemd[1645]: Startup finished in 79ms. Dec 13 03:36:55.517135 systemd[1]: Started user@500.service. Dec 13 03:36:55.517666 systemd[1]: Started session-1.scope. Dec 13 03:36:55.518010 systemd[1]: Started session-2.scope. 
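The kubelet failure above has a single cause: /var/lib/kubelet/config.yaml does not exist yet, as it is typically written only when the node is bootstrapped into a cluster (for example by kubeadm). A minimal file of the expected kind would look like this; the contents are hypothetical, only the path appears in the log:

  # /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd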
Dec 13 03:36:55.750598 coreos-metadata[1516]: Dec 13 03:36:55.750 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 03:36:55.751597 coreos-metadata[1519]: Dec 13 03:36:55.750 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 03:36:55.755713 coreos-metadata[1519]: Dec 13 03:36:55.755 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Dec 13 03:36:55.756423 coreos-metadata[1516]: Dec 13 03:36:55.756 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Dec 13 03:36:56.767756 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Dec 13 03:36:56.767909 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Dec 13 03:36:57.387821 systemd[1]: Created slice system-sshd.slice. Dec 13 03:36:57.388529 systemd[1]: Started sshd@0-147.75.202.71:22-139.178.68.195:53242.service. Dec 13 03:36:57.445765 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 53242 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:36:57.446459 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:36:57.448871 systemd-logind[1551]: New session 3 of user core. Dec 13 03:36:57.449330 systemd[1]: Started session-3.scope. Dec 13 03:36:57.499935 systemd[1]: Started sshd@1-147.75.202.71:22-139.178.68.195:53258.service. Dec 13 03:36:57.536201 sshd[1671]: Accepted publickey for core from 139.178.68.195 port 53258 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:36:57.536933 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:36:57.539417 systemd-logind[1551]: New session 4 of user core. Dec 13 03:36:57.539882 systemd[1]: Started session-4.scope. Dec 13 03:36:57.593936 sshd[1671]: pam_unix(sshd:session): session closed for user core Dec 13 03:36:57.595600 systemd[1]: sshd@1-147.75.202.71:22-139.178.68.195:53258.service: Deactivated successfully. Dec 13 03:36:57.595931 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 03:36:57.596243 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Dec 13 03:36:57.596812 systemd[1]: Started sshd@2-147.75.202.71:22-139.178.68.195:53272.service. Dec 13 03:36:57.597277 systemd-logind[1551]: Removed session 4. Dec 13 03:36:57.633789 sshd[1677]: Accepted publickey for core from 139.178.68.195 port 53272 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:36:57.635153 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:36:57.639174 systemd-logind[1551]: New session 5 of user core. Dec 13 03:36:57.640122 systemd[1]: Started session-5.scope. Dec 13 03:36:57.697947 sshd[1677]: pam_unix(sshd:session): session closed for user core Dec 13 03:36:57.699029 systemd[1]: sshd@2-147.75.202.71:22-139.178.68.195:53272.service: Deactivated successfully. Dec 13 03:36:57.699430 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 03:36:57.699822 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit. Dec 13 03:36:57.700296 systemd-logind[1551]: Removed session 5. 
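The RSA SHA256:... value in each "Accepted publickey" line above is the standard OpenSSH key fingerprint. The same string can be derived from the key on disk, which is useful when matching log entries to authorized keys:

  ssh-keygen -lf /home/core/.ssh/authorized_keys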
Dec 13 03:36:57.756073 coreos-metadata[1519]: Dec 13 03:36:57.755 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Dec 13 03:36:57.756551 coreos-metadata[1516]: Dec 13 03:36:57.756 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Dec 13 03:36:58.681793 coreos-metadata[1516]: Dec 13 03:36:58.681 INFO Fetch successful Dec 13 03:36:58.771862 unknown[1516]: wrote ssh authorized keys file for user: core Dec 13 03:36:58.783267 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys" Dec 13 03:36:58.783496 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 03:36:58.930930 coreos-metadata[1519]: Dec 13 03:36:58.930 INFO Fetch successful Dec 13 03:36:58.964268 systemd[1]: Finished coreos-metadata.service. Dec 13 03:36:58.965063 systemd[1]: Started packet-phone-home.service. Dec 13 03:36:58.965175 systemd[1]: Reached target multi-user.target. Dec 13 03:36:58.965798 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 03:36:58.969809 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 03:36:58.969890 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 03:36:58.969980 systemd[1]: Startup finished in 1.860s (kernel) + 23.876s (initrd) + 16.250s (userspace) = 41.988s. Dec 13 03:36:58.970583 curl[1685]: % Total % Received % Xferd Average Speed Time Time Time Current Dec 13 03:36:58.970748 curl[1685]: Dload Upload Total Spent Left Speed Dec 13 03:36:59.298059 curl[1685]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Dec 13 03:36:59.300546 systemd[1]: packet-phone-home.service: Deactivated successfully. Dec 13 03:37:01.997929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 03:37:01.998496 systemd[1]: Stopped kubelet.service. Dec 13 03:37:02.001615 systemd[1]: Starting kubelet.service... Dec 13 03:37:02.172467 systemd[1]: Started kubelet.service. Dec 13 03:37:02.229376 kubelet[1691]: E1213 03:37:02.229354 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:37:02.231448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:37:02.231530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 03:37:07.707753 systemd[1]: Started sshd@3-147.75.202.71:22-139.178.68.195:53102.service. Dec 13 03:37:07.744799 sshd[1710]: Accepted publickey for core from 139.178.68.195 port 53102 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:37:07.745532 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:37:07.747976 systemd-logind[1551]: New session 6 of user core. Dec 13 03:37:07.748411 systemd[1]: Started session-6.scope. Dec 13 03:37:07.800552 sshd[1710]: pam_unix(sshd:session): session closed for user core Dec 13 03:37:07.802058 systemd[1]: sshd@3-147.75.202.71:22-139.178.68.195:53102.service: Deactivated successfully. Dec 13 03:37:07.802364 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 03:37:07.802714 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit.
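Note the restart cadence: kubelet.service failed at 03:36:51 and the "Scheduled restart job" fired at 03:37:01, ten seconds later. That spacing is consistent with the conventional kubeadm drop-in, sketched here from memory rather than read from this host:

    # Hypothetical excerpt of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Restart=always
    RestartSec=10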
Dec 13 03:37:07.803219 systemd[1]: Started sshd@4-147.75.202.71:22-139.178.68.195:53114.service. Dec 13 03:37:07.803713 systemd-logind[1551]: Removed session 6. Dec 13 03:37:07.840635 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 53114 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:37:07.841637 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:37:07.844859 systemd-logind[1551]: New session 7 of user core. Dec 13 03:37:07.845533 systemd[1]: Started session-7.scope. Dec 13 03:37:07.896970 sshd[1716]: pam_unix(sshd:session): session closed for user core Dec 13 03:37:07.898632 systemd[1]: sshd@4-147.75.202.71:22-139.178.68.195:53114.service: Deactivated successfully. Dec 13 03:37:07.898936 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 03:37:07.899260 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit. Dec 13 03:37:07.899833 systemd[1]: Started sshd@5-147.75.202.71:22-139.178.68.195:53118.service. Dec 13 03:37:07.900276 systemd-logind[1551]: Removed session 7. Dec 13 03:37:07.936752 sshd[1722]: Accepted publickey for core from 139.178.68.195 port 53118 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:37:07.937679 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:37:07.940752 systemd-logind[1551]: New session 8 of user core. Dec 13 03:37:07.941350 systemd[1]: Started session-8.scope. Dec 13 03:37:08.006900 sshd[1722]: pam_unix(sshd:session): session closed for user core Dec 13 03:37:08.013880 systemd[1]: sshd@5-147.75.202.71:22-139.178.68.195:53118.service: Deactivated successfully. Dec 13 03:37:08.015517 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 03:37:08.017314 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Dec 13 03:37:08.019963 systemd[1]: Started sshd@6-147.75.202.71:22-139.178.68.195:53124.service. Dec 13 03:37:08.022252 systemd-logind[1551]: Removed session 8. Dec 13 03:37:08.101567 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 53124 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:37:08.102738 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:37:08.106493 systemd-logind[1551]: New session 9 of user core. Dec 13 03:37:08.107325 systemd[1]: Started session-9.scope. Dec 13 03:37:08.188863 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 03:37:08.189547 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 03:37:08.213988 systemd[1]: Starting docker.service... 
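docker.service starts only after the sudo'd /home/core/install.sh runs, which is the usual socket-activation pattern on Flatcar: the daemon stays stopped until a client first touches /run/docker.sock, and systemd then pulls in the service. If that is the mechanism here (the log does not show it directly), it can be confirmed with:

    # Show whether docker.service was activated via docker.socket.
    systemctl status docker.socket docker.service --no-pager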
Dec 13 03:37:08.233309 env[1746]: time="2024-12-13T03:37:08.233281978Z" level=info msg="Starting up" Dec 13 03:37:08.233900 env[1746]: time="2024-12-13T03:37:08.233890287Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 03:37:08.233900 env[1746]: time="2024-12-13T03:37:08.233898886Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 03:37:08.233946 env[1746]: time="2024-12-13T03:37:08.233910649Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 03:37:08.233946 env[1746]: time="2024-12-13T03:37:08.233916676Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 03:37:08.235081 env[1746]: time="2024-12-13T03:37:08.235037506Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 03:37:08.235081 env[1746]: time="2024-12-13T03:37:08.235048423Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 03:37:08.235081 env[1746]: time="2024-12-13T03:37:08.235057665Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 03:37:08.235081 env[1746]: time="2024-12-13T03:37:08.235063215Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 03:37:08.248600 env[1746]: time="2024-12-13T03:37:08.248556370Z" level=info msg="Loading containers: start." Dec 13 03:37:08.439385 kernel: Initializing XFRM netlink socket Dec 13 03:37:08.488829 env[1746]: time="2024-12-13T03:37:08.488800600Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 03:37:08.489635 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. Dec 13 03:37:08.536464 systemd-networkd[1307]: docker0: Link UP Dec 13 03:37:08.552797 env[1746]: time="2024-12-13T03:37:08.552746031Z" level=info msg="Loading containers: done." Dec 13 03:37:08.562461 env[1746]: time="2024-12-13T03:37:08.562413963Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 03:37:08.562702 env[1746]: time="2024-12-13T03:37:08.562649640Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 03:37:08.562818 env[1746]: time="2024-12-13T03:37:08.562773552Z" level=info msg="Daemon has completed initialization" Dec 13 03:37:08.564405 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4040605385-merged.mount: Deactivated successfully. Dec 13 03:37:08.582676 systemd[1]: Started docker.service. Dec 13 03:37:08.598288 env[1746]: time="2024-12-13T03:37:08.598156643Z" level=info msg="API listen on /run/docker.sock" Dec 13 03:37:08.693621 systemd-timesyncd[1498]: Contacted time server [2604:a880:1:20::1fd:1001]:123 (2.flatcar.pool.ntp.org). Dec 13 03:37:08.693681 systemd-timesyncd[1498]: Initial clock synchronization to Fri 2024-12-13 03:37:08.511726 UTC. Dec 13 03:37:09.848115 env[1559]: time="2024-12-13T03:37:09.847973558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 03:37:10.564286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1324765505.mount: Deactivated successfully. 
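The PullImage lines that follow come from containerd's CRI service (the env[1559] process), not from the Docker daemon started above, and they begin while the kubelet is still crash-looping; that pattern matches a bootstrap script pre-pulling control-plane images over the CRI, though the log does not show the caller. A hedged equivalent by hand, assuming containerd's common default socket path:

    # Pre-pull a control-plane image through the CRI (socket path is the common
    # containerd default, not confirmed from this log).
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.30.8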
Dec 13 03:37:11.719003 env[1559]: time="2024-12-13T03:37:11.718943446Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:11.719989 env[1559]: time="2024-12-13T03:37:11.719974312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:11.721004 env[1559]: time="2024-12-13T03:37:11.720992844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:11.722014 env[1559]: time="2024-12-13T03:37:11.721985221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:11.722496 env[1559]: time="2024-12-13T03:37:11.722482191Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 03:37:11.728001 env[1559]: time="2024-12-13T03:37:11.727921841Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 03:37:12.247600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 03:37:12.248100 systemd[1]: Stopped kubelet.service. Dec 13 03:37:12.251414 systemd[1]: Starting kubelet.service... Dec 13 03:37:12.449763 systemd[1]: Started kubelet.service. Dec 13 03:37:12.478691 kubelet[1923]: E1213 03:37:12.478608 1923 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:37:12.479749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:37:12.479821 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 03:37:13.179388 env[1559]: time="2024-12-13T03:37:13.179301486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:13.180219 env[1559]: time="2024-12-13T03:37:13.180193929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:13.181637 env[1559]: time="2024-12-13T03:37:13.181597442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:13.182560 env[1559]: time="2024-12-13T03:37:13.182514723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:13.183389 env[1559]: time="2024-12-13T03:37:13.183333359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 03:37:13.191913 env[1559]: time="2024-12-13T03:37:13.191860764Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 03:37:14.260966 env[1559]: time="2024-12-13T03:37:14.260937032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:14.261825 env[1559]: time="2024-12-13T03:37:14.261811188Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:14.262857 env[1559]: time="2024-12-13T03:37:14.262845175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:14.264297 env[1559]: time="2024-12-13T03:37:14.264256417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:14.264688 env[1559]: time="2024-12-13T03:37:14.264632534Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 03:37:14.270984 env[1559]: time="2024-12-13T03:37:14.270907062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 03:37:15.191495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753249319.mount: Deactivated successfully. 
Dec 13 03:37:15.544209 env[1559]: time="2024-12-13T03:37:15.544187404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:15.544709 env[1559]: time="2024-12-13T03:37:15.544697572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:15.545524 env[1559]: time="2024-12-13T03:37:15.545510979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:15.545995 env[1559]: time="2024-12-13T03:37:15.545980212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:15.546262 env[1559]: time="2024-12-13T03:37:15.546249427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 03:37:15.551754 env[1559]: time="2024-12-13T03:37:15.551718082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 03:37:16.186699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468447402.mount: Deactivated successfully. Dec 13 03:37:16.896483 env[1559]: time="2024-12-13T03:37:16.896426540Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:16.897142 env[1559]: time="2024-12-13T03:37:16.897087076Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:16.898143 env[1559]: time="2024-12-13T03:37:16.898101273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:16.899505 env[1559]: time="2024-12-13T03:37:16.899483032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:16.900407 env[1559]: time="2024-12-13T03:37:16.900344907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 03:37:16.906093 env[1559]: time="2024-12-13T03:37:16.906048920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 03:37:17.437373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692399836.mount: Deactivated successfully. 
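registry.k8s.io/pause:3.9, whose pull completes just below, is not a workload image: it is the sandbox ("infra") container that holds each pod's shared namespaces, and every RunPodSandbox call later in this log implicitly uses it. Which tag the runtime uses is normally pinned in containerd's CRI configuration, roughly as follows for the containerd 1.6 line reported later (an illustrative excerpt, not read from this host):

    # Hypothetical /etc/containerd/config.toml excerpt.
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"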
Dec 13 03:37:17.438727 env[1559]: time="2024-12-13T03:37:17.438708875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:17.439328 env[1559]: time="2024-12-13T03:37:17.439315277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:17.440110 env[1559]: time="2024-12-13T03:37:17.440065031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:17.441137 env[1559]: time="2024-12-13T03:37:17.441123374Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:17.441456 env[1559]: time="2024-12-13T03:37:17.441414801Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 03:37:17.446680 env[1559]: time="2024-12-13T03:37:17.446664330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 03:37:18.030586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053790660.mount: Deactivated successfully. Dec 13 03:37:19.909123 env[1559]: time="2024-12-13T03:37:19.909069780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:19.909728 env[1559]: time="2024-12-13T03:37:19.909681527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:19.911218 env[1559]: time="2024-12-13T03:37:19.911177444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:19.912053 env[1559]: time="2024-12-13T03:37:19.912004925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:19.912548 env[1559]: time="2024-12-13T03:37:19.912504903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 03:37:21.809209 systemd[1]: Stopped kubelet.service. Dec 13 03:37:21.810522 systemd[1]: Starting kubelet.service... Dec 13 03:37:21.821339 systemd[1]: Reloading. 
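The "Reloading." entry re-executes every systemd generator, which is why the torcx generator immediately logs "torcx already run" below and the CPUShares=/MemoryLimit= deprecation warnings for locksmithd.service are printed again. The values systemd actually resolved can be inspected through the modern property names, e.g.:

    # Query the resolved resource-control properties for the warned-about unit.
    systemctl show locksmithd.service -p CPUWeight -p MemoryMax --no-pager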
Dec 13 03:37:21.858986 /usr/lib/systemd/system-generators/torcx-generator[2128]: time="2024-12-13T03:37:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:37:21.859012 /usr/lib/systemd/system-generators/torcx-generator[2128]: time="2024-12-13T03:37:21Z" level=info msg="torcx already run" Dec 13 03:37:21.921045 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:37:21.921057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:37:21.934578 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:37:21.994488 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 03:37:21.994523 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 03:37:21.994623 systemd[1]: Stopped kubelet.service. Dec 13 03:37:21.995369 systemd[1]: Starting kubelet.service... Dec 13 03:37:22.205917 systemd[1]: Started kubelet.service. Dec 13 03:37:22.277625 kubelet[2194]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:37:22.277625 kubelet[2194]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 03:37:22.277625 kubelet[2194]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:37:22.279671 kubelet[2194]: I1213 03:37:22.279495 2194 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 03:37:22.662514 kubelet[2194]: I1213 03:37:22.662467 2194 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 03:37:22.662514 kubelet[2194]: I1213 03:37:22.662479 2194 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 03:37:22.662667 kubelet[2194]: I1213 03:37:22.662626 2194 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 03:37:22.710780 kubelet[2194]: E1213 03:37:22.710681 2194 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.202.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.713798 kubelet[2194]: I1213 03:37:22.713730 2194 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 03:37:22.776826 kubelet[2194]: I1213 03:37:22.776765 2194 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 03:37:22.780385 kubelet[2194]: I1213 03:37:22.780284 2194 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 03:37:22.781020 kubelet[2194]: I1213 03:37:22.780539 2194 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-ab200a80e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 03:37:22.781549 kubelet[2194]: I1213 03:37:22.781042 2194 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 03:37:22.781549 kubelet[2194]: I1213 03:37:22.781090 2194 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 03:37:22.781549 kubelet[2194]: I1213 03:37:22.781410 2194 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:37:22.783447 kubelet[2194]: I1213 03:37:22.783416 2194 kubelet.go:400] "Attempting to sync node with API server" Dec 13 03:37:22.783611 kubelet[2194]: I1213 03:37:22.783451 2194 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 03:37:22.783611 kubelet[2194]: I1213 03:37:22.783503 2194 kubelet.go:312] "Adding apiserver pod source" Dec 13 03:37:22.783611 kubelet[2194]: I1213 03:37:22.783548 2194 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 03:37:22.784263 kubelet[2194]: W1213 03:37:22.784183 2194 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.202.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-ab200a80e9&limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.784405 kubelet[2194]: E1213 03:37:22.784286 2194 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.202.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-ab200a80e9&limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.784405 kubelet[2194]: W1213 03:37:22.784331 2194 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.784643 kubelet[2194]: E1213 03:37:22.784437 2194 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.202.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.792333 kubelet[2194]: I1213 03:37:22.792304 2194 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 03:37:22.798190 kubelet[2194]: I1213 03:37:22.798143 2194 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 03:37:22.798265 kubelet[2194]: W1213 03:37:22.798202 2194 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 03:37:22.798762 kubelet[2194]: I1213 03:37:22.798745 2194 server.go:1264] "Started kubelet" Dec 13 03:37:22.798871 kubelet[2194]: I1213 03:37:22.798802 2194 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 03:37:22.800319 kubelet[2194]: I1213 03:37:22.798847 2194 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 03:37:22.800999 kubelet[2194]: I1213 03:37:22.800963 2194 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 03:37:22.811740 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 03:37:22.811878 kubelet[2194]: I1213 03:37:22.811820 2194 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 03:37:22.815561 kubelet[2194]: I1213 03:37:22.815523 2194 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 03:37:22.815621 kubelet[2194]: I1213 03:37:22.815596 2194 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 03:37:22.815649 kubelet[2194]: I1213 03:37:22.815629 2194 reconciler.go:26] "Reconciler: start to sync state" Dec 13 03:37:22.815769 kubelet[2194]: E1213 03:37:22.815716 2194 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 03:37:22.815842 kubelet[2194]: I1213 03:37:22.815833 2194 server.go:455] "Adding debug handlers to kubelet server" Dec 13 03:37:22.815878 kubelet[2194]: W1213 03:37:22.815857 2194 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.815904 kubelet[2194]: E1213 03:37:22.815886 2194 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.202.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.815998 kubelet[2194]: E1213 03:37:22.815974 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-ab200a80e9?timeout=10s\": dial tcp 147.75.202.71:6443: connect: connection refused" interval="200ms" Dec 13 03:37:22.816067 kubelet[2194]: E1213 03:37:22.816011 2194 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.71:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.71:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-ab200a80e9.18109f54badc7eef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-ab200a80e9,UID:ci-3510.3.6-a-ab200a80e9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-ab200a80e9,},FirstTimestamp:2024-12-13 03:37:22.798722799 +0000 UTC m=+0.589953497,LastTimestamp:2024-12-13 03:37:22.798722799 +0000 UTC m=+0.589953497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-ab200a80e9,}" Dec 13 03:37:22.816128 kubelet[2194]: I1213 03:37:22.816121 2194 factory.go:221] Registration of the systemd container factory successfully Dec 13 03:37:22.816168 kubelet[2194]: I1213 03:37:22.816158 2194 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 03:37:22.816607 kubelet[2194]: I1213 03:37:22.816600 2194 factory.go:221] Registration of the containerd container factory successfully Dec 13 03:37:22.823151 kubelet[2194]: I1213 03:37:22.823141 2194 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 03:37:22.823151 kubelet[2194]: I1213 03:37:22.823147 2194 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 03:37:22.823232 kubelet[2194]: I1213 03:37:22.823156 2194 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:37:22.823972 kubelet[2194]: I1213 03:37:22.823932 2194 policy_none.go:49] "None policy: Start" Dec 13 03:37:22.823972 kubelet[2194]: I1213 03:37:22.823965 2194 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 03:37:22.824205 kubelet[2194]: I1213 03:37:22.824197 2194 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 03:37:22.824235 kubelet[2194]: I1213 03:37:22.824212 2194 state_mem.go:35] "Initializing new in-memory state store" Dec 13 03:37:22.824478 kubelet[2194]: I1213 03:37:22.824467 2194 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 03:37:22.824572 kubelet[2194]: I1213 03:37:22.824481 2194 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 03:37:22.824572 kubelet[2194]: I1213 03:37:22.824493 2194 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 03:37:22.824572 kubelet[2194]: E1213 03:37:22.824519 2194 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 03:37:22.824753 kubelet[2194]: W1213 03:37:22.824726 2194 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.202.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.824782 kubelet[2194]: E1213 03:37:22.824763 2194 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.202.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:22.826486 systemd[1]: Created slice kubepods.slice. Dec 13 03:37:22.828569 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 03:37:22.829892 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 03:37:22.843976 kubelet[2194]: I1213 03:37:22.843935 2194 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 03:37:22.844079 kubelet[2194]: I1213 03:37:22.844029 2194 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 03:37:22.844130 kubelet[2194]: I1213 03:37:22.844098 2194 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 03:37:22.844537 kubelet[2194]: E1213 03:37:22.844527 2194 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:22.922072 kubelet[2194]: I1213 03:37:22.921875 2194 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:22.922774 kubelet[2194]: E1213 03:37:22.922671 2194 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.71:6443/api/v1/nodes\": dial tcp 147.75.202.71:6443: connect: connection refused" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:22.924899 kubelet[2194]: I1213 03:37:22.924781 2194 topology_manager.go:215] "Topology Admit Handler" podUID="631a00df80e8d173006c5bd7a4702707" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:22.927697 kubelet[2194]: I1213 03:37:22.927667 2194 topology_manager.go:215] "Topology Admit Handler" podUID="b26e98d54b39a8b29bcb09a3fa81e726" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:22.928412 kubelet[2194]: I1213 03:37:22.928400 2194 topology_manager.go:215] "Topology Admit Handler" podUID="86583e677fdd5998f7d0ffb23d7b77c4" podNamespace="kube-system" 
podName="kube-scheduler-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:22.931375 systemd[1]: Created slice kubepods-burstable-pod631a00df80e8d173006c5bd7a4702707.slice. Dec 13 03:37:22.944542 systemd[1]: Created slice kubepods-burstable-podb26e98d54b39a8b29bcb09a3fa81e726.slice. Dec 13 03:37:22.957401 systemd[1]: Created slice kubepods-burstable-pod86583e677fdd5998f7d0ffb23d7b77c4.slice. Dec 13 03:37:23.016962 kubelet[2194]: E1213 03:37:23.016835 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-ab200a80e9?timeout=10s\": dial tcp 147.75.202.71:6443: connect: connection refused" interval="400ms" Dec 13 03:37:23.017204 kubelet[2194]: I1213 03:37:23.017006 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017204 kubelet[2194]: I1213 03:37:23.017091 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017204 kubelet[2194]: I1213 03:37:23.017145 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017204 kubelet[2194]: I1213 03:37:23.017193 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017647 kubelet[2194]: I1213 03:37:23.017239 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017647 kubelet[2194]: I1213 03:37:23.017285 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/631a00df80e8d173006c5bd7a4702707-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" (UID: \"631a00df80e8d173006c5bd7a4702707\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017647 kubelet[2194]: I1213 03:37:23.017326 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/631a00df80e8d173006c5bd7a4702707-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" (UID: \"631a00df80e8d173006c5bd7a4702707\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017647 kubelet[2194]: I1213 03:37:23.017402 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/631a00df80e8d173006c5bd7a4702707-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" (UID: \"631a00df80e8d173006c5bd7a4702707\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.017647 kubelet[2194]: I1213 03:37:23.017450 2194 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86583e677fdd5998f7d0ffb23d7b77c4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-ab200a80e9\" (UID: \"86583e677fdd5998f7d0ffb23d7b77c4\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.127049 kubelet[2194]: I1213 03:37:23.126992 2194 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.127771 kubelet[2194]: E1213 03:37:23.127712 2194 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.71:6443/api/v1/nodes\": dial tcp 147.75.202.71:6443: connect: connection refused" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.245734 env[1559]: time="2024-12-13T03:37:23.245539605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-ab200a80e9,Uid:631a00df80e8d173006c5bd7a4702707,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:23.256503 env[1559]: time="2024-12-13T03:37:23.256386299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-ab200a80e9,Uid:b26e98d54b39a8b29bcb09a3fa81e726,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:23.260726 env[1559]: time="2024-12-13T03:37:23.260612885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-ab200a80e9,Uid:86583e677fdd5998f7d0ffb23d7b77c4,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:23.418828 kubelet[2194]: E1213 03:37:23.418694 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-ab200a80e9?timeout=10s\": dial tcp 147.75.202.71:6443: connect: connection refused" interval="800ms" Dec 13 03:37:23.531809 kubelet[2194]: I1213 03:37:23.531715 2194 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.532595 kubelet[2194]: E1213 03:37:23.532475 2194 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.71:6443/api/v1/nodes\": dial tcp 147.75.202.71:6443: connect: connection refused" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:23.620083 kubelet[2194]: W1213 03:37:23.619918 2194 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.202.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-ab200a80e9&limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:23.620083 kubelet[2194]: E1213 03:37:23.620064 2194 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://147.75.202.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-ab200a80e9&limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:23.817670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981996445.mount: Deactivated successfully. Dec 13 03:37:23.839593 env[1559]: time="2024-12-13T03:37:23.839461512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.841657 env[1559]: time="2024-12-13T03:37:23.841601935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.843449 env[1559]: time="2024-12-13T03:37:23.843343889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.848309 env[1559]: time="2024-12-13T03:37:23.848192667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.856410 env[1559]: time="2024-12-13T03:37:23.856293995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.867116 env[1559]: time="2024-12-13T03:37:23.867043848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.872883 env[1559]: time="2024-12-13T03:37:23.872798734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.874153 env[1559]: time="2024-12-13T03:37:23.874081206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.875334 env[1559]: time="2024-12-13T03:37:23.875293874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.876571 env[1559]: time="2024-12-13T03:37:23.876521332Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.877879 env[1559]: time="2024-12-13T03:37:23.877810286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.879486 env[1559]: time="2024-12-13T03:37:23.879441125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:23.882909 env[1559]: time="2024-12-13T03:37:23.882840201Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:23.882909 env[1559]: time="2024-12-13T03:37:23.882882430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:23.882909 env[1559]: time="2024-12-13T03:37:23.882903438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:23.883193 env[1559]: time="2024-12-13T03:37:23.883129464Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/212ae7318b9c567557f9469edc69ebbaa97aa0cd00589a92a8e7031701cf2c0f pid=2246 runtime=io.containerd.runc.v2 Dec 13 03:37:23.885131 env[1559]: time="2024-12-13T03:37:23.885059424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:23.885272 env[1559]: time="2024-12-13T03:37:23.885109495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:23.885272 env[1559]: time="2024-12-13T03:37:23.885134008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:23.885436 env[1559]: time="2024-12-13T03:37:23.885325787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f9e3c932bd5f3ac1252035215413d2d182dd5d05d1e9bc79a72a7914e1a8ede pid=2263 runtime=io.containerd.runc.v2 Dec 13 03:37:23.886416 env[1559]: time="2024-12-13T03:37:23.886333066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:23.886507 env[1559]: time="2024-12-13T03:37:23.886407091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:23.886507 env[1559]: time="2024-12-13T03:37:23.886433560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:23.886672 env[1559]: time="2024-12-13T03:37:23.886628848Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfd0a8a7ba0e94dff1feaa822335d0187e34a278bb4a4f3060666b3abb169424 pid=2273 runtime=io.containerd.runc.v2 Dec 13 03:37:23.895197 systemd[1]: Started cri-containerd-212ae7318b9c567557f9469edc69ebbaa97aa0cd00589a92a8e7031701cf2c0f.scope. Dec 13 03:37:23.896275 systemd[1]: Started cri-containerd-8f9e3c932bd5f3ac1252035215413d2d182dd5d05d1e9bc79a72a7914e1a8ede.scope. Dec 13 03:37:23.898552 systemd[1]: Started cri-containerd-dfd0a8a7ba0e94dff1feaa822335d0187e34a278bb4a4f3060666b3abb169424.scope. 
Dec 13 03:37:23.920341 env[1559]: time="2024-12-13T03:37:23.920316423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-ab200a80e9,Uid:631a00df80e8d173006c5bd7a4702707,Namespace:kube-system,Attempt:0,} returns sandbox id \"212ae7318b9c567557f9469edc69ebbaa97aa0cd00589a92a8e7031701cf2c0f\"" Dec 13 03:37:23.920911 env[1559]: time="2024-12-13T03:37:23.920894526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-ab200a80e9,Uid:b26e98d54b39a8b29bcb09a3fa81e726,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f9e3c932bd5f3ac1252035215413d2d182dd5d05d1e9bc79a72a7914e1a8ede\"" Dec 13 03:37:23.921888 env[1559]: time="2024-12-13T03:37:23.921872423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-ab200a80e9,Uid:86583e677fdd5998f7d0ffb23d7b77c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfd0a8a7ba0e94dff1feaa822335d0187e34a278bb4a4f3060666b3abb169424\"" Dec 13 03:37:23.922443 env[1559]: time="2024-12-13T03:37:23.922430255Z" level=info msg="CreateContainer within sandbox \"8f9e3c932bd5f3ac1252035215413d2d182dd5d05d1e9bc79a72a7914e1a8ede\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 03:37:23.922497 env[1559]: time="2024-12-13T03:37:23.922483501Z" level=info msg="CreateContainer within sandbox \"212ae7318b9c567557f9469edc69ebbaa97aa0cd00589a92a8e7031701cf2c0f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 03:37:23.922762 env[1559]: time="2024-12-13T03:37:23.922747483Z" level=info msg="CreateContainer within sandbox \"dfd0a8a7ba0e94dff1feaa822335d0187e34a278bb4a4f3060666b3abb169424\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 03:37:23.930178 env[1559]: time="2024-12-13T03:37:23.930129441Z" level=info msg="CreateContainer within sandbox \"212ae7318b9c567557f9469edc69ebbaa97aa0cd00589a92a8e7031701cf2c0f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"04cee88307138c7ae792d4294e218ae6f1ff0c4489be99b3dc6f031b2eabf022\"" Dec 13 03:37:23.930493 env[1559]: time="2024-12-13T03:37:23.930442851Z" level=info msg="StartContainer for \"04cee88307138c7ae792d4294e218ae6f1ff0c4489be99b3dc6f031b2eabf022\"" Dec 13 03:37:23.930624 env[1559]: time="2024-12-13T03:37:23.930581610Z" level=info msg="CreateContainer within sandbox \"8f9e3c932bd5f3ac1252035215413d2d182dd5d05d1e9bc79a72a7914e1a8ede\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"99f6f94ba8615c3b76b697fba2db1e8bd5afeb8dbfbf08f4e493d8975d5c2810\"" Dec 13 03:37:23.930797 env[1559]: time="2024-12-13T03:37:23.930742218Z" level=info msg="StartContainer for \"99f6f94ba8615c3b76b697fba2db1e8bd5afeb8dbfbf08f4e493d8975d5c2810\"" Dec 13 03:37:23.931420 env[1559]: time="2024-12-13T03:37:23.931405422Z" level=info msg="CreateContainer within sandbox \"dfd0a8a7ba0e94dff1feaa822335d0187e34a278bb4a4f3060666b3abb169424\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5859ecfb6ee2d74b7e73ed7ad074b4e318e3103b28bfa2ca823afb3fef52f7e5\"" Dec 13 03:37:23.931560 env[1559]: time="2024-12-13T03:37:23.931550499Z" level=info msg="StartContainer for \"5859ecfb6ee2d74b7e73ed7ad074b4e318e3103b28bfa2ca823afb3fef52f7e5\"" Dec 13 03:37:23.939055 systemd[1]: Started cri-containerd-04cee88307138c7ae792d4294e218ae6f1ff0c4489be99b3dc6f031b2eabf022.scope. 
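Each container id returned by CreateContainer reappears as a transient cri-containerd-<id>.scope unit when containerd starts it, which is how these containers end up under systemd's cgroup management (consistent with the "CgroupDriver":"systemd" setting logged earlier). For example:

    # Inspect the scope systemd created for the kube-apiserver container above.
    systemctl status cri-containerd-04cee88307138c7ae792d4294e218ae6f1ff0c4489be99b3dc6f031b2eabf022.scope --no-pager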
Dec 13 03:37:23.939654 systemd[1]: Started cri-containerd-5859ecfb6ee2d74b7e73ed7ad074b4e318e3103b28bfa2ca823afb3fef52f7e5.scope. Dec 13 03:37:23.940182 systemd[1]: Started cri-containerd-99f6f94ba8615c3b76b697fba2db1e8bd5afeb8dbfbf08f4e493d8975d5c2810.scope. Dec 13 03:37:23.950163 kubelet[2194]: W1213 03:37:23.950125 2194 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:23.950163 kubelet[2194]: E1213 03:37:23.950166 2194 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.202.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.71:6443: connect: connection refused Dec 13 03:37:23.963482 env[1559]: time="2024-12-13T03:37:23.963454613Z" level=info msg="StartContainer for \"5859ecfb6ee2d74b7e73ed7ad074b4e318e3103b28bfa2ca823afb3fef52f7e5\" returns successfully" Dec 13 03:37:23.963593 env[1559]: time="2024-12-13T03:37:23.963572749Z" level=info msg="StartContainer for \"04cee88307138c7ae792d4294e218ae6f1ff0c4489be99b3dc6f031b2eabf022\" returns successfully" Dec 13 03:37:23.964987 env[1559]: time="2024-12-13T03:37:23.964971923Z" level=info msg="StartContainer for \"99f6f94ba8615c3b76b697fba2db1e8bd5afeb8dbfbf08f4e493d8975d5c2810\" returns successfully" Dec 13 03:37:24.334148 kubelet[2194]: I1213 03:37:24.334129 2194 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:24.393748 kubelet[2194]: E1213 03:37:24.393725 2194 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-ab200a80e9\" not found" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:24.497858 kubelet[2194]: I1213 03:37:24.497836 2194 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:24.502193 kubelet[2194]: E1213 03:37:24.502180 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:24.602641 kubelet[2194]: E1213 03:37:24.602593 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:24.703025 kubelet[2194]: E1213 03:37:24.702975 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:24.803595 kubelet[2194]: E1213 03:37:24.803496 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:24.903855 kubelet[2194]: E1213 03:37:24.903632 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:25.004754 kubelet[2194]: E1213 03:37:25.004642 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:25.104904 kubelet[2194]: E1213 03:37:25.104784 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:25.205201 kubelet[2194]: E1213 03:37:25.204985 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 
13 03:37:25.305526 kubelet[2194]: E1213 03:37:25.305423 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:25.406331 kubelet[2194]: E1213 03:37:25.406271 2194 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-ab200a80e9\" not found" Dec 13 03:37:25.786088 kubelet[2194]: I1213 03:37:25.786038 2194 apiserver.go:52] "Watching apiserver" Dec 13 03:37:25.816824 kubelet[2194]: I1213 03:37:25.816765 2194 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 03:37:25.850834 kubelet[2194]: W1213 03:37:25.850777 2194 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:37:27.195345 systemd[1]: Reloading. Dec 13 03:37:27.229583 /usr/lib/systemd/system-generators/torcx-generator[2533]: time="2024-12-13T03:37:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:37:27.229601 /usr/lib/systemd/system-generators/torcx-generator[2533]: time="2024-12-13T03:37:27Z" level=info msg="torcx already run" Dec 13 03:37:27.307618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:37:27.307629 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:37:27.322190 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:37:27.390333 systemd[1]: Stopping kubelet.service... Dec 13 03:37:27.403760 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 03:37:27.403855 systemd[1]: Stopped kubelet.service. Dec 13 03:37:27.403878 systemd[1]: kubelet.service: Consumed 1.063s CPU time. Dec 13 03:37:27.404740 systemd[1]: Starting kubelet.service... Dec 13 03:37:27.599284 systemd[1]: Started kubelet.service. Dec 13 03:37:27.631519 kubelet[2597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:37:27.631519 kubelet[2597]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 03:37:27.631519 kubelet[2597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
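The systemd reload and kubelet restart above surface three distinct migrations in the unit files and flags: CPUShares= should become CPUWeight=, MemoryLimit= should become MemoryMax=, and the deprecated kubelet flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) belong in the KubeletConfiguration file instead. A hedged helper sketch (not a systemd or kubelet tool, just a grep over a journal dump like this one) keeps those exact complaints visible:

```python
import re

# Patterns taken verbatim from the warnings logged above; the advice
# strings repeat what the log itself suggests.
MIGRATIONS = [
    (r'CPUShares=', 'use CPUWeight= instead'),
    (r'MemoryLimit=', 'use MemoryMax= instead'),
    (r'Flag --[\w-]+ has been deprecated',
     'set via the config file named by the kubelet --config flag'),
]

def report_deprecations(journal_text):
    for pattern, advice in MIGRATIONS:
        hits = re.findall(pattern, journal_text)
        if hits:
            print(f'{pattern!r}: {len(hits)} hit(s) -> {advice}')
```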
Dec 13 03:37:27.632035 kubelet[2597]: I1213 03:37:27.631526 2597 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 03:37:27.635436 kubelet[2597]: I1213 03:37:27.635380 2597 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 03:37:27.635436 kubelet[2597]: I1213 03:37:27.635402 2597 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 03:37:27.635594 kubelet[2597]: I1213 03:37:27.635553 2597 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 03:37:27.636547 kubelet[2597]: I1213 03:37:27.636533 2597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 03:37:27.637631 kubelet[2597]: I1213 03:37:27.637551 2597 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 03:37:27.671529 kubelet[2597]: I1213 03:37:27.671437 2597 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 03:37:27.672069 kubelet[2597]: I1213 03:37:27.671939 2597 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 03:37:27.672464 kubelet[2597]: I1213 03:37:27.672020 2597 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-ab200a80e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 03:37:27.672775 kubelet[2597]: I1213 03:37:27.672490 2597 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 03:37:27.672775 kubelet[2597]: I1213 03:37:27.672521 2597 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 03:37:27.672775 kubelet[2597]: I1213 03:37:27.672607 2597 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:37:27.673100 kubelet[2597]: I1213 03:37:27.672798 2597 kubelet.go:400] "Attempting to sync node with API server" Dec 13 03:37:27.673100 kubelet[2597]: I1213 03:37:27.672828 2597 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Dec 13 03:37:27.673100 kubelet[2597]: I1213 03:37:27.672875 2597 kubelet.go:312] "Adding apiserver pod source" Dec 13 03:37:27.673100 kubelet[2597]: I1213 03:37:27.672910 2597 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 03:37:27.674925 kubelet[2597]: I1213 03:37:27.674844 2597 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 03:37:27.675759 kubelet[2597]: I1213 03:37:27.675715 2597 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 03:37:27.677247 kubelet[2597]: I1213 03:37:27.677192 2597 server.go:1264] "Started kubelet" Dec 13 03:37:27.677910 kubelet[2597]: I1213 03:37:27.677746 2597 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 03:37:27.678142 kubelet[2597]: I1213 03:37:27.677781 2597 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 03:37:27.678752 kubelet[2597]: I1213 03:37:27.678681 2597 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 03:37:27.680912 kubelet[2597]: I1213 03:37:27.680868 2597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 03:37:27.681102 kubelet[2597]: E1213 03:37:27.681027 2597 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 03:37:27.681102 kubelet[2597]: I1213 03:37:27.681072 2597 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 03:37:27.681387 kubelet[2597]: I1213 03:37:27.681138 2597 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 03:37:27.681536 kubelet[2597]: I1213 03:37:27.681480 2597 reconciler.go:26] "Reconciler: start to sync state" Dec 13 03:37:27.681961 kubelet[2597]: I1213 03:37:27.681928 2597 factory.go:221] Registration of the systemd container factory successfully Dec 13 03:37:27.682110 kubelet[2597]: I1213 03:37:27.682085 2597 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 03:37:27.682234 kubelet[2597]: I1213 03:37:27.682210 2597 server.go:455] "Adding debug handlers to kubelet server" Dec 13 03:37:27.683957 kubelet[2597]: I1213 03:37:27.683931 2597 factory.go:221] Registration of the containerd container factory successfully Dec 13 03:37:27.693971 kubelet[2597]: I1213 03:37:27.693929 2597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 03:37:27.697482 kubelet[2597]: I1213 03:37:27.697446 2597 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 03:37:27.697482 kubelet[2597]: I1213 03:37:27.697483 2597 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 03:37:27.697691 kubelet[2597]: I1213 03:37:27.697507 2597 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 03:37:27.697691 kubelet[2597]: E1213 03:37:27.697558 2597 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 03:37:27.715128 kubelet[2597]: I1213 03:37:27.715072 2597 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 03:37:27.715128 kubelet[2597]: I1213 03:37:27.715087 2597 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 03:37:27.715128 kubelet[2597]: I1213 03:37:27.715102 2597 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:37:27.715265 kubelet[2597]: I1213 03:37:27.715227 2597 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 03:37:27.715265 kubelet[2597]: I1213 03:37:27.715237 2597 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 03:37:27.715265 kubelet[2597]: I1213 03:37:27.715251 2597 policy_none.go:49] "None policy: Start" Dec 13 03:37:27.715603 kubelet[2597]: I1213 03:37:27.715594 2597 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 03:37:27.715651 kubelet[2597]: I1213 03:37:27.715607 2597 state_mem.go:35] "Initializing new in-memory state store" Dec 13 03:37:27.715708 kubelet[2597]: I1213 03:37:27.715702 2597 state_mem.go:75] "Updated machine memory state" Dec 13 03:37:27.717710 kubelet[2597]: I1213 03:37:27.717701 2597 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 03:37:27.717846 kubelet[2597]: I1213 03:37:27.717787 2597 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 03:37:27.717890 kubelet[2597]: I1213 03:37:27.717847 2597 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 03:37:27.784674 kubelet[2597]: I1213 03:37:27.784645 2597 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.789463 kubelet[2597]: I1213 03:37:27.789399 2597 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.789612 kubelet[2597]: I1213 03:37:27.789504 2597 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.798330 kubelet[2597]: I1213 03:37:27.798223 2597 topology_manager.go:215] "Topology Admit Handler" podUID="631a00df80e8d173006c5bd7a4702707" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.798524 kubelet[2597]: I1213 03:37:27.798396 2597 topology_manager.go:215] "Topology Admit Handler" podUID="b26e98d54b39a8b29bcb09a3fa81e726" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.798664 kubelet[2597]: I1213 03:37:27.798552 2597 topology_manager.go:215] "Topology Admit Handler" podUID="86583e677fdd5998f7d0ffb23d7b77c4" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.805545 kubelet[2597]: W1213 03:37:27.805453 2597 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:37:27.808475 kubelet[2597]: W1213 03:37:27.808394 2597 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:37:27.808475 kubelet[2597]: W1213 03:37:27.808399 2597 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:37:27.808796 kubelet[2597]: E1213 03:37:27.808560 2597 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882307 kubelet[2597]: I1213 03:37:27.882079 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/631a00df80e8d173006c5bd7a4702707-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" (UID: \"631a00df80e8d173006c5bd7a4702707\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882307 kubelet[2597]: I1213 03:37:27.882167 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882307 kubelet[2597]: I1213 03:37:27.882230 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882307 kubelet[2597]: I1213 03:37:27.882280 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/631a00df80e8d173006c5bd7a4702707-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" (UID: \"631a00df80e8d173006c5bd7a4702707\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882950 kubelet[2597]: I1213 03:37:27.882445 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882950 kubelet[2597]: I1213 03:37:27.882564 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882950 kubelet[2597]: I1213 03:37:27.882640 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b26e98d54b39a8b29bcb09a3fa81e726-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" (UID: \"b26e98d54b39a8b29bcb09a3fa81e726\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882950 kubelet[2597]: I1213 03:37:27.882712 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86583e677fdd5998f7d0ffb23d7b77c4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-ab200a80e9\" (UID: \"86583e677fdd5998f7d0ffb23d7b77c4\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:27.882950 kubelet[2597]: I1213 03:37:27.882786 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/631a00df80e8d173006c5bd7a4702707-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" (UID: \"631a00df80e8d173006c5bd7a4702707\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:28.217453 sudo[2640]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 03:37:28.218134 sudo[2640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 03:37:28.594216 sudo[2640]: pam_unix(sudo:session): session closed for user root Dec 13 03:37:28.673670 kubelet[2597]: I1213 03:37:28.673623 2597 apiserver.go:52] "Watching apiserver" Dec 13 03:37:28.681328 kubelet[2597]: I1213 03:37:28.681316 2597 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 03:37:28.711379 kubelet[2597]: W1213 03:37:28.711364 2597 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:37:28.711475 kubelet[2597]: E1213 03:37:28.711396 2597 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-ab200a80e9\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:28.711771 kubelet[2597]: W1213 03:37:28.711761 2597 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:37:28.711822 kubelet[2597]: W1213 03:37:28.711786 2597 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:37:28.711822 kubelet[2597]: E1213 03:37:28.711793 2597 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-ab200a80e9\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:28.711822 kubelet[2597]: E1213 03:37:28.711809 2597 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.6-a-ab200a80e9\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.6-a-ab200a80e9" Dec 13 03:37:28.718378 kubelet[2597]: I1213 03:37:28.718344 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-ab200a80e9" podStartSLOduration=3.718327183 podStartE2EDuration="3.718327183s" podCreationTimestamp="2024-12-13 03:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:37:28.718148101 +0000 UTC m=+1.116077340" watchObservedRunningTime="2024-12-13 03:37:28.718327183 +0000 UTC m=+1.116256422" Dec 13 03:37:28.723097 kubelet[2597]: 
I1213 03:37:28.723079 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-ab200a80e9" podStartSLOduration=1.723072749 podStartE2EDuration="1.723072749s" podCreationTimestamp="2024-12-13 03:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:37:28.722984595 +0000 UTC m=+1.120913834" watchObservedRunningTime="2024-12-13 03:37:28.723072749 +0000 UTC m=+1.121001987" Dec 13 03:37:28.727670 kubelet[2597]: I1213 03:37:28.727654 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-ab200a80e9" podStartSLOduration=1.727649041 podStartE2EDuration="1.727649041s" podCreationTimestamp="2024-12-13 03:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:37:28.727487335 +0000 UTC m=+1.125416575" watchObservedRunningTime="2024-12-13 03:37:28.727649041 +0000 UTC m=+1.125578277" Dec 13 03:37:29.970149 sudo[1731]: pam_unix(sudo:session): session closed for user root Dec 13 03:37:29.971150 sshd[1728]: pam_unix(sshd:session): session closed for user core Dec 13 03:37:29.973007 systemd[1]: sshd@6-147.75.202.71:22-139.178.68.195:53124.service: Deactivated successfully. Dec 13 03:37:29.973538 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 03:37:29.973642 systemd[1]: session-9.scope: Consumed 3.371s CPU time. Dec 13 03:37:29.974034 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Dec 13 03:37:29.974722 systemd-logind[1551]: Removed session 9. Dec 13 03:37:34.080643 systemd[1]: Started sshd@7-147.75.202.71:22-175.125.95.140:40140.service. Dec 13 03:37:34.197393 sshd[2744]: kex_exchange_identification: banner line contains invalid characters Dec 13 03:37:34.197393 sshd[2744]: banner exchange: Connection from 175.125.95.140 port 40140: invalid format Dec 13 03:37:34.198861 systemd[1]: sshd@7-147.75.202.71:22-175.125.95.140:40140.service: Deactivated successfully. Dec 13 03:37:34.987502 update_engine[1553]: I1213 03:37:34.987378 1553 update_attempter.cc:509] Updating boot flags... Dec 13 03:37:41.716682 kubelet[2597]: I1213 03:37:41.716647 2597 topology_manager.go:215] "Topology Admit Handler" podUID="ac06e6b0-8451-4e55-a942-c51a4b7d4792" podNamespace="kube-system" podName="kube-proxy-stjbw" Dec 13 03:37:41.721910 systemd[1]: Created slice kubepods-besteffort-podac06e6b0_8451_4e55_a942_c51a4b7d4792.slice. Dec 13 03:37:41.726952 kubelet[2597]: I1213 03:37:41.726140 2597 topology_manager.go:215] "Topology Admit Handler" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" podNamespace="kube-system" podName="cilium-g44rp" Dec 13 03:37:41.733553 systemd[1]: Created slice kubepods-burstable-pod7b4a6e13_1afa_4f37_bb31_9277ff4ed174.slice. Dec 13 03:37:41.746229 kubelet[2597]: I1213 03:37:41.746194 2597 topology_manager.go:215] "Topology Admit Handler" podUID="29e294f7-7c1f-4f44-866d-d009b881d081" podNamespace="kube-system" podName="cilium-operator-599987898-lw6cg" Dec 13 03:37:41.749668 systemd[1]: Created slice kubepods-besteffort-pod29e294f7_7c1f_4f44_866d_d009b881d081.slice. 
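In the three "Observed pod startup duration" entries above, firstStartedPulling and lastFinishedPulling are the zero time (0001-01-01), so no image pull was on the critical path and podStartSLOduration equals podStartE2EDuration, which is just observedRunningTime minus podCreationTimestamp. Checking the kube-apiserver numbers with the stdlib reproduces the logged value (timestamps truncated to microseconds, since the log carries nanoseconds):

```python
from datetime import datetime, timezone

created = datetime(2024, 12, 13, 3, 37, 25, tzinfo=timezone.utc)
running = datetime.fromisoformat('2024-12-13 03:37:28.718327+00:00')
print((running - created).total_seconds())  # 3.718327 ~ logged 3.718327183s
```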
Dec 13 03:37:41.772327 kubelet[2597]: I1213 03:37:41.772249 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-lib-modules\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.772651 kubelet[2597]: I1213 03:37:41.772350 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-run\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.772651 kubelet[2597]: I1213 03:37:41.772438 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-bpf-maps\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.772651 kubelet[2597]: I1213 03:37:41.772488 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-net\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.772651 kubelet[2597]: I1213 03:37:41.772540 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac06e6b0-8451-4e55-a942-c51a4b7d4792-lib-modules\") pod \"kube-proxy-stjbw\" (UID: \"ac06e6b0-8451-4e55-a942-c51a4b7d4792\") " pod="kube-system/kube-proxy-stjbw" Dec 13 03:37:41.772651 kubelet[2597]: I1213 03:37:41.772587 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hostproc\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.772651 kubelet[2597]: I1213 03:37:41.772633 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-clustermesh-secrets\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.773480 kubelet[2597]: I1213 03:37:41.772687 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq2j5\" (UniqueName: \"kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-kube-api-access-rq2j5\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.773480 kubelet[2597]: I1213 03:37:41.772832 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac06e6b0-8451-4e55-a942-c51a4b7d4792-kube-proxy\") pod \"kube-proxy-stjbw\" (UID: \"ac06e6b0-8451-4e55-a942-c51a4b7d4792\") " pod="kube-system/kube-proxy-stjbw" Dec 13 03:37:41.773480 kubelet[2597]: I1213 03:37:41.772927 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-config-path\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.773480 kubelet[2597]: I1213 03:37:41.772980 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-kernel\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.773480 kubelet[2597]: I1213 03:37:41.773073 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwx9w\" (UniqueName: \"kubernetes.io/projected/29e294f7-7c1f-4f44-866d-d009b881d081-kube-api-access-dwx9w\") pod \"cilium-operator-599987898-lw6cg\" (UID: \"29e294f7-7c1f-4f44-866d-d009b881d081\") " pod="kube-system/cilium-operator-599987898-lw6cg" Dec 13 03:37:41.774087 kubelet[2597]: I1213 03:37:41.773169 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sltpz\" (UniqueName: \"kubernetes.io/projected/ac06e6b0-8451-4e55-a942-c51a4b7d4792-kube-api-access-sltpz\") pod \"kube-proxy-stjbw\" (UID: \"ac06e6b0-8451-4e55-a942-c51a4b7d4792\") " pod="kube-system/kube-proxy-stjbw" Dec 13 03:37:41.774087 kubelet[2597]: I1213 03:37:41.773230 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hubble-tls\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.774087 kubelet[2597]: I1213 03:37:41.773282 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29e294f7-7c1f-4f44-866d-d009b881d081-cilium-config-path\") pod \"cilium-operator-599987898-lw6cg\" (UID: \"29e294f7-7c1f-4f44-866d-d009b881d081\") " pod="kube-system/cilium-operator-599987898-lw6cg" Dec 13 03:37:41.774087 kubelet[2597]: I1213 03:37:41.773332 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-etc-cni-netd\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.774087 kubelet[2597]: I1213 03:37:41.773407 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-cgroup\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.774648 kubelet[2597]: I1213 03:37:41.773459 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cni-path\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.774648 kubelet[2597]: I1213 03:37:41.773506 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-xtables-lock\") pod \"cilium-g44rp\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " pod="kube-system/cilium-g44rp" Dec 13 03:37:41.774648 kubelet[2597]: I1213 03:37:41.773555 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac06e6b0-8451-4e55-a942-c51a4b7d4792-xtables-lock\") pod \"kube-proxy-stjbw\" (UID: \"ac06e6b0-8451-4e55-a942-c51a4b7d4792\") " pod="kube-system/kube-proxy-stjbw" Dec 13 03:37:41.889621 kubelet[2597]: I1213 03:37:41.889523 2597 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 03:37:41.890611 env[1559]: time="2024-12-13T03:37:41.890472428Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 03:37:41.891661 kubelet[2597]: I1213 03:37:41.891066 2597 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 03:37:42.032832 env[1559]: time="2024-12-13T03:37:42.032704047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stjbw,Uid:ac06e6b0-8451-4e55-a942-c51a4b7d4792,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:42.036995 env[1559]: time="2024-12-13T03:37:42.036914625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g44rp,Uid:7b4a6e13-1afa-4f37-bb31-9277ff4ed174,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:42.052574 env[1559]: time="2024-12-13T03:37:42.052469363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lw6cg,Uid:29e294f7-7c1f-4f44-866d-d009b881d081,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:42.060715 env[1559]: time="2024-12-13T03:37:42.060513135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:42.060715 env[1559]: time="2024-12-13T03:37:42.060635559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:42.060715 env[1559]: time="2024-12-13T03:37:42.060674997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:42.061400 env[1559]: time="2024-12-13T03:37:42.061183640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16081657f87b36d2941881c5ee95c6f0f0e4a45fa4a92b22d7b8fdbbc6b0eac5 pid=2781 runtime=io.containerd.runc.v2 Dec 13 03:37:42.063557 env[1559]: time="2024-12-13T03:37:42.063397158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:42.063557 env[1559]: time="2024-12-13T03:37:42.063508947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:42.063981 env[1559]: time="2024-12-13T03:37:42.063559410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:42.064206 env[1559]: time="2024-12-13T03:37:42.064072057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0 pid=2789 runtime=io.containerd.runc.v2 Dec 13 03:37:42.081106 env[1559]: time="2024-12-13T03:37:42.080939063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:42.081106 env[1559]: time="2024-12-13T03:37:42.081051401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:42.081562 env[1559]: time="2024-12-13T03:37:42.081115097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:42.081785 env[1559]: time="2024-12-13T03:37:42.081622228Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f pid=2822 runtime=io.containerd.runc.v2 Dec 13 03:37:42.092687 systemd[1]: Started cri-containerd-16081657f87b36d2941881c5ee95c6f0f0e4a45fa4a92b22d7b8fdbbc6b0eac5.scope. Dec 13 03:37:42.094485 systemd[1]: Started cri-containerd-27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0.scope. Dec 13 03:37:42.101905 systemd[1]: Started cri-containerd-77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f.scope. Dec 13 03:37:42.113106 env[1559]: time="2024-12-13T03:37:42.113070593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stjbw,Uid:ac06e6b0-8451-4e55-a942-c51a4b7d4792,Namespace:kube-system,Attempt:0,} returns sandbox id \"16081657f87b36d2941881c5ee95c6f0f0e4a45fa4a92b22d7b8fdbbc6b0eac5\"" Dec 13 03:37:42.113106 env[1559]: time="2024-12-13T03:37:42.113080702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g44rp,Uid:7b4a6e13-1afa-4f37-bb31-9277ff4ed174,Namespace:kube-system,Attempt:0,} returns sandbox id \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\"" Dec 13 03:37:42.114297 env[1559]: time="2024-12-13T03:37:42.114280743Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 03:37:42.115050 env[1559]: time="2024-12-13T03:37:42.115032302Z" level=info msg="CreateContainer within sandbox \"16081657f87b36d2941881c5ee95c6f0f0e4a45fa4a92b22d7b8fdbbc6b0eac5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 03:37:42.122953 env[1559]: time="2024-12-13T03:37:42.122893286Z" level=info msg="CreateContainer within sandbox \"16081657f87b36d2941881c5ee95c6f0f0e4a45fa4a92b22d7b8fdbbc6b0eac5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"230e896aa87c5a66e066bc48ed8c0ab62a743212e784c692f3eb799e856fcc92\"" Dec 13 03:37:42.123233 env[1559]: time="2024-12-13T03:37:42.123218352Z" level=info msg="StartContainer for \"230e896aa87c5a66e066bc48ed8c0ab62a743212e784c692f3eb799e856fcc92\"" Dec 13 03:37:42.131348 systemd[1]: Started cri-containerd-230e896aa87c5a66e066bc48ed8c0ab62a743212e784c692f3eb799e856fcc92.scope. 
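A few entries back, kubelet pushed PodCIDR 192.168.0.0/24 to the runtime, and containerd noted that no CNI config exists yet ("wait for other system components to drop the config") -- the later lxc* interface lines suggest Cilium is the component that eventually supplies it. Every pod IP on this node is then drawn from that /24; a quick stdlib sanity check of the block:

```python
import ipaddress

# Sanity-check sketch for the PodCIDR handed to the runtime above.
cidr = ipaddress.ip_network('192.168.0.0/24')
print(cidr.num_addresses, cidr[1], cidr[-2])  # 256 192.168.0.1 192.168.0.254
```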
Dec 13 03:37:42.131881 env[1559]: time="2024-12-13T03:37:42.131843710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lw6cg,Uid:29e294f7-7c1f-4f44-866d-d009b881d081,Namespace:kube-system,Attempt:0,} returns sandbox id \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\"" Dec 13 03:37:42.145536 env[1559]: time="2024-12-13T03:37:42.145511680Z" level=info msg="StartContainer for \"230e896aa87c5a66e066bc48ed8c0ab62a743212e784c692f3eb799e856fcc92\" returns successfully" Dec 13 03:37:42.758577 kubelet[2597]: I1213 03:37:42.758471 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stjbw" podStartSLOduration=1.758435119 podStartE2EDuration="1.758435119s" podCreationTimestamp="2024-12-13 03:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:37:42.758187207 +0000 UTC m=+15.156116533" watchObservedRunningTime="2024-12-13 03:37:42.758435119 +0000 UTC m=+15.156364411" Dec 13 03:37:46.966281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633537257.mount: Deactivated successfully. Dec 13 03:37:48.731591 env[1559]: time="2024-12-13T03:37:48.731542692Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:48.732154 env[1559]: time="2024-12-13T03:37:48.732113466Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:48.732885 env[1559]: time="2024-12-13T03:37:48.732844963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:48.733571 env[1559]: time="2024-12-13T03:37:48.733528540Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 03:37:48.734134 env[1559]: time="2024-12-13T03:37:48.734090378Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 03:37:48.734782 env[1559]: time="2024-12-13T03:37:48.734710479Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:37:48.757124 env[1559]: time="2024-12-13T03:37:48.757077817Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\"" Dec 13 03:37:48.757332 env[1559]: time="2024-12-13T03:37:48.757320121Z" level=info msg="StartContainer for \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\"" Dec 13 03:37:48.766100 systemd[1]: Started cri-containerd-db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13.scope. 
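The PullImage request for the quay.io/cilium/cilium:v1.12.5 digest went out at 03:37:42.114 and the "returns image reference" reply landed at 03:37:48.733, so the pull took roughly 6.6 s -- the same window that reappears later as lastFinishedPulling minus firstStartedPulling in the cilium-g44rp startup-duration entry. Computed from the two logged timestamps (truncated to microseconds):

```python
from datetime import datetime

t0 = datetime.fromisoformat('2024-12-13T03:37:42.114280+00:00')
t1 = datetime.fromisoformat('2024-12-13T03:37:48.733528+00:00')
print((t1 - t0).total_seconds())  # ~6.619s spent pulling the cilium image
```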
Dec 13 03:37:48.776918 env[1559]: time="2024-12-13T03:37:48.776889729Z" level=info msg="StartContainer for \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\" returns successfully" Dec 13 03:37:48.782814 systemd[1]: cri-containerd-db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13.scope: Deactivated successfully. Dec 13 03:37:49.742758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13-rootfs.mount: Deactivated successfully. Dec 13 03:37:49.932069 env[1559]: time="2024-12-13T03:37:49.931929109Z" level=info msg="shim disconnected" id=db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13 Dec 13 03:37:49.932069 env[1559]: time="2024-12-13T03:37:49.932027366Z" level=warning msg="cleaning up after shim disconnected" id=db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13 namespace=k8s.io Dec 13 03:37:49.932069 env[1559]: time="2024-12-13T03:37:49.932058236Z" level=info msg="cleaning up dead shim" Dec 13 03:37:49.947408 env[1559]: time="2024-12-13T03:37:49.947284027Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:37:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3108 runtime=io.containerd.runc.v2\n" Dec 13 03:37:50.764750 env[1559]: time="2024-12-13T03:37:50.764614054Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 03:37:50.779455 env[1559]: time="2024-12-13T03:37:50.779335022Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\"" Dec 13 03:37:50.780119 env[1559]: time="2024-12-13T03:37:50.780104202Z" level=info msg="StartContainer for \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\"" Dec 13 03:37:50.789376 systemd[1]: Started cri-containerd-bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2.scope. Dec 13 03:37:50.800544 env[1559]: time="2024-12-13T03:37:50.800480295Z" level=info msg="StartContainer for \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\" returns successfully" Dec 13 03:37:50.806642 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:37:50.806763 systemd[1]: Stopped systemd-sysctl.service. Dec 13 03:37:50.806871 systemd[1]: Stopping systemd-sysctl.service... Dec 13 03:37:50.807737 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:37:50.809016 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 03:37:50.809464 systemd[1]: cri-containerd-bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2.scope: Deactivated successfully. Dec 13 03:37:50.811741 systemd[1]: Finished systemd-sysctl.service. 
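Note the pattern above: mount-cgroup starts successfully, its scope immediately deactivates, and the rootfs unmount plus "shim disconnected" cleanup follow. That is the normal teardown for a run-to-completion init container, not a crash, and the same sequence repeats below for Cilium's remaining init containers. A hedged sketch (same assumptions as the earlier scans: it only parses lines shaped like these) pairs starts with shim disconnects to separate exited containers from long-running ones:

```python
import re

START_RE = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')
GONE_RE = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

def classify(journal_lines):
    started, gone = [], set()
    for line in journal_lines:
        started += START_RE.findall(line)
        gone.update(GONE_RE.findall(line))
    # Exited is expected for init containers; for a long-running
    # container a disconnect would merit a closer look.
    return {cid[:12]: ('exited' if cid in gone else 'running') for cid in started}
```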
Dec 13 03:37:50.840398 env[1559]: time="2024-12-13T03:37:50.840282769Z" level=info msg="shim disconnected" id=bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2 Dec 13 03:37:50.840742 env[1559]: time="2024-12-13T03:37:50.840400997Z" level=warning msg="cleaning up after shim disconnected" id=bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2 namespace=k8s.io Dec 13 03:37:50.840742 env[1559]: time="2024-12-13T03:37:50.840436995Z" level=info msg="cleaning up dead shim" Dec 13 03:37:50.856548 env[1559]: time="2024-12-13T03:37:50.856446278Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:37:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3171 runtime=io.containerd.runc.v2\n" Dec 13 03:37:51.514904 env[1559]: time="2024-12-13T03:37:51.514849373Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:51.515471 env[1559]: time="2024-12-13T03:37:51.515432564Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:51.516063 env[1559]: time="2024-12-13T03:37:51.516022054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:37:51.516345 env[1559]: time="2024-12-13T03:37:51.516307198Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 03:37:51.518034 env[1559]: time="2024-12-13T03:37:51.518019556Z" level=info msg="CreateContainer within sandbox \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 03:37:51.522449 env[1559]: time="2024-12-13T03:37:51.522422006Z" level=info msg="CreateContainer within sandbox \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\"" Dec 13 03:37:51.522715 env[1559]: time="2024-12-13T03:37:51.522671829Z" level=info msg="StartContainer for \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\"" Dec 13 03:37:51.530898 systemd[1]: Started cri-containerd-0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe.scope. Dec 13 03:37:51.543857 env[1559]: time="2024-12-13T03:37:51.543835629Z" level=info msg="StartContainer for \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\" returns successfully" Dec 13 03:37:51.772741 env[1559]: time="2024-12-13T03:37:51.772510321Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 03:37:51.782389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2-rootfs.mount: Deactivated successfully. 
Dec 13 03:37:51.790910 env[1559]: time="2024-12-13T03:37:51.790827106Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\"" Dec 13 03:37:51.791977 env[1559]: time="2024-12-13T03:37:51.791885471Z" level=info msg="StartContainer for \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\"" Dec 13 03:37:51.805767 kubelet[2597]: I1213 03:37:51.805689 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-lw6cg" podStartSLOduration=1.421311573 podStartE2EDuration="10.805663569s" podCreationTimestamp="2024-12-13 03:37:41 +0000 UTC" firstStartedPulling="2024-12-13 03:37:42.132724707 +0000 UTC m=+14.530653942" lastFinishedPulling="2024-12-13 03:37:51.5170767 +0000 UTC m=+23.915005938" observedRunningTime="2024-12-13 03:37:51.805251676 +0000 UTC m=+24.203180953" watchObservedRunningTime="2024-12-13 03:37:51.805663569 +0000 UTC m=+24.203592835" Dec 13 03:37:51.821909 systemd[1]: Started cri-containerd-abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934.scope. Dec 13 03:37:51.840666 systemd[1]: cri-containerd-abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934.scope: Deactivated successfully. Dec 13 03:37:51.844852 env[1559]: time="2024-12-13T03:37:51.844805339Z" level=info msg="StartContainer for \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\" returns successfully" Dec 13 03:37:51.997666 env[1559]: time="2024-12-13T03:37:51.997597132Z" level=info msg="shim disconnected" id=abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934 Dec 13 03:37:51.997666 env[1559]: time="2024-12-13T03:37:51.997666720Z" level=warning msg="cleaning up after shim disconnected" id=abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934 namespace=k8s.io Dec 13 03:37:51.998029 env[1559]: time="2024-12-13T03:37:51.997687902Z" level=info msg="cleaning up dead shim" Dec 13 03:37:52.006481 env[1559]: time="2024-12-13T03:37:52.006412511Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:37:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3277 runtime=io.containerd.runc.v2\n" Dec 13 03:37:52.779797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934-rootfs.mount: Deactivated successfully. Dec 13 03:37:52.781335 env[1559]: time="2024-12-13T03:37:52.781305639Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 03:37:52.798663 env[1559]: time="2024-12-13T03:37:52.798558044Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\"" Dec 13 03:37:52.799630 env[1559]: time="2024-12-13T03:37:52.799516481Z" level=info msg="StartContainer for \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\"" Dec 13 03:37:52.837987 systemd[1]: Started cri-containerd-863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9.scope. 
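Unlike the earlier zero-pull entries, the cilium-operator startup-duration line above has a real pull window, and its numbers are consistent with podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling) -- a reading of this tracker's output here, not a documented formula. Checking with the logged values (nanosecond timestamps truncated to microseconds):

```python
from datetime import datetime

pull = (datetime.fromisoformat('2024-12-13 03:37:51.517076+00:00')
        - datetime.fromisoformat('2024-12-13 03:37:42.132724+00:00'))
e2e = 10.805663569
print(e2e - pull.total_seconds())  # ~1.421312, matching podStartSLOduration
```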
Dec 13 03:37:52.875370 env[1559]: time="2024-12-13T03:37:52.875289498Z" level=info msg="StartContainer for \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\" returns successfully" Dec 13 03:37:52.876816 systemd[1]: cri-containerd-863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9.scope: Deactivated successfully. Dec 13 03:37:52.903268 env[1559]: time="2024-12-13T03:37:52.903194271Z" level=info msg="shim disconnected" id=863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9 Dec 13 03:37:52.903268 env[1559]: time="2024-12-13T03:37:52.903259034Z" level=warning msg="cleaning up after shim disconnected" id=863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9 namespace=k8s.io Dec 13 03:37:52.903653 env[1559]: time="2024-12-13T03:37:52.903277233Z" level=info msg="cleaning up dead shim" Dec 13 03:37:52.913978 env[1559]: time="2024-12-13T03:37:52.913898092Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:37:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3331 runtime=io.containerd.runc.v2\n" Dec 13 03:37:53.780130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9-rootfs.mount: Deactivated successfully. Dec 13 03:37:53.790304 env[1559]: time="2024-12-13T03:37:53.790208573Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 03:37:53.809560 env[1559]: time="2024-12-13T03:37:53.809454203Z" level=info msg="CreateContainer within sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\"" Dec 13 03:37:53.810540 env[1559]: time="2024-12-13T03:37:53.810419777Z" level=info msg="StartContainer for \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\"" Dec 13 03:37:53.835781 systemd[1]: Started cri-containerd-a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d.scope. Dec 13 03:37:53.856979 env[1559]: time="2024-12-13T03:37:53.856941340Z" level=info msg="StartContainer for \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\" returns successfully" Dec 13 03:37:53.924364 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 03:37:53.927633 kubelet[2597]: I1213 03:37:53.927620 2597 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 03:37:53.937918 kubelet[2597]: I1213 03:37:53.937896 2597 topology_manager.go:215] "Topology Admit Handler" podUID="0f5646a8-e393-4137-a23d-11b3bca6dc64" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tk2ns" Dec 13 03:37:53.938686 kubelet[2597]: I1213 03:37:53.938669 2597 topology_manager.go:215] "Topology Admit Handler" podUID="41a70788-bd3a-4b1f-ad6b-ff0ca55b2602" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v7zpg" Dec 13 03:37:53.940962 systemd[1]: Created slice kubepods-burstable-pod0f5646a8_e393_4137_a23d_11b3bca6dc64.slice. Dec 13 03:37:53.943402 systemd[1]: Created slice kubepods-burstable-pod41a70788_bd3a_4b1f_ad6b_ff0ca55b2602.slice. 
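The kernel warnings around this point fire because unprivileged eBPF is enabled while the eIBRS Spectre v2 mitigation is active. The knob behind them is kernel.unprivileged_bpf_disabled (0 = unprivileged eBPF allowed, 1 or 2 = disabled); Cilium loads its BPF programs with privilege, so restricting unprivileged eBPF is typically compatible with it, though that is worth verifying against the Cilium version in use rather than taken from this log. A quick read of the current setting:

```python
from pathlib import Path

# Inspect only; whether to flip this sysctl is a policy decision.
knob = Path('/proc/sys/kernel/unprivileged_bpf_disabled')
print(knob.read_text().strip())
```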
Dec 13 03:37:53.964253 kubelet[2597]: I1213 03:37:53.964235 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41a70788-bd3a-4b1f-ad6b-ff0ca55b2602-config-volume\") pod \"coredns-7db6d8ff4d-v7zpg\" (UID: \"41a70788-bd3a-4b1f-ad6b-ff0ca55b2602\") " pod="kube-system/coredns-7db6d8ff4d-v7zpg" Dec 13 03:37:53.964357 kubelet[2597]: I1213 03:37:53.964255 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5646a8-e393-4137-a23d-11b3bca6dc64-config-volume\") pod \"coredns-7db6d8ff4d-tk2ns\" (UID: \"0f5646a8-e393-4137-a23d-11b3bca6dc64\") " pod="kube-system/coredns-7db6d8ff4d-tk2ns" Dec 13 03:37:53.964357 kubelet[2597]: I1213 03:37:53.964283 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b58j\" (UniqueName: \"kubernetes.io/projected/0f5646a8-e393-4137-a23d-11b3bca6dc64-kube-api-access-6b58j\") pod \"coredns-7db6d8ff4d-tk2ns\" (UID: \"0f5646a8-e393-4137-a23d-11b3bca6dc64\") " pod="kube-system/coredns-7db6d8ff4d-tk2ns" Dec 13 03:37:53.964357 kubelet[2597]: I1213 03:37:53.964322 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjzh8\" (UniqueName: \"kubernetes.io/projected/41a70788-bd3a-4b1f-ad6b-ff0ca55b2602-kube-api-access-jjzh8\") pod \"coredns-7db6d8ff4d-v7zpg\" (UID: \"41a70788-bd3a-4b1f-ad6b-ff0ca55b2602\") " pod="kube-system/coredns-7db6d8ff4d-v7zpg" Dec 13 03:37:54.078428 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 03:37:54.244111 env[1559]: time="2024-12-13T03:37:54.243957323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tk2ns,Uid:0f5646a8-e393-4137-a23d-11b3bca6dc64,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:54.246194 env[1559]: time="2024-12-13T03:37:54.246104461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7zpg,Uid:41a70788-bd3a-4b1f-ad6b-ff0ca55b2602,Namespace:kube-system,Attempt:0,}" Dec 13 03:37:55.705331 systemd-networkd[1307]: cilium_host: Link UP Dec 13 03:37:55.706017 systemd-networkd[1307]: cilium_net: Link UP Dec 13 03:37:55.712705 systemd-networkd[1307]: cilium_net: Gained carrier Dec 13 03:37:55.719829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 03:37:55.719899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 03:37:55.720262 systemd-networkd[1307]: cilium_host: Gained carrier Dec 13 03:37:55.764638 systemd-networkd[1307]: cilium_vxlan: Link UP Dec 13 03:37:55.764642 systemd-networkd[1307]: cilium_vxlan: Gained carrier Dec 13 03:37:55.873455 systemd-networkd[1307]: cilium_host: Gained IPv6LL Dec 13 03:37:55.901416 kernel: NET: Registered PF_ALG protocol family Dec 13 03:37:56.433104 systemd-networkd[1307]: lxc_health: Link UP Dec 13 03:37:56.450333 systemd-networkd[1307]: lxc_health: Gained carrier Dec 13 03:37:56.450461 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 03:37:56.721504 systemd-networkd[1307]: cilium_net: Gained IPv6LL Dec 13 03:37:56.791487 systemd-networkd[1307]: lxc9b2a34bcafe2: Link UP Dec 13 03:37:56.833364 kernel: eth0: renamed from tmp968ee Dec 13 03:37:56.856436 kernel: eth0: renamed from tmp11517 Dec 13 03:37:56.889947 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: 
link becomes ready Dec 13 03:37:56.889990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc29ce7b91a4cf: link becomes ready Dec 13 03:37:56.890006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 03:37:56.904133 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9b2a34bcafe2: link becomes ready Dec 13 03:37:56.904225 systemd-networkd[1307]: tmp968ee: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:37:56.904315 systemd-networkd[1307]: tmp968ee: Cannot enable IPv6, ignoring: No such file or directory Dec 13 03:37:56.904348 systemd-networkd[1307]: tmp968ee: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Dec 13 03:37:56.904365 systemd-networkd[1307]: tmp968ee: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Dec 13 03:37:56.904376 systemd-networkd[1307]: tmp968ee: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Dec 13 03:37:56.904392 systemd-networkd[1307]: tmp968ee: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Dec 13 03:37:56.904702 systemd-networkd[1307]: lxc29ce7b91a4cf: Link UP Dec 13 03:37:56.905010 systemd-networkd[1307]: lxc29ce7b91a4cf: Gained carrier Dec 13 03:37:56.905123 systemd-networkd[1307]: lxc9b2a34bcafe2: Gained carrier Dec 13 03:37:57.553477 systemd-networkd[1307]: cilium_vxlan: Gained IPv6LL Dec 13 03:37:57.745510 systemd-networkd[1307]: lxc_health: Gained IPv6LL Dec 13 03:37:58.058060 kubelet[2597]: I1213 03:37:58.057986 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g44rp" podStartSLOduration=10.438048815 podStartE2EDuration="17.057972386s" podCreationTimestamp="2024-12-13 03:37:41 +0000 UTC" firstStartedPulling="2024-12-13 03:37:42.11400747 +0000 UTC m=+14.511936711" lastFinishedPulling="2024-12-13 03:37:48.733931044 +0000 UTC m=+21.131860282" observedRunningTime="2024-12-13 03:37:54.795988235 +0000 UTC m=+27.193917474" watchObservedRunningTime="2024-12-13 03:37:58.057972386 +0000 UTC m=+30.455901623" Dec 13 03:37:58.193466 systemd-networkd[1307]: lxc29ce7b91a4cf: Gained IPv6LL Dec 13 03:37:58.577480 systemd-networkd[1307]: lxc9b2a34bcafe2: Gained IPv6LL Dec 13 03:37:59.188284 env[1559]: time="2024-12-13T03:37:59.188247359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:59.188284 env[1559]: time="2024-12-13T03:37:59.188274553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:59.188284 env[1559]: time="2024-12-13T03:37:59.188284095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:59.188557 env[1559]: time="2024-12-13T03:37:59.188345975Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/115174d06638d75c74eb2e3042e9f1900b806985c9ec70b72f99dbaf9936ceb2 pid=4022 runtime=io.containerd.runc.v2 Dec 13 03:37:59.188692 env[1559]: time="2024-12-13T03:37:59.188665475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:37:59.188692 env[1559]: time="2024-12-13T03:37:59.188685515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:37:59.188746 env[1559]: time="2024-12-13T03:37:59.188692707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:37:59.188772 env[1559]: time="2024-12-13T03:37:59.188756672Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/968eedda8c10dd93171c362c2c869beff4e4d99a1194974dcaa5613c9395b250 pid=4026 runtime=io.containerd.runc.v2 Dec 13 03:37:59.196943 systemd[1]: Started cri-containerd-115174d06638d75c74eb2e3042e9f1900b806985c9ec70b72f99dbaf9936ceb2.scope. Dec 13 03:37:59.197681 systemd[1]: Started cri-containerd-968eedda8c10dd93171c362c2c869beff4e4d99a1194974dcaa5613c9395b250.scope. Dec 13 03:37:59.219483 env[1559]: time="2024-12-13T03:37:59.219453970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v7zpg,Uid:41a70788-bd3a-4b1f-ad6b-ff0ca55b2602,Namespace:kube-system,Attempt:0,} returns sandbox id \"115174d06638d75c74eb2e3042e9f1900b806985c9ec70b72f99dbaf9936ceb2\"" Dec 13 03:37:59.220080 env[1559]: time="2024-12-13T03:37:59.220056131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tk2ns,Uid:0f5646a8-e393-4137-a23d-11b3bca6dc64,Namespace:kube-system,Attempt:0,} returns sandbox id \"968eedda8c10dd93171c362c2c869beff4e4d99a1194974dcaa5613c9395b250\"" Dec 13 03:37:59.220842 env[1559]: time="2024-12-13T03:37:59.220824287Z" level=info msg="CreateContainer within sandbox \"115174d06638d75c74eb2e3042e9f1900b806985c9ec70b72f99dbaf9936ceb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 03:37:59.221138 env[1559]: time="2024-12-13T03:37:59.221123939Z" level=info msg="CreateContainer within sandbox \"968eedda8c10dd93171c362c2c869beff4e4d99a1194974dcaa5613c9395b250\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 03:37:59.226725 env[1559]: time="2024-12-13T03:37:59.226673515Z" level=info msg="CreateContainer within sandbox \"968eedda8c10dd93171c362c2c869beff4e4d99a1194974dcaa5613c9395b250\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88fb278fe8ffba2916b44d222cd04231a4a78eb9599a58b9845d4d4f6c5d8497\"" Dec 13 03:37:59.226927 env[1559]: time="2024-12-13T03:37:59.226885086Z" level=info msg="StartContainer for \"88fb278fe8ffba2916b44d222cd04231a4a78eb9599a58b9845d4d4f6c5d8497\"" Dec 13 03:37:59.227089 env[1559]: time="2024-12-13T03:37:59.227071296Z" level=info msg="CreateContainer within sandbox \"115174d06638d75c74eb2e3042e9f1900b806985c9ec70b72f99dbaf9936ceb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b6b1e1dc6403c00042b3bc6092aa556d81d9bc969f5af6f07518850662a6ae4\"" Dec 13 03:37:59.227247 env[1559]: time="2024-12-13T03:37:59.227232141Z" level=info msg="StartContainer for \"3b6b1e1dc6403c00042b3bc6092aa556d81d9bc969f5af6f07518850662a6ae4\"" Dec 13 03:37:59.228284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085685397.mount: Deactivated successfully. Dec 13 03:37:59.228337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582416270.mount: Deactivated successfully. Dec 13 03:37:59.273129 systemd[1]: Started cri-containerd-88fb278fe8ffba2916b44d222cd04231a4a78eb9599a58b9845d4d4f6c5d8497.scope. Dec 13 03:37:59.279917 systemd[1]: Started cri-containerd-3b6b1e1dc6403c00042b3bc6092aa556d81d9bc969f5af6f07518850662a6ae4.scope. 
Dec 13 03:37:59.313441 env[1559]: time="2024-12-13T03:37:59.313376673Z" level=info msg="StartContainer for \"88fb278fe8ffba2916b44d222cd04231a4a78eb9599a58b9845d4d4f6c5d8497\" returns successfully" Dec 13 03:37:59.331972 env[1559]: time="2024-12-13T03:37:59.331851757Z" level=info msg="StartContainer for \"3b6b1e1dc6403c00042b3bc6092aa556d81d9bc969f5af6f07518850662a6ae4\" returns successfully" Dec 13 03:37:59.812370 kubelet[2597]: I1213 03:37:59.812334 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v7zpg" podStartSLOduration=18.812324333 podStartE2EDuration="18.812324333s" podCreationTimestamp="2024-12-13 03:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:37:59.812034105 +0000 UTC m=+32.209963346" watchObservedRunningTime="2024-12-13 03:37:59.812324333 +0000 UTC m=+32.210253569" Dec 13 03:37:59.819978 kubelet[2597]: I1213 03:37:59.819945 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tk2ns" podStartSLOduration=18.81993389 podStartE2EDuration="18.81993389s" podCreationTimestamp="2024-12-13 03:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:37:59.819864201 +0000 UTC m=+32.217793442" watchObservedRunningTime="2024-12-13 03:37:59.81993389 +0000 UTC m=+32.217863127" Dec 13 03:41:56.421593 systemd[1]: Started sshd@8-147.75.202.71:22-218.92.0.223:35200.service. Dec 13 03:41:57.925332 sshd[4231]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root Dec 13 03:42:00.172909 sshd[4231]: Failed password for root from 218.92.0.223 port 35200 ssh2 Dec 13 03:42:04.705020 sshd[4231]: Failed password for root from 218.92.0.223 port 35200 ssh2 Dec 13 03:42:08.767144 sshd[4231]: Failed password for root from 218.92.0.223 port 35200 ssh2 Dec 13 03:42:09.287258 sshd[4231]: Received disconnect from 218.92.0.223 port 35200:11: [preauth] Dec 13 03:42:09.287258 sshd[4231]: Disconnected from authenticating user root 218.92.0.223 port 35200 [preauth] Dec 13 03:42:09.287860 sshd[4231]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root Dec 13 03:42:09.289984 systemd[1]: sshd@8-147.75.202.71:22-218.92.0.223:35200.service: Deactivated successfully. Dec 13 03:42:09.444036 systemd[1]: Started sshd@9-147.75.202.71:22-218.92.0.223:60852.service. 
Dec 13 03:42:10.481926 sshd[4239]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root Dec 13 03:42:12.513474 sshd[4239]: Failed password for root from 218.92.0.223 port 60852 ssh2 Dec 13 03:42:12.816398 sshd[4239]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Dec 13 03:42:14.456221 sshd[4239]: Failed password for root from 218.92.0.223 port 60852 ssh2 Dec 13 03:42:16.534456 sshd[4239]: Failed password for root from 218.92.0.223 port 60852 ssh2 Dec 13 03:42:17.483765 sshd[4239]: Received disconnect from 218.92.0.223 port 60852:11: [preauth] Dec 13 03:42:17.483765 sshd[4239]: Disconnected from authenticating user root 218.92.0.223 port 60852 [preauth] Dec 13 03:42:17.484375 sshd[4239]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root Dec 13 03:42:17.486493 systemd[1]: sshd@9-147.75.202.71:22-218.92.0.223:60852.service: Deactivated successfully. Dec 13 03:42:17.601290 systemd[1]: Started sshd@10-147.75.202.71:22-218.92.0.223:14500.service. Dec 13 03:42:18.516716 sshd[4245]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root Dec 13 03:42:20.312532 sshd[4245]: Failed password for root from 218.92.0.223 port 14500 ssh2 Dec 13 03:42:23.567468 sshd[4245]: Failed password for root from 218.92.0.223 port 14500 ssh2 Dec 13 03:42:27.409573 sshd[4245]: Failed password for root from 218.92.0.223 port 14500 ssh2 Dec 13 03:42:27.632632 sshd[4245]: Received disconnect from 218.92.0.223 port 14500:11: [preauth] Dec 13 03:42:27.632632 sshd[4245]: Disconnected from authenticating user root 218.92.0.223 port 14500 [preauth] Dec 13 03:42:27.633192 sshd[4245]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.223 user=root Dec 13 03:42:27.635398 systemd[1]: sshd@10-147.75.202.71:22-218.92.0.223:14500.service: Deactivated successfully. Dec 13 03:42:50.986533 update_engine[1553]: I1213 03:42:50.986301 1553 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 03:42:50.986533 update_engine[1553]: I1213 03:42:50.986416 1553 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 03:42:50.988766 update_engine[1553]: I1213 03:42:50.988687 1553 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 03:42:50.989698 update_engine[1553]: I1213 03:42:50.989619 1553 omaha_request_params.cc:62] Current group set to lts Dec 13 03:42:50.989953 update_engine[1553]: I1213 03:42:50.989913 1553 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 03:42:50.989953 update_engine[1553]: I1213 03:42:50.989934 1553 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 03:42:50.990206 update_engine[1553]: I1213 03:42:50.989966 1553 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 03:42:50.990206 update_engine[1553]: I1213 03:42:50.990031 1553 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 03:42:50.990206 update_engine[1553]: I1213 03:42:50.990166 1553 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 03:42:50.990206 update_engine[1553]: I1213 03:42:50.990183 1553 omaha_request_action.cc:271] Request: Dec 13 03:42:50.990206 update_engine[1553]: I1213 03:42:50.990193 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 03:42:50.991381 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 03:42:50.993345 update_engine[1553]: I1213 03:42:50.993264 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 03:42:50.993580 update_engine[1553]: E1213 03:42:50.993520 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 03:42:50.993707 update_engine[1553]: I1213 03:42:50.993680 1553 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 03:43:00.992098 update_engine[1553]: I1213 03:43:00.991976 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 03:43:00.993114 update_engine[1553]: I1213 03:43:00.992563 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 03:43:00.993114 update_engine[1553]: E1213 03:43:00.992769 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 03:43:00.993114 update_engine[1553]: I1213 03:43:00.992944 1553 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 03:43:10.992543 update_engine[1553]: I1213 03:43:10.992424 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 03:43:10.993498 update_engine[1553]: I1213 03:43:10.992939 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 03:43:10.993498 update_engine[1553]: E1213 03:43:10.993144 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 03:43:10.993498 update_engine[1553]: I1213 03:43:10.993319 1553 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 03:43:20.991924 update_engine[1553]: I1213 03:43:20.991799 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 03:43:20.992882 update_engine[1553]: I1213 03:43:20.992312 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 03:43:20.992882 update_engine[1553]: E1213 03:43:20.992568 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 03:43:20.992882 update_engine[1553]: I1213 03:43:20.992722 1553 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 03:43:20.992882 update_engine[1553]: I1213 03:43:20.992738 1553 omaha_request_action.cc:621] Omaha request response: Dec 13 03:43:20.992882 update_engine[1553]: E1213 03:43:20.992882 1553 omaha_request_action.cc:640] Omaha request network transfer failed.
Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.992910 1553 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.992920 1553 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.992929 1553 update_attempter.cc:306] Processing Done. Dec 13 03:43:20.993408 update_engine[1553]: E1213 03:43:20.992956 1553 update_attempter.cc:619] Update failed. Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.992966 1553 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.992975 1553 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.992985 1553 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.993135 1553 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.993185 1553 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.993196 1553 omaha_request_action.cc:271] Request: Dec 13 03:43:20.993408 update_engine[1553]: I1213 03:43:20.993206 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993566 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 03:43:20.995091 update_engine[1553]: E1213 03:43:20.993736 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993870 1553 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993885 1553 omaha_request_action.cc:621] Omaha request response: Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993896 1553 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993903 1553 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993911 1553 update_attempter.cc:306] Processing Done. Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993918 1553 update_attempter.cc:310] Error event sent. Dec 13 03:43:20.995091 update_engine[1553]: I1213 03:43:20.993940 1553 update_check_scheduler.cc:74] Next update check in 46m33s Dec 13 03:43:20.995918 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 03:43:20.995918 locksmithd[1593]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 03:43:31.507151 systemd[1]: Started sshd@11-147.75.202.71:22-139.178.68.195:43756.service.
Dec 13 03:43:31.548606 sshd[4257]: Accepted publickey for core from 139.178.68.195 port 43756 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:31.549433 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:31.552156 systemd-logind[1551]: New session 10 of user core. Dec 13 03:43:31.552745 systemd[1]: Started session-10.scope. Dec 13 03:43:31.691302 sshd[4257]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:31.695746 systemd[1]: sshd@11-147.75.202.71:22-139.178.68.195:43756.service: Deactivated successfully. Dec 13 03:43:31.697106 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 03:43:31.698281 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. Dec 13 03:43:31.700165 systemd-logind[1551]: Removed session 10. Dec 13 03:43:36.699679 systemd[1]: Started sshd@12-147.75.202.71:22-139.178.68.195:56536.service. Dec 13 03:43:36.736779 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 56536 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:36.737527 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:36.739852 systemd-logind[1551]: New session 11 of user core. Dec 13 03:43:36.740314 systemd[1]: Started session-11.scope. Dec 13 03:43:36.827025 sshd[4283]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:36.828547 systemd[1]: sshd@12-147.75.202.71:22-139.178.68.195:56536.service: Deactivated successfully. Dec 13 03:43:36.828986 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 03:43:36.829280 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit. Dec 13 03:43:36.829929 systemd-logind[1551]: Removed session 11. Dec 13 03:43:41.838838 systemd[1]: Started sshd@13-147.75.202.71:22-139.178.68.195:56540.service. Dec 13 03:43:41.879615 sshd[4312]: Accepted publickey for core from 139.178.68.195 port 56540 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:41.880452 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:41.883142 systemd-logind[1551]: New session 12 of user core. Dec 13 03:43:41.883749 systemd[1]: Started session-12.scope. Dec 13 03:43:41.967740 sshd[4312]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:41.969162 systemd[1]: sshd@13-147.75.202.71:22-139.178.68.195:56540.service: Deactivated successfully. Dec 13 03:43:41.969596 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 03:43:41.969992 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. Dec 13 03:43:41.970540 systemd-logind[1551]: Removed session 12. Dec 13 03:43:46.977678 systemd[1]: Started sshd@14-147.75.202.71:22-139.178.68.195:52870.service. Dec 13 03:43:47.015179 sshd[4338]: Accepted publickey for core from 139.178.68.195 port 52870 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:47.015955 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:47.018294 systemd-logind[1551]: New session 13 of user core. Dec 13 03:43:47.018789 systemd[1]: Started session-13.scope. Dec 13 03:43:47.103912 sshd[4338]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:47.105892 systemd[1]: sshd@14-147.75.202.71:22-139.178.68.195:52870.service: Deactivated successfully. Dec 13 03:43:47.106275 systemd[1]: session-13.scope: Deactivated successfully. 
Dec 13 03:43:47.106697 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit. Dec 13 03:43:47.107311 systemd[1]: Started sshd@15-147.75.202.71:22-139.178.68.195:52882.service. Dec 13 03:43:47.107782 systemd-logind[1551]: Removed session 13. Dec 13 03:43:47.146550 sshd[4364]: Accepted publickey for core from 139.178.68.195 port 52882 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:47.147553 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:47.150637 systemd-logind[1551]: New session 14 of user core. Dec 13 03:43:47.151377 systemd[1]: Started session-14.scope. Dec 13 03:43:47.254416 sshd[4364]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:47.256153 systemd[1]: sshd@15-147.75.202.71:22-139.178.68.195:52882.service: Deactivated successfully. Dec 13 03:43:47.256526 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 03:43:47.256943 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. Dec 13 03:43:47.257555 systemd[1]: Started sshd@16-147.75.202.71:22-139.178.68.195:52898.service. Dec 13 03:43:47.257996 systemd-logind[1551]: Removed session 14. Dec 13 03:43:47.294349 sshd[4388]: Accepted publickey for core from 139.178.68.195 port 52898 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:47.295243 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:47.297799 systemd-logind[1551]: New session 15 of user core. Dec 13 03:43:47.298342 systemd[1]: Started session-15.scope. Dec 13 03:43:47.441003 sshd[4388]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:47.442770 systemd[1]: sshd@16-147.75.202.71:22-139.178.68.195:52898.service: Deactivated successfully. Dec 13 03:43:47.443280 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 03:43:47.443710 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. Dec 13 03:43:47.444222 systemd-logind[1551]: Removed session 15. Dec 13 03:43:52.444571 systemd[1]: Started sshd@17-147.75.202.71:22-139.178.68.195:52912.service. Dec 13 03:43:52.482907 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 52912 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:52.483757 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:52.486752 systemd-logind[1551]: New session 16 of user core. Dec 13 03:43:52.487338 systemd[1]: Started session-16.scope. Dec 13 03:43:52.570698 sshd[4414]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:52.572150 systemd[1]: sshd@17-147.75.202.71:22-139.178.68.195:52912.service: Deactivated successfully. Dec 13 03:43:52.572617 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 03:43:52.573064 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. Dec 13 03:43:52.573601 systemd-logind[1551]: Removed session 16. Dec 13 03:43:57.580575 systemd[1]: Started sshd@18-147.75.202.71:22-139.178.68.195:51188.service. Dec 13 03:43:57.617715 sshd[4441]: Accepted publickey for core from 139.178.68.195 port 51188 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:57.618409 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:57.620741 systemd-logind[1551]: New session 17 of user core. Dec 13 03:43:57.621182 systemd[1]: Started session-17.scope. 
Dec 13 03:43:57.704789 sshd[4441]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:57.706485 systemd[1]: sshd@18-147.75.202.71:22-139.178.68.195:51188.service: Deactivated successfully. Dec 13 03:43:57.706802 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 03:43:57.707142 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. Dec 13 03:43:57.707667 systemd[1]: Started sshd@19-147.75.202.71:22-139.178.68.195:51198.service. Dec 13 03:43:57.708115 systemd-logind[1551]: Removed session 17. Dec 13 03:43:57.744722 sshd[4466]: Accepted publickey for core from 139.178.68.195 port 51198 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:57.745675 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:57.748727 systemd-logind[1551]: New session 18 of user core. Dec 13 03:43:57.749409 systemd[1]: Started session-18.scope. Dec 13 03:43:57.951267 sshd[4466]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:57.958680 systemd[1]: sshd@19-147.75.202.71:22-139.178.68.195:51198.service: Deactivated successfully. Dec 13 03:43:57.959062 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 03:43:57.959427 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. Dec 13 03:43:57.960031 systemd[1]: Started sshd@20-147.75.202.71:22-139.178.68.195:51210.service. Dec 13 03:43:57.960516 systemd-logind[1551]: Removed session 18. Dec 13 03:43:57.997629 sshd[4489]: Accepted publickey for core from 139.178.68.195 port 51210 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:57.998589 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:58.001704 systemd-logind[1551]: New session 19 of user core. Dec 13 03:43:58.002705 systemd[1]: Started session-19.scope. Dec 13 03:43:59.215002 sshd[4489]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:59.217101 systemd[1]: sshd@20-147.75.202.71:22-139.178.68.195:51210.service: Deactivated successfully. Dec 13 03:43:59.217627 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 03:43:59.218055 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. Dec 13 03:43:59.218914 systemd[1]: Started sshd@21-147.75.202.71:22-139.178.68.195:51218.service. Dec 13 03:43:59.219448 systemd-logind[1551]: Removed session 19. Dec 13 03:43:59.258677 sshd[4519]: Accepted publickey for core from 139.178.68.195 port 51218 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:59.259764 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:59.262802 systemd-logind[1551]: New session 20 of user core. Dec 13 03:43:59.263817 systemd[1]: Started session-20.scope. Dec 13 03:43:59.447326 sshd[4519]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:59.449315 systemd[1]: sshd@21-147.75.202.71:22-139.178.68.195:51218.service: Deactivated successfully. Dec 13 03:43:59.449678 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 03:43:59.450013 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. Dec 13 03:43:59.450610 systemd[1]: Started sshd@22-147.75.202.71:22-139.178.68.195:51224.service. Dec 13 03:43:59.450962 systemd-logind[1551]: Removed session 20. 
Dec 13 03:43:59.487970 sshd[4547]: Accepted publickey for core from 139.178.68.195 port 51224 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:59.488907 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:59.492084 systemd-logind[1551]: New session 21 of user core. Dec 13 03:43:59.492776 systemd[1]: Started session-21.scope. Dec 13 03:43:59.618006 sshd[4547]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:59.619518 systemd[1]: sshd@22-147.75.202.71:22-139.178.68.195:51224.service: Deactivated successfully. Dec 13 03:43:59.619972 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 03:43:59.620301 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit. Dec 13 03:43:59.620796 systemd-logind[1551]: Removed session 21. Dec 13 03:44:04.627869 systemd[1]: Started sshd@23-147.75.202.71:22-139.178.68.195:51226.service. Dec 13 03:44:04.673834 sshd[4576]: Accepted publickey for core from 139.178.68.195 port 51226 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:04.674572 sshd[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:04.676826 systemd-logind[1551]: New session 22 of user core. Dec 13 03:44:04.677619 systemd[1]: Started session-22.scope. Dec 13 03:44:04.760702 sshd[4576]: pam_unix(sshd:session): session closed for user core Dec 13 03:44:04.762066 systemd[1]: sshd@23-147.75.202.71:22-139.178.68.195:51226.service: Deactivated successfully. Dec 13 03:44:04.762507 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 03:44:04.762909 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit. Dec 13 03:44:04.763321 systemd-logind[1551]: Removed session 22. Dec 13 03:44:09.763688 systemd[1]: Started sshd@24-147.75.202.71:22-139.178.68.195:54912.service. Dec 13 03:44:09.803626 sshd[4601]: Accepted publickey for core from 139.178.68.195 port 54912 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:09.804949 sshd[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:09.808438 systemd-logind[1551]: New session 23 of user core. Dec 13 03:44:09.809272 systemd[1]: Started session-23.scope. Dec 13 03:44:09.898885 sshd[4601]: pam_unix(sshd:session): session closed for user core Dec 13 03:44:09.900508 systemd[1]: sshd@24-147.75.202.71:22-139.178.68.195:54912.service: Deactivated successfully. Dec 13 03:44:09.900953 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 03:44:09.901278 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit. Dec 13 03:44:09.901945 systemd-logind[1551]: Removed session 23. Dec 13 03:44:14.907962 systemd[1]: Started sshd@25-147.75.202.71:22-139.178.68.195:54924.service. Dec 13 03:44:14.944728 sshd[4628]: Accepted publickey for core from 139.178.68.195 port 54924 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:14.945523 sshd[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:14.947790 systemd-logind[1551]: New session 24 of user core. Dec 13 03:44:14.948357 systemd[1]: Started session-24.scope. Dec 13 03:44:15.037274 sshd[4628]: pam_unix(sshd:session): session closed for user core Dec 13 03:44:15.038729 systemd[1]: sshd@25-147.75.202.71:22-139.178.68.195:54924.service: Deactivated successfully. Dec 13 03:44:15.039152 systemd[1]: session-24.scope: Deactivated successfully. 
Dec 13 03:44:15.039486 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit. Dec 13 03:44:15.039947 systemd-logind[1551]: Removed session 24. Dec 13 03:44:20.047233 systemd[1]: Started sshd@26-147.75.202.71:22-139.178.68.195:55764.service. Dec 13 03:44:20.084220 sshd[4651]: Accepted publickey for core from 139.178.68.195 port 55764 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:20.085179 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:20.088275 systemd-logind[1551]: New session 25 of user core. Dec 13 03:44:20.089156 systemd[1]: Started session-25.scope. Dec 13 03:44:20.172538 sshd[4651]: pam_unix(sshd:session): session closed for user core Dec 13 03:44:20.174398 systemd[1]: sshd@26-147.75.202.71:22-139.178.68.195:55764.service: Deactivated successfully. Dec 13 03:44:20.174755 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 03:44:20.175091 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit. Dec 13 03:44:20.175770 systemd[1]: Started sshd@27-147.75.202.71:22-139.178.68.195:55778.service. Dec 13 03:44:20.176205 systemd-logind[1551]: Removed session 25. Dec 13 03:44:20.212780 sshd[4673]: Accepted publickey for core from 139.178.68.195 port 55778 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:20.213630 sshd[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:20.216577 systemd-logind[1551]: New session 26 of user core. Dec 13 03:44:20.217294 systemd[1]: Started session-26.scope. Dec 13 03:44:21.597695 env[1559]: time="2024-12-13T03:44:21.597597443Z" level=info msg="StopContainer for \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\" with timeout 30 (s)" Dec 13 03:44:21.598773 env[1559]: time="2024-12-13T03:44:21.598281338Z" level=info msg="Stop container \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\" with signal terminated" Dec 13 03:44:21.622680 systemd[1]: cri-containerd-0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe.scope: Deactivated successfully. Dec 13 03:44:21.636981 env[1559]: time="2024-12-13T03:44:21.636928316Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 03:44:21.640081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe-rootfs.mount: Deactivated successfully. 
Dec 13 03:44:21.641722 env[1559]: time="2024-12-13T03:44:21.641698202Z" level=info msg="StopContainer for \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\" with timeout 2 (s)" Dec 13 03:44:21.641867 env[1559]: time="2024-12-13T03:44:21.641849284Z" level=info msg="Stop container \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\" with signal terminated" Dec 13 03:44:21.646712 systemd-networkd[1307]: lxc_health: Link DOWN Dec 13 03:44:21.646716 systemd-networkd[1307]: lxc_health: Lost carrier Dec 13 03:44:21.667115 env[1559]: time="2024-12-13T03:44:21.667081022Z" level=info msg="shim disconnected" id=0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe Dec 13 03:44:21.667217 env[1559]: time="2024-12-13T03:44:21.667117884Z" level=warning msg="cleaning up after shim disconnected" id=0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe namespace=k8s.io Dec 13 03:44:21.667217 env[1559]: time="2024-12-13T03:44:21.667132467Z" level=info msg="cleaning up dead shim" Dec 13 03:44:21.671901 env[1559]: time="2024-12-13T03:44:21.671878811Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4738 runtime=io.containerd.runc.v2\n" Dec 13 03:44:21.672921 env[1559]: time="2024-12-13T03:44:21.672898134Z" level=info msg="StopContainer for \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\" returns successfully" Dec 13 03:44:21.673379 env[1559]: time="2024-12-13T03:44:21.673319490Z" level=info msg="StopPodSandbox for \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\"" Dec 13 03:44:21.673379 env[1559]: time="2024-12-13T03:44:21.673373585Z" level=info msg="Container to stop \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:44:21.675015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f-shm.mount: Deactivated successfully. Dec 13 03:44:21.677705 systemd[1]: cri-containerd-77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f.scope: Deactivated successfully. Dec 13 03:44:21.689319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f-rootfs.mount: Deactivated successfully. Dec 13 03:44:21.722661 env[1559]: time="2024-12-13T03:44:21.722526223Z" level=info msg="shim disconnected" id=77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f Dec 13 03:44:21.723156 env[1559]: time="2024-12-13T03:44:21.722652384Z" level=warning msg="cleaning up after shim disconnected" id=77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f namespace=k8s.io Dec 13 03:44:21.723156 env[1559]: time="2024-12-13T03:44:21.722699156Z" level=info msg="cleaning up dead shim" Dec 13 03:44:21.738221 systemd[1]: cri-containerd-a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d.scope: Deactivated successfully. Dec 13 03:44:21.738917 systemd[1]: cri-containerd-a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d.scope: Consumed 6.302s CPU time. 
Dec 13 03:44:21.740112 env[1559]: time="2024-12-13T03:44:21.740023597Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4771 runtime=io.containerd.runc.v2\n" Dec 13 03:44:21.740986 env[1559]: time="2024-12-13T03:44:21.740867166Z" level=info msg="TearDown network for sandbox \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" successfully" Dec 13 03:44:21.740986 env[1559]: time="2024-12-13T03:44:21.740936869Z" level=info msg="StopPodSandbox for \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" returns successfully" Dec 13 03:44:21.773289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d-rootfs.mount: Deactivated successfully. Dec 13 03:44:21.773595 env[1559]: time="2024-12-13T03:44:21.773325362Z" level=info msg="shim disconnected" id=a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d Dec 13 03:44:21.773595 env[1559]: time="2024-12-13T03:44:21.773433924Z" level=warning msg="cleaning up after shim disconnected" id=a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d namespace=k8s.io Dec 13 03:44:21.773595 env[1559]: time="2024-12-13T03:44:21.773463076Z" level=info msg="cleaning up dead shim" Dec 13 03:44:21.782615 env[1559]: time="2024-12-13T03:44:21.782539725Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4796 runtime=io.containerd.runc.v2\n" Dec 13 03:44:21.784087 env[1559]: time="2024-12-13T03:44:21.784014295Z" level=info msg="StopContainer for \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\" returns successfully" Dec 13 03:44:21.784635 env[1559]: time="2024-12-13T03:44:21.784561946Z" level=info msg="StopPodSandbox for \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\"" Dec 13 03:44:21.784749 env[1559]: time="2024-12-13T03:44:21.784637795Z" level=info msg="Container to stop \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:44:21.784749 env[1559]: time="2024-12-13T03:44:21.784662532Z" level=info msg="Container to stop \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:44:21.784749 env[1559]: time="2024-12-13T03:44:21.784680066Z" level=info msg="Container to stop \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:44:21.784749 env[1559]: time="2024-12-13T03:44:21.784696244Z" level=info msg="Container to stop \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:44:21.784749 env[1559]: time="2024-12-13T03:44:21.784711646Z" level=info msg="Container to stop \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:44:21.791820 systemd[1]: cri-containerd-27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0.scope: Deactivated successfully. 
Dec 13 03:44:21.840405 env[1559]: time="2024-12-13T03:44:21.840263940Z" level=info msg="shim disconnected" id=27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0 Dec 13 03:44:21.840801 env[1559]: time="2024-12-13T03:44:21.840410501Z" level=warning msg="cleaning up after shim disconnected" id=27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0 namespace=k8s.io Dec 13 03:44:21.840801 env[1559]: time="2024-12-13T03:44:21.840449890Z" level=info msg="cleaning up dead shim" Dec 13 03:44:21.850606 kubelet[2597]: I1213 03:44:21.850403 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwx9w\" (UniqueName: \"kubernetes.io/projected/29e294f7-7c1f-4f44-866d-d009b881d081-kube-api-access-dwx9w\") pod \"29e294f7-7c1f-4f44-866d-d009b881d081\" (UID: \"29e294f7-7c1f-4f44-866d-d009b881d081\") " Dec 13 03:44:21.850606 kubelet[2597]: I1213 03:44:21.850503 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29e294f7-7c1f-4f44-866d-d009b881d081-cilium-config-path\") pod \"29e294f7-7c1f-4f44-866d-d009b881d081\" (UID: \"29e294f7-7c1f-4f44-866d-d009b881d081\") " Dec 13 03:44:21.856090 kubelet[2597]: I1213 03:44:21.855985 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e294f7-7c1f-4f44-866d-d009b881d081-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29e294f7-7c1f-4f44-866d-d009b881d081" (UID: "29e294f7-7c1f-4f44-866d-d009b881d081"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:21.857308 kubelet[2597]: I1213 03:44:21.857189 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e294f7-7c1f-4f44-866d-d009b881d081-kube-api-access-dwx9w" (OuterVolumeSpecName: "kube-api-access-dwx9w") pod "29e294f7-7c1f-4f44-866d-d009b881d081" (UID: "29e294f7-7c1f-4f44-866d-d009b881d081"). InnerVolumeSpecName "kube-api-access-dwx9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:21.858481 env[1559]: time="2024-12-13T03:44:21.858383557Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4826 runtime=io.containerd.runc.v2\n" Dec 13 03:44:21.859193 env[1559]: time="2024-12-13T03:44:21.859066711Z" level=info msg="TearDown network for sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" successfully" Dec 13 03:44:21.859193 env[1559]: time="2024-12-13T03:44:21.859130072Z" level=info msg="StopPodSandbox for \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" returns successfully" Dec 13 03:44:21.912371 kubelet[2597]: I1213 03:44:21.912245 2597 scope.go:117] "RemoveContainer" containerID="a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d" Dec 13 03:44:21.915250 env[1559]: time="2024-12-13T03:44:21.915125882Z" level=info msg="RemoveContainer for \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\"" Dec 13 03:44:21.919832 env[1559]: time="2024-12-13T03:44:21.919730477Z" level=info msg="RemoveContainer for \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\" returns successfully" Dec 13 03:44:21.920294 kubelet[2597]: I1213 03:44:21.920200 2597 scope.go:117] "RemoveContainer" containerID="863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9" Dec 13 03:44:21.922858 env[1559]: time="2024-12-13T03:44:21.922745984Z" level=info msg="RemoveContainer for \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\"" Dec 13 03:44:21.924897 systemd[1]: Removed slice kubepods-besteffort-pod29e294f7_7c1f_4f44_866d_d009b881d081.slice. Dec 13 03:44:21.925170 systemd[1]: kubepods-besteffort-pod29e294f7_7c1f_4f44_866d_d009b881d081.slice: Consumed 1.009s CPU time. 
Dec 13 03:44:21.927189 env[1559]: time="2024-12-13T03:44:21.927075047Z" level=info msg="RemoveContainer for \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\" returns successfully" Dec 13 03:44:21.927504 kubelet[2597]: I1213 03:44:21.927457 2597 scope.go:117] "RemoveContainer" containerID="abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934" Dec 13 03:44:21.930370 env[1559]: time="2024-12-13T03:44:21.929946173Z" level=info msg="RemoveContainer for \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\"" Dec 13 03:44:21.934089 env[1559]: time="2024-12-13T03:44:21.933986188Z" level=info msg="RemoveContainer for \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\" returns successfully" Dec 13 03:44:21.934424 kubelet[2597]: I1213 03:44:21.934306 2597 scope.go:117] "RemoveContainer" containerID="bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2" Dec 13 03:44:21.937149 env[1559]: time="2024-12-13T03:44:21.937042536Z" level=info msg="RemoveContainer for \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\"" Dec 13 03:44:21.941430 env[1559]: time="2024-12-13T03:44:21.941319804Z" level=info msg="RemoveContainer for \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\" returns successfully" Dec 13 03:44:21.941812 kubelet[2597]: I1213 03:44:21.941724 2597 scope.go:117] "RemoveContainer" containerID="db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13" Dec 13 03:44:21.944425 env[1559]: time="2024-12-13T03:44:21.944310812Z" level=info msg="RemoveContainer for \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\"" Dec 13 03:44:21.948395 env[1559]: time="2024-12-13T03:44:21.948315228Z" level=info msg="RemoveContainer for \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\" returns successfully" Dec 13 03:44:21.948698 kubelet[2597]: I1213 03:44:21.948655 2597 scope.go:117] "RemoveContainer" containerID="a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d" Dec 13 03:44:21.949238 env[1559]: time="2024-12-13T03:44:21.949059334Z" level=error msg="ContainerStatus for \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\": not found" Dec 13 03:44:21.949602 kubelet[2597]: E1213 03:44:21.949548 2597 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\": not found" containerID="a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d" Dec 13 03:44:21.949770 kubelet[2597]: I1213 03:44:21.949619 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d"} err="failed to get container status \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9697f304c542fbb28b3659243d71d5da559159cdc8e3ac7844e8a703572b97d\": not found" Dec 13 03:44:21.949914 kubelet[2597]: I1213 03:44:21.949777 2597 scope.go:117] "RemoveContainer" containerID="863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9" Dec 13 03:44:21.950333 env[1559]: time="2024-12-13T03:44:21.950202760Z" level=error msg="ContainerStatus for 
\"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\": not found" Dec 13 03:44:21.950639 kubelet[2597]: E1213 03:44:21.950582 2597 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\": not found" containerID="863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9" Dec 13 03:44:21.950795 kubelet[2597]: I1213 03:44:21.950655 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9"} err="failed to get container status \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"863e5adffa4e6d41429b2cff0ff4fa3b3863f4d1c4522eba46efcecccf5123f9\": not found" Dec 13 03:44:21.950795 kubelet[2597]: I1213 03:44:21.950701 2597 scope.go:117] "RemoveContainer" containerID="abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934" Dec 13 03:44:21.951052 kubelet[2597]: I1213 03:44:21.950780 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-xtables-lock\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.951052 kubelet[2597]: I1213 03:44:21.950863 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-bpf-maps\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.951052 kubelet[2597]: I1213 03:44:21.950911 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hostproc\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.951052 kubelet[2597]: I1213 03:44:21.950893 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.951052 kubelet[2597]: I1213 03:44:21.950958 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-cgroup\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.951052 kubelet[2597]: I1213 03:44:21.950983 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.951850 kubelet[2597]: I1213 03:44:21.951021 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-clustermesh-secrets\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.951850 kubelet[2597]: I1213 03:44:21.951010 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hostproc" (OuterVolumeSpecName: "hostproc") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.951850 kubelet[2597]: I1213 03:44:21.951075 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hubble-tls\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.951850 kubelet[2597]: I1213 03:44:21.951073 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.951850 kubelet[2597]: I1213 03:44:21.951123 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-etc-cni-netd\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.952545 env[1559]: time="2024-12-13T03:44:21.951166144Z" level=error msg="ContainerStatus for \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\": not found" Dec 13 03:44:21.952708 kubelet[2597]: I1213 03:44:21.951168 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-kernel\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.952708 kubelet[2597]: I1213 03:44:21.951194 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.952708 kubelet[2597]: I1213 03:44:21.951213 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-net\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.952708 kubelet[2597]: I1213 03:44:21.951266 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq2j5\" (UniqueName: \"kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-kube-api-access-rq2j5\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.952708 kubelet[2597]: I1213 03:44:21.951279 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.953287 env[1559]: time="2024-12-13T03:44:21.952396805Z" level=error msg="ContainerStatus for \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\": not found" Dec 13 03:44:21.953471 kubelet[2597]: I1213 03:44:21.951316 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-config-path\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.953471 kubelet[2597]: I1213 03:44:21.951317 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.953471 kubelet[2597]: I1213 03:44:21.951389 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cni-path\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.953471 kubelet[2597]: I1213 03:44:21.951442 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-lib-modules\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.953471 kubelet[2597]: I1213 03:44:21.951493 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-run\") pod \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\" (UID: \"7b4a6e13-1afa-4f37-bb31-9277ff4ed174\") " Dec 13 03:44:21.953471 kubelet[2597]: E1213 03:44:21.951564 2597 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\": not found" containerID="abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934" Dec 13 03:44:21.954165 kubelet[2597]: I1213 03:44:21.951576 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cni-path" (OuterVolumeSpecName: "cni-path") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.954165 kubelet[2597]: I1213 03:44:21.951635 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.954165 kubelet[2597]: I1213 03:44:21.951656 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:21.954165 kubelet[2597]: I1213 03:44:21.951579 2597 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-etc-cni-netd\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.954165 kubelet[2597]: I1213 03:44:21.951772 2597 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-net\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.954775 env[1559]: time="2024-12-13T03:44:21.953556117Z" level=error msg="ContainerStatus for \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\": not found" Dec 13 03:44:21.954932 kubelet[2597]: I1213 03:44:21.951628 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934"} err="failed to get container status \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\": rpc error: code = NotFound desc = an error occurred when try to find container \"abc32e34a904ddca53fea09cf67f87531cc18a532ab1b637b6a57821d081c934\": not found" Dec 13 03:44:21.954932 kubelet[2597]: I1213 03:44:21.951843 2597 scope.go:117] "RemoveContainer" containerID="bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2" Dec 13 03:44:21.954932 kubelet[2597]: I1213 03:44:21.951810 2597 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.954932 kubelet[2597]: I1213 03:44:21.951989 2597 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dwx9w\" (UniqueName: \"kubernetes.io/projected/29e294f7-7c1f-4f44-866d-d009b881d081-kube-api-access-dwx9w\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.954932 kubelet[2597]: I1213 03:44:21.952054 2597 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29e294f7-7c1f-4f44-866d-d009b881d081-cilium-config-path\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.954932 kubelet[2597]: I1213 03:44:21.952090 2597 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-bpf-maps\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.954932 kubelet[2597]: I1213 03:44:21.952117 2597 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hostproc\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.955727 kubelet[2597]: I1213 03:44:21.952142 2597 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-cgroup\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.955727 kubelet[2597]: I1213 03:44:21.952169 2597 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-xtables-lock\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:21.955727 kubelet[2597]: E1213 03:44:21.952946 2597 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\": not found" containerID="bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2" Dec 13 03:44:21.955727 kubelet[2597]: I1213 03:44:21.953013 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2"} err="failed to get container status \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd130c95a5c30d16ab85da759476135ddba7f85f888653981d9b6e6b81e3e9a2\": not found" Dec 13 03:44:21.955727 kubelet[2597]: I1213 03:44:21.953065 2597 scope.go:117] "RemoveContainer" containerID="db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13" Dec 13 03:44:21.955727 kubelet[2597]: E1213 03:44:21.953991 2597 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\": not found" containerID="db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13" Dec 13 03:44:21.956563 kubelet[2597]: I1213 03:44:21.954071 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13"} err="failed to get container status \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\": rpc error: code = NotFound desc = an error occurred when try to find container \"db119552b30a4a4e1c93d49812924c66ca56f21065b67ccf0308099400609a13\": not found" Dec 13 03:44:21.956563 kubelet[2597]: I1213 03:44:21.954122 2597 scope.go:117] "RemoveContainer" containerID="0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe" Dec 13 03:44:21.957012 env[1559]: time="2024-12-13T03:44:21.956925979Z" level=info msg="RemoveContainer for \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\"" Dec 13 03:44:21.957190 kubelet[2597]: I1213 03:44:21.957065 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:21.958303 kubelet[2597]: I1213 03:44:21.958243 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:21.958604 kubelet[2597]: I1213 03:44:21.958541 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:21.959431 kubelet[2597]: I1213 03:44:21.959312 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-kube-api-access-rq2j5" (OuterVolumeSpecName: "kube-api-access-rq2j5") pod "7b4a6e13-1afa-4f37-bb31-9277ff4ed174" (UID: "7b4a6e13-1afa-4f37-bb31-9277ff4ed174"). InnerVolumeSpecName "kube-api-access-rq2j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:21.961483 env[1559]: time="2024-12-13T03:44:21.961417723Z" level=info msg="RemoveContainer for \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\" returns successfully" Dec 13 03:44:21.961869 kubelet[2597]: I1213 03:44:21.961773 2597 scope.go:117] "RemoveContainer" containerID="0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe" Dec 13 03:44:21.962372 env[1559]: time="2024-12-13T03:44:21.962231739Z" level=error msg="ContainerStatus for \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\": not found" Dec 13 03:44:21.962718 kubelet[2597]: E1213 03:44:21.962659 2597 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\": not found" containerID="0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe" Dec 13 03:44:21.962901 kubelet[2597]: I1213 03:44:21.962733 2597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe"} err="failed to get container status \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ef7c83ffcac42f8954243914c2fbdc5608ccd6d33b20f5a78f1e4cfbe178afe\": not found" Dec 13 03:44:22.053136 kubelet[2597]: I1213 03:44:22.053020 2597 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-clustermesh-secrets\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:22.053136 kubelet[2597]: I1213 03:44:22.053087 2597 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-hubble-tls\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:22.053136 kubelet[2597]: I1213 03:44:22.053119 2597 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rq2j5\" (UniqueName: \"kubernetes.io/projected/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-kube-api-access-rq2j5\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:22.053136 kubelet[2597]: I1213 03:44:22.053150 2597 reconciler_common.go:289] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-config-path\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:22.053835 kubelet[2597]: I1213 03:44:22.053181 2597 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cni-path\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:22.053835 kubelet[2597]: I1213 03:44:22.053208 2597 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-lib-modules\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:22.053835 kubelet[2597]: I1213 03:44:22.053235 2597 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b4a6e13-1afa-4f37-bb31-9277ff4ed174-cilium-run\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:22.222516 systemd[1]: Removed slice kubepods-burstable-pod7b4a6e13_1afa_4f37_bb31_9277ff4ed174.slice. Dec 13 03:44:22.222780 systemd[1]: kubepods-burstable-pod7b4a6e13_1afa_4f37_bb31_9277ff4ed174.slice: Consumed 6.377s CPU time. Dec 13 03:44:22.621666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0-rootfs.mount: Deactivated successfully. Dec 13 03:44:22.621952 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0-shm.mount: Deactivated successfully. Dec 13 03:44:22.621989 systemd[1]: var-lib-kubelet-pods-7b4a6e13\x2d1afa\x2d4f37\x2dbb31\x2d9277ff4ed174-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drq2j5.mount: Deactivated successfully. Dec 13 03:44:22.622022 systemd[1]: var-lib-kubelet-pods-29e294f7\x2d7c1f\x2d4f44\x2d866d\x2dd009b881d081-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddwx9w.mount: Deactivated successfully. Dec 13 03:44:22.622052 systemd[1]: var-lib-kubelet-pods-7b4a6e13\x2d1afa\x2d4f37\x2dbb31\x2d9277ff4ed174-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 03:44:22.622083 systemd[1]: var-lib-kubelet-pods-7b4a6e13\x2d1afa\x2d4f37\x2dbb31\x2d9277ff4ed174-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 03:44:22.839752 kubelet[2597]: E1213 03:44:22.839663 2597 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 03:44:23.539643 sshd[4673]: pam_unix(sshd:session): session closed for user core Dec 13 03:44:23.547061 systemd[1]: sshd@27-147.75.202.71:22-139.178.68.195:55778.service: Deactivated successfully. Dec 13 03:44:23.547881 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 03:44:23.548231 systemd-logind[1551]: Session 26 logged out. Waiting for processes to exit. Dec 13 03:44:23.548888 systemd[1]: Started sshd@28-147.75.202.71:22-139.178.68.195:55780.service. Dec 13 03:44:23.549319 systemd-logind[1551]: Removed session 26. Dec 13 03:44:23.586778 sshd[4844]: Accepted publickey for core from 139.178.68.195 port 55780 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:23.587715 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:23.590925 systemd-logind[1551]: New session 27 of user core. 
Dec 13 03:44:23.591643 systemd[1]: Started session-27.scope. Dec 13 03:44:23.699447 kubelet[2597]: I1213 03:44:23.699427 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e294f7-7c1f-4f44-866d-d009b881d081" path="/var/lib/kubelet/pods/29e294f7-7c1f-4f44-866d-d009b881d081/volumes" Dec 13 03:44:23.699654 kubelet[2597]: I1213 03:44:23.699648 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" path="/var/lib/kubelet/pods/7b4a6e13-1afa-4f37-bb31-9277ff4ed174/volumes" Dec 13 03:44:23.889746 sshd[4844]: pam_unix(sshd:session): session closed for user core Dec 13 03:44:23.891794 systemd[1]: sshd@28-147.75.202.71:22-139.178.68.195:55780.service: Deactivated successfully. Dec 13 03:44:23.892225 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 03:44:23.893913 systemd[1]: Started sshd@29-147.75.202.71:22-139.178.68.195:55794.service. Dec 13 03:44:23.901650 systemd-logind[1551]: Session 27 logged out. Waiting for processes to exit. Dec 13 03:44:23.902138 systemd-logind[1551]: Removed session 27. Dec 13 03:44:23.905219 kubelet[2597]: I1213 03:44:23.905190 2597 topology_manager.go:215] "Topology Admit Handler" podUID="d39057d3-01c1-4ab9-bfd5-4304f7793b4b" podNamespace="kube-system" podName="cilium-kv287" Dec 13 03:44:23.905339 kubelet[2597]: E1213 03:44:23.905237 2597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" containerName="mount-cgroup" Dec 13 03:44:23.905339 kubelet[2597]: E1213 03:44:23.905245 2597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29e294f7-7c1f-4f44-866d-d009b881d081" containerName="cilium-operator" Dec 13 03:44:23.905339 kubelet[2597]: E1213 03:44:23.905251 2597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" containerName="clean-cilium-state" Dec 13 03:44:23.905339 kubelet[2597]: E1213 03:44:23.905257 2597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" containerName="cilium-agent" Dec 13 03:44:23.905339 kubelet[2597]: E1213 03:44:23.905262 2597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" containerName="apply-sysctl-overwrites" Dec 13 03:44:23.905339 kubelet[2597]: E1213 03:44:23.905267 2597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" containerName="mount-bpf-fs" Dec 13 03:44:23.905339 kubelet[2597]: I1213 03:44:23.905284 2597 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e294f7-7c1f-4f44-866d-d009b881d081" containerName="cilium-operator" Dec 13 03:44:23.905339 kubelet[2597]: I1213 03:44:23.905289 2597 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4a6e13-1afa-4f37-bb31-9277ff4ed174" containerName="cilium-agent" Dec 13 03:44:23.908800 systemd[1]: Created slice kubepods-burstable-podd39057d3_01c1_4ab9_bfd5_4304f7793b4b.slice. Dec 13 03:44:23.934716 sshd[4867]: Accepted publickey for core from 139.178.68.195 port 55794 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:23.935499 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:23.938018 systemd-logind[1551]: New session 28 of user core. Dec 13 03:44:23.938539 systemd[1]: Started session-28.scope. 
Dec 13 03:44:23.966138 kubelet[2597]: I1213 03:44:23.966092 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-cgroup\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966138 kubelet[2597]: I1213 03:44:23.966113 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hubble-tls\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966138 kubelet[2597]: I1213 03:44:23.966124 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-run\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966138 kubelet[2597]: I1213 03:44:23.966135 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-etc-cni-netd\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966263 kubelet[2597]: I1213 03:44:23.966146 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-net\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966263 kubelet[2597]: I1213 03:44:23.966156 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w56dl\" (UniqueName: \"kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-kube-api-access-w56dl\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966263 kubelet[2597]: I1213 03:44:23.966167 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-kernel\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966263 kubelet[2597]: I1213 03:44:23.966178 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-clustermesh-secrets\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966263 kubelet[2597]: I1213 03:44:23.966188 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-config-path\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966375 kubelet[2597]: I1213 03:44:23.966197 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-ipsec-secrets\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966375 kubelet[2597]: I1213 03:44:23.966206 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hostproc\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966375 kubelet[2597]: I1213 03:44:23.966216 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-lib-modules\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966375 kubelet[2597]: I1213 03:44:23.966227 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cni-path\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966375 kubelet[2597]: I1213 03:44:23.966237 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-xtables-lock\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:23.966375 kubelet[2597]: I1213 03:44:23.966247 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-bpf-maps\") pod \"cilium-kv287\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " pod="kube-system/cilium-kv287" Dec 13 03:44:24.089820 sshd[4867]: pam_unix(sshd:session): session closed for user core Dec 13 03:44:24.091557 systemd[1]: sshd@29-147.75.202.71:22-139.178.68.195:55794.service: Deactivated successfully. Dec 13 03:44:24.091963 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 03:44:24.092290 systemd-logind[1551]: Session 28 logged out. Waiting for processes to exit. Dec 13 03:44:24.093056 systemd[1]: Started sshd@30-147.75.202.71:22-139.178.68.195:55806.service. Dec 13 03:44:24.093440 systemd-logind[1551]: Removed session 28. Dec 13 03:44:24.096942 env[1559]: time="2024-12-13T03:44:24.096914487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv287,Uid:d39057d3-01c1-4ab9-bfd5-4304f7793b4b,Namespace:kube-system,Attempt:0,}" Dec 13 03:44:24.102620 env[1559]: time="2024-12-13T03:44:24.102541519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:44:24.102620 env[1559]: time="2024-12-13T03:44:24.102566831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:44:24.102620 env[1559]: time="2024-12-13T03:44:24.102574128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:44:24.102730 env[1559]: time="2024-12-13T03:44:24.102650648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8 pid=4905 runtime=io.containerd.runc.v2 Dec 13 03:44:24.110247 systemd[1]: Started cri-containerd-ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8.scope. Dec 13 03:44:24.120586 env[1559]: time="2024-12-13T03:44:24.120561658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv287,Uid:d39057d3-01c1-4ab9-bfd5-4304f7793b4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\"" Dec 13 03:44:24.121798 env[1559]: time="2024-12-13T03:44:24.121782722Z" level=info msg="CreateContainer within sandbox \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:44:24.127054 env[1559]: time="2024-12-13T03:44:24.126996453Z" level=info msg="CreateContainer within sandbox \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\"" Dec 13 03:44:24.127263 env[1559]: time="2024-12-13T03:44:24.127226122Z" level=info msg="StartContainer for \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\"" Dec 13 03:44:24.130420 sshd[4896]: Accepted publickey for core from 139.178.68.195 port 55806 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:44:24.131217 sshd[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:44:24.133669 systemd-logind[1551]: New session 29 of user core. Dec 13 03:44:24.134847 systemd[1]: Started cri-containerd-28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24.scope. Dec 13 03:44:24.135279 systemd[1]: Started session-29.scope. Dec 13 03:44:24.140273 systemd[1]: cri-containerd-28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24.scope: Deactivated successfully. Dec 13 03:44:24.140427 systemd[1]: Stopped cri-containerd-28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24.scope. 
Dec 13 03:44:24.165318 env[1559]: time="2024-12-13T03:44:24.165164221Z" level=info msg="shim disconnected" id=28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24 Dec 13 03:44:24.165318 env[1559]: time="2024-12-13T03:44:24.165288147Z" level=warning msg="cleaning up after shim disconnected" id=28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24 namespace=k8s.io Dec 13 03:44:24.165318 env[1559]: time="2024-12-13T03:44:24.165318397Z" level=info msg="cleaning up dead shim" Dec 13 03:44:24.183598 env[1559]: time="2024-12-13T03:44:24.183471620Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4966 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T03:44:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 03:44:24.184301 env[1559]: time="2024-12-13T03:44:24.184022784Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Dec 13 03:44:24.184706 env[1559]: time="2024-12-13T03:44:24.184543733Z" level=error msg="Failed to pipe stdout of container \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\"" error="reading from a closed fifo" Dec 13 03:44:24.184921 env[1559]: time="2024-12-13T03:44:24.184627543Z" level=error msg="Failed to pipe stderr of container \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\"" error="reading from a closed fifo" Dec 13 03:44:24.188664 env[1559]: time="2024-12-13T03:44:24.188494139Z" level=error msg="StartContainer for \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 03:44:24.189128 kubelet[2597]: E1213 03:44:24.189005 2597 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24" Dec 13 03:44:24.189457 kubelet[2597]: E1213 03:44:24.189395 2597 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 03:44:24.189457 kubelet[2597]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 03:44:24.189457 kubelet[2597]: rm /hostbin/cilium-mount Dec 13 03:44:24.189794 kubelet[2597]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w56dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kv287_kube-system(d39057d3-01c1-4ab9-bfd5-4304f7793b4b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 03:44:24.189794 kubelet[2597]: E1213 03:44:24.189474 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kv287" podUID="d39057d3-01c1-4ab9-bfd5-4304f7793b4b" Dec 13 03:44:24.925742 env[1559]: time="2024-12-13T03:44:24.925644550Z" level=info msg="StopPodSandbox for \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\"" Dec 13 03:44:24.926053 env[1559]: time="2024-12-13T03:44:24.925794537Z" level=info msg="Container to stop \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 03:44:24.934576 systemd[1]: cri-containerd-ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8.scope: Deactivated successfully. 
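[Editor's note] The `write /proc/self/attr/keycreate: invalid argument` failure above is the OCI runtime trying to set the SELinux key-creation context for the new process: runc (via go-selinux) writes the container's label to `/proc/self/attr/keycreate`, and a kernel or policy that rejects that label returns EINVAL, which aborts container init. A minimal probe of the same write is sketched below; the label is assembled from the `SELinuxOptions` in the container spec above (`Type:spc_t`, `Level:s0`), while the user/role fields are assumptions.

```go
// Sketch (assumption: this mirrors the go-selinux setKeyLabel path that
// runc takes): write an SELinux label to /proc/self/attr/keycreate and
// report the errno. EINVAL here corresponds to the "invalid argument"
// StartContainer error in the log above. Must run as root on Linux.
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Type and level taken from the pod spec in the log; user and role
	// are illustrative assumptions.
	label := "system_u:system_r:spc_t:s0"

	err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0o600)
	switch {
	case errors.Is(err, syscall.EINVAL):
		fmt.Println("EINVAL: kernel/policy rejected the keycreate label, as in the log")
	case err != nil:
		fmt.Println("write failed:", err)
	default:
		fmt.Println("keycreate label accepted")
	}
}
```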
Dec 13 03:44:24.957480 env[1559]: time="2024-12-13T03:44:24.957446737Z" level=info msg="shim disconnected" id=ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8 Dec 13 03:44:24.957480 env[1559]: time="2024-12-13T03:44:24.957480930Z" level=warning msg="cleaning up after shim disconnected" id=ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8 namespace=k8s.io Dec 13 03:44:24.957639 env[1559]: time="2024-12-13T03:44:24.957491720Z" level=info msg="cleaning up dead shim" Dec 13 03:44:24.962324 env[1559]: time="2024-12-13T03:44:24.962299530Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5016 runtime=io.containerd.runc.v2\n" Dec 13 03:44:24.962544 env[1559]: time="2024-12-13T03:44:24.962522538Z" level=info msg="TearDown network for sandbox \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" successfully" Dec 13 03:44:24.962544 env[1559]: time="2024-12-13T03:44:24.962541632Z" level=info msg="StopPodSandbox for \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" returns successfully" Dec 13 03:44:25.073685 kubelet[2597]: I1213 03:44:25.073576 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-net\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.073708 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-clustermesh-secrets\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.073735 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.073799 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-lib-modules\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.073866 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hostproc\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.073933 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.073952 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hubble-tls\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074060 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hostproc" (OuterVolumeSpecName: "hostproc") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074081 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-run\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074143 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074212 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-xtables-lock\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074285 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-config-path\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074333 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cni-path\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074332 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074412 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w56dl\" (UniqueName: \"kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-kube-api-access-w56dl\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.074854 kubelet[2597]: I1213 03:44:25.074461 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-etc-cni-netd\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074456 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cni-path" (OuterVolumeSpecName: "cni-path") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074507 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-kernel\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074578 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-cgroup\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074586 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074611 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074665 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-ipsec-secrets\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074708 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074744 2597 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-bpf-maps\") pod \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\" (UID: \"d39057d3-01c1-4ab9-bfd5-4304f7793b4b\") " Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074855 2597 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-etc-cni-netd\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074912 2597 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074918 2597 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-cgroup\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074913 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074926 2597 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-lib-modules\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074934 2597 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-host-proc-sys-net\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074942 2597 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hostproc\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074947 2597 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-run\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.075197 kubelet[2597]: I1213 03:44:25.074952 2597 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-xtables-lock\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.074942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8-rootfs.mount: Deactivated successfully. 
Dec 13 03:44:25.075604 kubelet[2597]: I1213 03:44:25.074956 2597 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cni-path\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.074996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8-shm.mount: Deactivated successfully. Dec 13 03:44:25.076154 kubelet[2597]: I1213 03:44:25.076115 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:25.076218 kubelet[2597]: I1213 03:44:25.076206 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:25.076258 kubelet[2597]: I1213 03:44:25.076222 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:25.076258 kubelet[2597]: I1213 03:44:25.076246 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:25.076307 kubelet[2597]: I1213 03:44:25.076288 2597 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-kube-api-access-w56dl" (OuterVolumeSpecName: "kube-api-access-w56dl") pod "d39057d3-01c1-4ab9-bfd5-4304f7793b4b" (UID: "d39057d3-01c1-4ab9-bfd5-4304f7793b4b"). InnerVolumeSpecName "kube-api-access-w56dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:25.076997 systemd[1]: var-lib-kubelet-pods-d39057d3\x2d01c1\x2d4ab9\x2dbfd5\x2d4304f7793b4b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw56dl.mount: Deactivated successfully. Dec 13 03:44:25.077043 systemd[1]: var-lib-kubelet-pods-d39057d3\x2d01c1\x2d4ab9\x2dbfd5\x2d4304f7793b4b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 03:44:25.077079 systemd[1]: var-lib-kubelet-pods-d39057d3\x2d01c1\x2d4ab9\x2dbfd5\x2d4304f7793b4b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 03:44:25.077113 systemd[1]: var-lib-kubelet-pods-d39057d3\x2d01c1\x2d4ab9\x2dbfd5\x2d4304f7793b4b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 03:44:25.175452 kubelet[2597]: I1213 03:44:25.175323 2597 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-hubble-tls\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.175452 kubelet[2597]: I1213 03:44:25.175406 2597 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-config-path\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.175452 kubelet[2597]: I1213 03:44:25.175439 2597 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w56dl\" (UniqueName: \"kubernetes.io/projected/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-kube-api-access-w56dl\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.175452 kubelet[2597]: I1213 03:44:25.175470 2597 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.176163 kubelet[2597]: I1213 03:44:25.175498 2597 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-bpf-maps\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.176163 kubelet[2597]: I1213 03:44:25.175526 2597 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d39057d3-01c1-4ab9-bfd5-4304f7793b4b-clustermesh-secrets\") on node \"ci-3510.3.6-a-ab200a80e9\" DevicePath \"\"" Dec 13 03:44:25.706039 systemd[1]: Removed slice kubepods-burstable-podd39057d3_01c1_4ab9_bfd5_4304f7793b4b.slice. Dec 13 03:44:25.931119 kubelet[2597]: I1213 03:44:25.931027 2597 scope.go:117] "RemoveContainer" containerID="28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24" Dec 13 03:44:25.933616 env[1559]: time="2024-12-13T03:44:25.933495383Z" level=info msg="RemoveContainer for \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\"" Dec 13 03:44:25.937700 env[1559]: time="2024-12-13T03:44:25.937630580Z" level=info msg="RemoveContainer for \"28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24\" returns successfully" Dec 13 03:44:25.962990 kubelet[2597]: I1213 03:44:25.962927 2597 topology_manager.go:215] "Topology Admit Handler" podUID="d2cba16f-b1f8-4e06-9643-0fd21607ded8" podNamespace="kube-system" podName="cilium-h5xbb" Dec 13 03:44:25.962990 kubelet[2597]: E1213 03:44:25.962959 2597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d39057d3-01c1-4ab9-bfd5-4304f7793b4b" containerName="mount-cgroup" Dec 13 03:44:25.962990 kubelet[2597]: I1213 03:44:25.962976 2597 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39057d3-01c1-4ab9-bfd5-4304f7793b4b" containerName="mount-cgroup" Dec 13 03:44:25.967127 systemd[1]: Created slice kubepods-burstable-podd2cba16f_b1f8_4e06_9643_0fd21607ded8.slice. 
Dec 13 03:44:26.083110 kubelet[2597]: I1213 03:44:26.082971 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-bpf-maps\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.083110 kubelet[2597]: I1213 03:44:26.083079 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtghx\" (UniqueName: \"kubernetes.io/projected/d2cba16f-b1f8-4e06-9643-0fd21607ded8-kube-api-access-dtghx\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083235 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-cilium-cgroup\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083331 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2cba16f-b1f8-4e06-9643-0fd21607ded8-cilium-config-path\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083408 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-host-proc-sys-kernel\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083456 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-lib-modules\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083504 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2cba16f-b1f8-4e06-9643-0fd21607ded8-hubble-tls\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083583 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-hostproc\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083630 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2cba16f-b1f8-4e06-9643-0fd21607ded8-clustermesh-secrets\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083676 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-cni-path\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083746 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-cilium-run\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083809 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-xtables-lock\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083891 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d2cba16f-b1f8-4e06-9643-0fd21607ded8-cilium-ipsec-secrets\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083948 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-etc-cni-netd\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.084320 kubelet[2597]: I1213 03:44:26.083999 2597 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2cba16f-b1f8-4e06-9643-0fd21607ded8-host-proc-sys-net\") pod \"cilium-h5xbb\" (UID: \"d2cba16f-b1f8-4e06-9643-0fd21607ded8\") " pod="kube-system/cilium-h5xbb"
Dec 13 03:44:26.269865 env[1559]: time="2024-12-13T03:44:26.269742901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5xbb,Uid:d2cba16f-b1f8-4e06-9643-0fd21607ded8,Namespace:kube-system,Attempt:0,}"
Dec 13 03:44:26.277699 env[1559]: time="2024-12-13T03:44:26.277664746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:44:26.277699 env[1559]: time="2024-12-13T03:44:26.277686329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:44:26.277699 env[1559]: time="2024-12-13T03:44:26.277693199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:44:26.277824 env[1559]: time="2024-12-13T03:44:26.277757562Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08 pid=5042 runtime=io.containerd.runc.v2
Dec 13 03:44:26.285005 systemd[1]: Started cri-containerd-5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08.scope.
Dec 13 03:44:26.294680 env[1559]: time="2024-12-13T03:44:26.294625278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5xbb,Uid:d2cba16f-b1f8-4e06-9643-0fd21607ded8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\""
Dec 13 03:44:26.295964 env[1559]: time="2024-12-13T03:44:26.295949417Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 03:44:26.302364 env[1559]: time="2024-12-13T03:44:26.302316524Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b\""
Dec 13 03:44:26.302612 env[1559]: time="2024-12-13T03:44:26.302569218Z" level=info msg="StartContainer for \"fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b\""
Dec 13 03:44:26.310053 systemd[1]: Started cri-containerd-fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b.scope.
Dec 13 03:44:26.323916 env[1559]: time="2024-12-13T03:44:26.323858394Z" level=info msg="StartContainer for \"fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b\" returns successfully"
Dec 13 03:44:26.329856 systemd[1]: cri-containerd-fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b.scope: Deactivated successfully.
Dec 13 03:44:26.353819 env[1559]: time="2024-12-13T03:44:26.353749216Z" level=info msg="shim disconnected" id=fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b
Dec 13 03:44:26.353819 env[1559]: time="2024-12-13T03:44:26.353788172Z" level=warning msg="cleaning up after shim disconnected" id=fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b namespace=k8s.io
Dec 13 03:44:26.353819 env[1559]: time="2024-12-13T03:44:26.353799743Z" level=info msg="cleaning up dead shim"
Dec 13 03:44:26.359095 env[1559]: time="2024-12-13T03:44:26.359032494Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5124 runtime=io.containerd.runc.v2\n"
Dec 13 03:44:26.779203 kubelet[2597]: I1213 03:44:26.779059 2597 setters.go:580] "Node became not ready" node="ci-3510.3.6-a-ab200a80e9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T03:44:26Z","lastTransitionTime":"2024-12-13T03:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 03:44:26.944091 env[1559]: time="2024-12-13T03:44:26.943965425Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 03:44:26.961734 env[1559]: time="2024-12-13T03:44:26.961612433Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76\""
Dec 13 03:44:26.962628 env[1559]: time="2024-12-13T03:44:26.962525821Z" level=info msg="StartContainer for \"529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76\""
Dec 13 03:44:26.989145 systemd[1]: Started cri-containerd-529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76.scope.
Dec 13 03:44:27.018837 env[1559]: time="2024-12-13T03:44:27.018771805Z" level=info msg="StartContainer for \"529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76\" returns successfully"
Dec 13 03:44:27.030937 systemd[1]: cri-containerd-529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76.scope: Deactivated successfully.
Dec 13 03:44:27.084786 env[1559]: time="2024-12-13T03:44:27.084665131Z" level=info msg="shim disconnected" id=529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76
Dec 13 03:44:27.085201 env[1559]: time="2024-12-13T03:44:27.084786044Z" level=warning msg="cleaning up after shim disconnected" id=529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76 namespace=k8s.io
Dec 13 03:44:27.085201 env[1559]: time="2024-12-13T03:44:27.084817279Z" level=info msg="cleaning up dead shim"
Dec 13 03:44:27.103240 env[1559]: time="2024-12-13T03:44:27.103100786Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5184 runtime=io.containerd.runc.v2\n"
Dec 13 03:44:27.272455 kubelet[2597]: W1213 03:44:27.272280 2597 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd39057d3_01c1_4ab9_bfd5_4304f7793b4b.slice/cri-containerd-28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24.scope WatchSource:0}: container "28ac70da3302eb51af83bab1347066fc0a182c8c75e7bda8ee87811a60ceaa24" in namespace "k8s.io": not found
Dec 13 03:44:27.697204 env[1559]: time="2024-12-13T03:44:27.697127989Z" level=info msg="StopPodSandbox for \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\""
Dec 13 03:44:27.697204 env[1559]: time="2024-12-13T03:44:27.697183712Z" level=info msg="TearDown network for sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" successfully"
Dec 13 03:44:27.697330 env[1559]: time="2024-12-13T03:44:27.697208395Z" level=info msg="StopPodSandbox for \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" returns successfully"
Dec 13 03:44:27.697453 env[1559]: time="2024-12-13T03:44:27.697409620Z" level=info msg="RemovePodSandbox for \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\""
Dec 13 03:44:27.697453 env[1559]: time="2024-12-13T03:44:27.697427491Z" level=info msg="Forcibly stopping sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\""
Dec 13 03:44:27.697528 env[1559]: time="2024-12-13T03:44:27.697477565Z" level=info msg="TearDown network for sandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" successfully"
Dec 13 03:44:27.699737 env[1559]: time="2024-12-13T03:44:27.699714181Z" level=info msg="RemovePodSandbox \"27ba9c26bfe86156a98e6942749acb6b1d6863300bf60846ee9714f31011d8f0\" returns successfully"
Dec 13 03:44:27.699970 env[1559]: time="2024-12-13T03:44:27.699951583Z" level=info msg="StopPodSandbox for \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\""
Dec 13 03:44:27.700019 env[1559]: time="2024-12-13T03:44:27.699998213Z" level=info msg="TearDown network for sandbox \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" successfully"
Dec 13 03:44:27.700057 env[1559]: time="2024-12-13T03:44:27.700021129Z" level=info msg="StopPodSandbox for \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" returns successfully"
Dec 13 03:44:27.700240 env[1559]: time="2024-12-13T03:44:27.700224662Z" level=info msg="RemovePodSandbox for \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\""
Dec 13 03:44:27.700316 env[1559]: time="2024-12-13T03:44:27.700241889Z" level=info msg="Forcibly stopping sandbox \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\""
Dec 13 03:44:27.700316 env[1559]: time="2024-12-13T03:44:27.700289338Z" level=info msg="TearDown network for sandbox \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" successfully"
Dec 13 03:44:27.700606 kubelet[2597]: I1213 03:44:27.700590 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39057d3-01c1-4ab9-bfd5-4304f7793b4b" path="/var/lib/kubelet/pods/d39057d3-01c1-4ab9-bfd5-4304f7793b4b/volumes"
Dec 13 03:44:27.701607 env[1559]: time="2024-12-13T03:44:27.701562627Z" level=info msg="RemovePodSandbox \"ad841f288d77df11273b12511cc2f50cfd8af67471d2f2280425622dcfe43cc8\" returns successfully"
Dec 13 03:44:27.701787 env[1559]: time="2024-12-13T03:44:27.701741390Z" level=info msg="StopPodSandbox for \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\""
Dec 13 03:44:27.701846 env[1559]: time="2024-12-13T03:44:27.701784153Z" level=info msg="TearDown network for sandbox \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" successfully"
Dec 13 03:44:27.701846 env[1559]: time="2024-12-13T03:44:27.701805069Z" level=info msg="StopPodSandbox for \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" returns successfully"
Dec 13 03:44:27.702023 env[1559]: time="2024-12-13T03:44:27.701976368Z" level=info msg="RemovePodSandbox for \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\""
Dec 13 03:44:27.702023 env[1559]: time="2024-12-13T03:44:27.701998384Z" level=info msg="Forcibly stopping sandbox \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\""
Dec 13 03:44:27.702101 env[1559]: time="2024-12-13T03:44:27.702049999Z" level=info msg="TearDown network for sandbox \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" successfully"
Dec 13 03:44:27.703349 env[1559]: time="2024-12-13T03:44:27.703309392Z" level=info msg="RemovePodSandbox \"77a60877b58c0fcd2519588a29ed2fe4f1733b8ebcceb1c8eb95159e72705c7f\" returns successfully"
Dec 13 03:44:27.841294 kubelet[2597]: E1213 03:44:27.841217 2597 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 03:44:27.950989 env[1559]: time="2024-12-13T03:44:27.950734737Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 03:44:27.965131 env[1559]: time="2024-12-13T03:44:27.965064999Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788\""
Dec 13 03:44:27.965427 env[1559]: time="2024-12-13T03:44:27.965384186Z" level=info msg="StartContainer for \"c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788\""
Dec 13 03:44:27.967047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843660390.mount: Deactivated successfully.
Dec 13 03:44:27.975470 systemd[1]: Started cri-containerd-c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788.scope.
Dec 13 03:44:27.988696 env[1559]: time="2024-12-13T03:44:27.988668214Z" level=info msg="StartContainer for \"c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788\" returns successfully"
Dec 13 03:44:27.990178 systemd[1]: cri-containerd-c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788.scope: Deactivated successfully.
Dec 13 03:44:28.012916 env[1559]: time="2024-12-13T03:44:28.012875976Z" level=info msg="shim disconnected" id=c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788
Dec 13 03:44:28.012916 env[1559]: time="2024-12-13T03:44:28.012913420Z" level=warning msg="cleaning up after shim disconnected" id=c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788 namespace=k8s.io
Dec 13 03:44:28.013088 env[1559]: time="2024-12-13T03:44:28.012922424Z" level=info msg="cleaning up dead shim"
Dec 13 03:44:28.018768 env[1559]: time="2024-12-13T03:44:28.018743270Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5241 runtime=io.containerd.runc.v2\n"
Dec 13 03:44:28.195538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788-rootfs.mount: Deactivated successfully.
Dec 13 03:44:28.958627 env[1559]: time="2024-12-13T03:44:28.958533845Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 03:44:28.967878 env[1559]: time="2024-12-13T03:44:28.967827215Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9\""
Dec 13 03:44:28.968236 env[1559]: time="2024-12-13T03:44:28.968220999Z" level=info msg="StartContainer for \"b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9\""
Dec 13 03:44:28.978814 systemd[1]: Started cri-containerd-b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9.scope.
Dec 13 03:44:28.989774 systemd[1]: cri-containerd-b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9.scope: Deactivated successfully.
Dec 13 03:44:29.005374 env[1559]: time="2024-12-13T03:44:29.005251293Z" level=info msg="StartContainer for \"b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9\" returns successfully"
Dec 13 03:44:29.044019 env[1559]: time="2024-12-13T03:44:29.043908461Z" level=info msg="shim disconnected" id=b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9
Dec 13 03:44:29.044019 env[1559]: time="2024-12-13T03:44:29.044012971Z" level=warning msg="cleaning up after shim disconnected" id=b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9 namespace=k8s.io
Dec 13 03:44:29.044661 env[1559]: time="2024-12-13T03:44:29.044043652Z" level=info msg="cleaning up dead shim"
Dec 13 03:44:29.060579 env[1559]: time="2024-12-13T03:44:29.060447552Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5294 runtime=io.containerd.runc.v2\n"
Dec 13 03:44:29.199437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9-rootfs.mount: Deactivated successfully.
Dec 13 03:44:29.967876 env[1559]: time="2024-12-13T03:44:29.967768115Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 03:44:29.985134 env[1559]: time="2024-12-13T03:44:29.985083340Z" level=info msg="CreateContainer within sandbox \"5e25b1345d362f53c4b78d5a1ffaad3457bbecfb55f09a34094efedb9054dd08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"39e8a2194985ea3edfb64fca1ad7fbed02777f1687dd6c2d965699d0fadee3b7\""
Dec 13 03:44:29.985382 env[1559]: time="2024-12-13T03:44:29.985369852Z" level=info msg="StartContainer for \"39e8a2194985ea3edfb64fca1ad7fbed02777f1687dd6c2d965699d0fadee3b7\""
Dec 13 03:44:29.994917 systemd[1]: Started cri-containerd-39e8a2194985ea3edfb64fca1ad7fbed02777f1687dd6c2d965699d0fadee3b7.scope.
Dec 13 03:44:30.007262 env[1559]: time="2024-12-13T03:44:30.007237815Z" level=info msg="StartContainer for \"39e8a2194985ea3edfb64fca1ad7fbed02777f1687dd6c2d965699d0fadee3b7\" returns successfully"
Dec 13 03:44:30.190403 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 03:44:30.385479 kubelet[2597]: W1213 03:44:30.385372 2597 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2cba16f_b1f8_4e06_9643_0fd21607ded8.slice/cri-containerd-fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b.scope WatchSource:0}: task fdc58ddefab19f5742b3d9f3e7e57139e208f2d7ec152d3581dafe2bfde98b1b not found: not found
Dec 13 03:44:30.974127 kubelet[2597]: I1213 03:44:30.974096 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h5xbb" podStartSLOduration=5.9740853099999995 podStartE2EDuration="5.97408531s" podCreationTimestamp="2024-12-13 03:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:44:30.974016701 +0000 UTC m=+423.371945947" watchObservedRunningTime="2024-12-13 03:44:30.97408531 +0000 UTC m=+423.372014552"
Dec 13 03:44:31.698678 kubelet[2597]: E1213 03:44:31.698619 2597 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-v7zpg" podUID="41a70788-bd3a-4b1f-ad6b-ff0ca55b2602"
Dec 13 03:44:33.382343 systemd-networkd[1307]: lxc_health: Link UP
Dec 13 03:44:33.401315 systemd-networkd[1307]: lxc_health: Gained carrier
Dec 13 03:44:33.401484 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 03:44:33.493384 kubelet[2597]: W1213 03:44:33.493322 2597 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2cba16f_b1f8_4e06_9643_0fd21607ded8.slice/cri-containerd-529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76.scope WatchSource:0}: task 529a2c24d59b8c3a6b73c0010c7cd6c0bfef0cbd11fff8d402735e37b528fe76 not found: not found
Dec 13 03:44:35.185529 systemd-networkd[1307]: lxc_health: Gained IPv6LL
Dec 13 03:44:36.598234 kubelet[2597]: W1213 03:44:36.598206 2597 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2cba16f_b1f8_4e06_9643_0fd21607ded8.slice/cri-containerd-c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788.scope WatchSource:0}: task c2af842f16842c392082aa5c3f3f34730488832fab1f75d1a6798d04f7743788 not found: not found
Dec 13 03:44:37.498244 systemd[1]: Started sshd@31-147.75.202.71:22-92.255.85.189:23092.service.
Dec 13 03:44:38.649322 sshd[4896]: pam_unix(sshd:session): session closed for user core
Dec 13 03:44:38.650829 systemd[1]: sshd@30-147.75.202.71:22-139.178.68.195:55806.service: Deactivated successfully.
Dec 13 03:44:38.651243 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 03:44:38.651686 systemd-logind[1551]: Session 29 logged out. Waiting for processes to exit.
Dec 13 03:44:38.652223 systemd-logind[1551]: Removed session 29.
Dec 13 03:44:39.324819 sshd[6101]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.189 user=root
Dec 13 03:44:39.708481 kubelet[2597]: W1213 03:44:39.708235 2597 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2cba16f_b1f8_4e06_9643_0fd21607ded8.slice/cri-containerd-b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9.scope WatchSource:0}: task b07cfa8e4f3d1f195eadabf14744f19dc4e5eac1b7cc48792b1fb2b703a06cd9 not found: not found