Apr 12 20:18:54.554834 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024 Apr 12 20:18:54.554847 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 20:18:54.554854 kernel: BIOS-provided physical RAM map: Apr 12 20:18:54.554858 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Apr 12 20:18:54.554862 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Apr 12 20:18:54.554865 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Apr 12 20:18:54.554870 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Apr 12 20:18:54.554874 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Apr 12 20:18:54.554878 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000820e1fff] usable Apr 12 20:18:54.554882 kernel: BIOS-e820: [mem 0x00000000820e2000-0x00000000820e2fff] ACPI NVS Apr 12 20:18:54.554886 kernel: BIOS-e820: [mem 0x00000000820e3000-0x00000000820e3fff] reserved Apr 12 20:18:54.554890 kernel: BIOS-e820: [mem 0x00000000820e4000-0x000000008afccfff] usable Apr 12 20:18:54.554894 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Apr 12 20:18:54.554898 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Apr 12 20:18:54.554903 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Apr 12 20:18:54.554908 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Apr 12 20:18:54.554913 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Apr 12 20:18:54.554917 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Apr 12 20:18:54.554921 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Apr 12 20:18:54.554925 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Apr 12 20:18:54.554929 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Apr 12 20:18:54.554934 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Apr 12 20:18:54.554938 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Apr 12 20:18:54.554942 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Apr 12 20:18:54.554946 kernel: NX (Execute Disable) protection: active Apr 12 20:18:54.554951 kernel: SMBIOS 3.2.1 present. 
Apr 12 20:18:54.554956 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Apr 12 20:18:54.554960 kernel: tsc: Detected 3400.000 MHz processor Apr 12 20:18:54.554964 kernel: tsc: Detected 3399.906 MHz TSC Apr 12 20:18:54.554969 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 12 20:18:54.554974 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 12 20:18:54.554978 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Apr 12 20:18:54.554982 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 12 20:18:54.554987 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Apr 12 20:18:54.554991 kernel: Using GB pages for direct mapping Apr 12 20:18:54.554996 kernel: ACPI: Early table checksum verification disabled Apr 12 20:18:54.555001 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Apr 12 20:18:54.555005 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Apr 12 20:18:54.555010 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Apr 12 20:18:54.555014 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Apr 12 20:18:54.555020 kernel: ACPI: FACS 0x000000008C66CF80 000040 Apr 12 20:18:54.555025 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Apr 12 20:18:54.555030 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Apr 12 20:18:54.555035 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Apr 12 20:18:54.555040 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Apr 12 20:18:54.555045 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Apr 12 20:18:54.555049 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Apr 12 20:18:54.555054 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Apr 12 20:18:54.555059 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Apr 12 20:18:54.555063 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 12 20:18:54.555069 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Apr 12 20:18:54.555074 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Apr 12 20:18:54.555078 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 12 20:18:54.555083 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 12 20:18:54.555088 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Apr 12 20:18:54.555092 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Apr 12 20:18:54.555097 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 12 20:18:54.555102 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Apr 12 20:18:54.555107 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Apr 12 20:18:54.555112 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Apr 12 20:18:54.555117 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Apr 12 20:18:54.555121 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Apr 12 
20:18:54.555126 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Apr 12 20:18:54.555131 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Apr 12 20:18:54.555136 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Apr 12 20:18:54.555140 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Apr 12 20:18:54.555145 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Apr 12 20:18:54.555151 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Apr 12 20:18:54.555155 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Apr 12 20:18:54.555160 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Apr 12 20:18:54.555165 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Apr 12 20:18:54.555169 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Apr 12 20:18:54.555174 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Apr 12 20:18:54.555179 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Apr 12 20:18:54.555183 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Apr 12 20:18:54.555188 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Apr 12 20:18:54.555193 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Apr 12 20:18:54.555198 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Apr 12 20:18:54.555203 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Apr 12 20:18:54.555208 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Apr 12 20:18:54.555212 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Apr 12 20:18:54.555217 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Apr 12 20:18:54.555222 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Apr 12 20:18:54.555226 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Apr 12 20:18:54.555233 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Apr 12 20:18:54.555239 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Apr 12 20:18:54.555244 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Apr 12 20:18:54.555248 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Apr 12 20:18:54.555253 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Apr 12 20:18:54.555258 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Apr 12 20:18:54.555262 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Apr 12 20:18:54.555267 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Apr 12 20:18:54.555272 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Apr 12 20:18:54.555277 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Apr 12 20:18:54.555296 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Apr 12 20:18:54.555301 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Apr 12 20:18:54.555305 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Apr 12 20:18:54.555310 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Apr 12 
20:18:54.555314 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Apr 12 20:18:54.555319 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Apr 12 20:18:54.555323 kernel: No NUMA configuration found Apr 12 20:18:54.555328 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Apr 12 20:18:54.555333 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Apr 12 20:18:54.555338 kernel: Zone ranges: Apr 12 20:18:54.555343 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 12 20:18:54.555347 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 12 20:18:54.555352 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Apr 12 20:18:54.555356 kernel: Movable zone start for each node Apr 12 20:18:54.555361 kernel: Early memory node ranges Apr 12 20:18:54.555366 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Apr 12 20:18:54.555370 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Apr 12 20:18:54.555375 kernel: node 0: [mem 0x0000000040400000-0x00000000820e1fff] Apr 12 20:18:54.555380 kernel: node 0: [mem 0x00000000820e4000-0x000000008afccfff] Apr 12 20:18:54.555385 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Apr 12 20:18:54.555389 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Apr 12 20:18:54.555394 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Apr 12 20:18:54.555399 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Apr 12 20:18:54.555403 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 12 20:18:54.555411 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Apr 12 20:18:54.555417 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 12 20:18:54.555421 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Apr 12 20:18:54.555426 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Apr 12 20:18:54.555432 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Apr 12 20:18:54.555437 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Apr 12 20:18:54.555442 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Apr 12 20:18:54.555447 kernel: ACPI: PM-Timer IO Port: 0x1808 Apr 12 20:18:54.555452 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Apr 12 20:18:54.555457 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Apr 12 20:18:54.555462 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Apr 12 20:18:54.555467 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Apr 12 20:18:54.555472 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Apr 12 20:18:54.555477 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Apr 12 20:18:54.555482 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Apr 12 20:18:54.555487 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Apr 12 20:18:54.555492 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Apr 12 20:18:54.555497 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Apr 12 20:18:54.555501 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Apr 12 20:18:54.555506 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Apr 12 20:18:54.555512 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Apr 12 20:18:54.555517 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Apr 12 20:18:54.555522 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Apr 12 20:18:54.555526 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x10] high edge lint[0x1]) Apr 12 20:18:54.555531 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Apr 12 20:18:54.555536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 12 20:18:54.555541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 12 20:18:54.555546 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 12 20:18:54.555551 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 12 20:18:54.555557 kernel: TSC deadline timer available Apr 12 20:18:54.555562 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Apr 12 20:18:54.555567 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Apr 12 20:18:54.555572 kernel: Booting paravirtualized kernel on bare hardware Apr 12 20:18:54.555577 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 12 20:18:54.555582 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Apr 12 20:18:54.555587 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Apr 12 20:18:54.555591 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Apr 12 20:18:54.555596 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Apr 12 20:18:54.555602 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Apr 12 20:18:54.555607 kernel: Policy zone: Normal Apr 12 20:18:54.555612 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 20:18:54.555617 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 12 20:18:54.555622 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Apr 12 20:18:54.555627 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Apr 12 20:18:54.555632 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 12 20:18:54.555637 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 730116K reserved, 0K cma-reserved) Apr 12 20:18:54.555643 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Apr 12 20:18:54.555648 kernel: ftrace: allocating 34508 entries in 135 pages Apr 12 20:18:54.555653 kernel: ftrace: allocated 135 pages with 4 groups Apr 12 20:18:54.555658 kernel: rcu: Hierarchical RCU implementation. Apr 12 20:18:54.555663 kernel: rcu: RCU event tracing is enabled. Apr 12 20:18:54.555668 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Apr 12 20:18:54.555673 kernel: Rude variant of Tasks RCU enabled. Apr 12 20:18:54.555678 kernel: Tracing variant of Tasks RCU enabled. Apr 12 20:18:54.555683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 12 20:18:54.555689 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Apr 12 20:18:54.555694 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Apr 12 20:18:54.555699 kernel: random: crng init done Apr 12 20:18:54.555703 kernel: Console: colour dummy device 80x25 Apr 12 20:18:54.555708 kernel: printk: console [tty0] enabled Apr 12 20:18:54.555713 kernel: printk: console [ttyS1] enabled Apr 12 20:18:54.555718 kernel: ACPI: Core revision 20210730 Apr 12 20:18:54.555723 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Apr 12 20:18:54.555728 kernel: APIC: Switch to symmetric I/O mode setup Apr 12 20:18:54.555734 kernel: DMAR: Host address width 39 Apr 12 20:18:54.555739 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Apr 12 20:18:54.555744 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Apr 12 20:18:54.555749 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Apr 12 20:18:54.555753 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Apr 12 20:18:54.555758 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Apr 12 20:18:54.555763 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Apr 12 20:18:54.555768 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Apr 12 20:18:54.555773 kernel: x2apic enabled Apr 12 20:18:54.555779 kernel: Switched APIC routing to cluster x2apic. Apr 12 20:18:54.555784 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Apr 12 20:18:54.555789 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Apr 12 20:18:54.555794 kernel: CPU0: Thermal monitoring enabled (TM1) Apr 12 20:18:54.555799 kernel: process: using mwait in idle threads Apr 12 20:18:54.555803 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 12 20:18:54.555808 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Apr 12 20:18:54.555813 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 12 20:18:54.555818 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Apr 12 20:18:54.555824 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 12 20:18:54.555829 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 12 20:18:54.555833 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Apr 12 20:18:54.555839 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 12 20:18:54.555843 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Apr 12 20:18:54.555848 kernel: RETBleed: Mitigation: Enhanced IBRS Apr 12 20:18:54.555853 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 12 20:18:54.555858 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Apr 12 20:18:54.555863 kernel: TAA: Mitigation: TSX disabled Apr 12 20:18:54.555868 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Apr 12 20:18:54.555873 kernel: SRBDS: Mitigation: Microcode Apr 12 20:18:54.555878 kernel: GDS: Vulnerable: No microcode Apr 12 20:18:54.555883 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 12 20:18:54.555888 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 12 20:18:54.555893 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 12 20:18:54.555897 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 12 20:18:54.555902 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 12 20:18:54.555907 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 12 20:18:54.555912 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 12 20:18:54.555917 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 12 20:18:54.555922 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Apr 12 20:18:54.555926 kernel: Freeing SMP alternatives memory: 32K Apr 12 20:18:54.555932 kernel: pid_max: default: 32768 minimum: 301 Apr 12 20:18:54.555937 kernel: LSM: Security Framework initializing Apr 12 20:18:54.555942 kernel: SELinux: Initializing. Apr 12 20:18:54.555946 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 12 20:18:54.555951 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 12 20:18:54.555956 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Apr 12 20:18:54.555961 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Apr 12 20:18:54.555966 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Apr 12 20:18:54.555971 kernel: ... version: 4 Apr 12 20:18:54.555976 kernel: ... bit width: 48 Apr 12 20:18:54.555981 kernel: ... generic registers: 4 Apr 12 20:18:54.555987 kernel: ... value mask: 0000ffffffffffff Apr 12 20:18:54.555991 kernel: ... max period: 00007fffffffffff Apr 12 20:18:54.555996 kernel: ... fixed-purpose events: 3 Apr 12 20:18:54.556001 kernel: ... event mask: 000000070000000f Apr 12 20:18:54.556006 kernel: signal: max sigframe size: 2032 Apr 12 20:18:54.556011 kernel: rcu: Hierarchical SRCU implementation. Apr 12 20:18:54.556016 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Apr 12 20:18:54.556021 kernel: smp: Bringing up secondary CPUs ... Apr 12 20:18:54.556026 kernel: x86: Booting SMP configuration: Apr 12 20:18:54.556031 kernel: .... 
node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Apr 12 20:18:54.556036 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 12 20:18:54.556041 kernel: #9 #10 #11 #12 #13 #14 #15 Apr 12 20:18:54.556046 kernel: smp: Brought up 1 node, 16 CPUs Apr 12 20:18:54.556051 kernel: smpboot: Max logical packages: 1 Apr 12 20:18:54.556056 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Apr 12 20:18:54.556061 kernel: devtmpfs: initialized Apr 12 20:18:54.556066 kernel: x86/mm: Memory block size: 128MB Apr 12 20:18:54.556071 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x820e2000-0x820e2fff] (4096 bytes) Apr 12 20:18:54.556077 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Apr 12 20:18:54.556082 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 12 20:18:54.556086 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 12 20:18:54.556091 kernel: pinctrl core: initialized pinctrl subsystem Apr 12 20:18:54.556097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 12 20:18:54.556101 kernel: audit: initializing netlink subsys (disabled) Apr 12 20:18:54.556106 kernel: audit: type=2000 audit(1712953129.041:1): state=initialized audit_enabled=0 res=1 Apr 12 20:18:54.556111 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 12 20:18:54.556116 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 12 20:18:54.556122 kernel: cpuidle: using governor menu Apr 12 20:18:54.556127 kernel: ACPI: bus type PCI registered Apr 12 20:18:54.556131 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 12 20:18:54.556136 kernel: dca service started, version 1.12.1 Apr 12 20:18:54.556141 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Apr 12 20:18:54.556146 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Apr 12 20:18:54.556151 kernel: PCI: Using configuration type 1 for base access Apr 12 20:18:54.556156 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Apr 12 20:18:54.556161 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 12 20:18:54.556166 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Apr 12 20:18:54.556171 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Apr 12 20:18:54.556176 kernel: ACPI: Added _OSI(Module Device) Apr 12 20:18:54.556181 kernel: ACPI: Added _OSI(Processor Device) Apr 12 20:18:54.556186 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 12 20:18:54.556191 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 12 20:18:54.556196 kernel: ACPI: Added _OSI(Linux-Dell-Video) Apr 12 20:18:54.556200 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Apr 12 20:18:54.556205 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Apr 12 20:18:54.556211 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Apr 12 20:18:54.556216 kernel: ACPI: Dynamic OEM Table Load: Apr 12 20:18:54.556221 kernel: ACPI: SSDT 0xFFFF88C980221D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Apr 12 20:18:54.556226 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Apr 12 20:18:54.556232 kernel: ACPI: Dynamic OEM Table Load: Apr 12 20:18:54.556237 kernel: ACPI: SSDT 0xFFFF88C981AEA400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Apr 12 20:18:54.556242 kernel: ACPI: Dynamic OEM Table Load: Apr 12 20:18:54.556247 kernel: ACPI: SSDT 0xFFFF88C981A60800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Apr 12 20:18:54.556252 kernel: ACPI: Dynamic OEM Table Load: Apr 12 20:18:54.556257 kernel: ACPI: SSDT 0xFFFF88C981B51800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Apr 12 20:18:54.556262 kernel: ACPI: Dynamic OEM Table Load: Apr 12 20:18:54.556267 kernel: ACPI: SSDT 0xFFFF88C980151000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Apr 12 20:18:54.556272 kernel: ACPI: Dynamic OEM Table Load: Apr 12 20:18:54.556277 kernel: ACPI: SSDT 0xFFFF88C981AE8800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Apr 12 20:18:54.556282 kernel: ACPI: Interpreter enabled Apr 12 20:18:54.556286 kernel: ACPI: PM: (supports S0 S5) Apr 12 20:18:54.556291 kernel: ACPI: Using IOAPIC for interrupt routing Apr 12 20:18:54.556296 kernel: HEST: Enabling Firmware First mode for corrected errors. Apr 12 20:18:54.556301 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Apr 12 20:18:54.556307 kernel: HEST: Table parsing has been initialized. Apr 12 20:18:54.556312 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Apr 12 20:18:54.556316 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 12 20:18:54.556321 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Apr 12 20:18:54.556326 kernel: ACPI: PM: Power Resource [USBC] Apr 12 20:18:54.556331 kernel: ACPI: PM: Power Resource [V0PR] Apr 12 20:18:54.556336 kernel: ACPI: PM: Power Resource [V1PR] Apr 12 20:18:54.556341 kernel: ACPI: PM: Power Resource [V2PR] Apr 12 20:18:54.556346 kernel: ACPI: PM: Power Resource [WRST] Apr 12 20:18:54.556351 kernel: ACPI: PM: Power Resource [FN00] Apr 12 20:18:54.556356 kernel: ACPI: PM: Power Resource [FN01] Apr 12 20:18:54.556361 kernel: ACPI: PM: Power Resource [FN02] Apr 12 20:18:54.556366 kernel: ACPI: PM: Power Resource [FN03] Apr 12 20:18:54.556370 kernel: ACPI: PM: Power Resource [FN04] Apr 12 20:18:54.556375 kernel: ACPI: PM: Power Resource [PIN] Apr 12 20:18:54.556380 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Apr 12 20:18:54.556444 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 12 20:18:54.556491 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Apr 12 20:18:54.556533 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Apr 12 20:18:54.556540 kernel: PCI host bridge to bus 0000:00 Apr 12 20:18:54.556583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 12 20:18:54.556621 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 12 20:18:54.556658 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 12 20:18:54.556695 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Apr 12 20:18:54.556733 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Apr 12 20:18:54.556770 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Apr 12 20:18:54.556819 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Apr 12 20:18:54.556870 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Apr 12 20:18:54.556914 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.556960 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Apr 12 20:18:54.557004 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Apr 12 20:18:54.557050 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Apr 12 20:18:54.557093 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Apr 12 20:18:54.557139 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Apr 12 20:18:54.557181 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Apr 12 20:18:54.557225 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Apr 12 20:18:54.557274 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Apr 12 20:18:54.557317 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Apr 12 20:18:54.557358 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Apr 12 20:18:54.557404 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Apr 12 20:18:54.557446 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 12 20:18:54.557493 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Apr 12 20:18:54.557537 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 12 20:18:54.557582 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Apr 12 20:18:54.557625 kernel: pci 0000:00:16.0: reg 0x10: [mem 
0x9551a000-0x9551afff 64bit] Apr 12 20:18:54.557666 kernel: pci 0000:00:16.0: PME# supported from D3hot Apr 12 20:18:54.557711 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Apr 12 20:18:54.557751 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Apr 12 20:18:54.557793 kernel: pci 0000:00:16.1: PME# supported from D3hot Apr 12 20:18:54.557839 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Apr 12 20:18:54.557881 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Apr 12 20:18:54.557922 kernel: pci 0000:00:16.4: PME# supported from D3hot Apr 12 20:18:54.557966 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Apr 12 20:18:54.558008 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Apr 12 20:18:54.558050 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Apr 12 20:18:54.558097 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Apr 12 20:18:54.558141 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Apr 12 20:18:54.558183 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Apr 12 20:18:54.558223 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Apr 12 20:18:54.558267 kernel: pci 0000:00:17.0: PME# supported from D3hot Apr 12 20:18:54.558313 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Apr 12 20:18:54.558356 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.558405 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Apr 12 20:18:54.558449 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.558495 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Apr 12 20:18:54.558539 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.558585 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Apr 12 20:18:54.558627 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.558675 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Apr 12 20:18:54.558719 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.558766 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Apr 12 20:18:54.558809 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 12 20:18:54.558856 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Apr 12 20:18:54.558902 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Apr 12 20:18:54.558944 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Apr 12 20:18:54.558985 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Apr 12 20:18:54.559032 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Apr 12 20:18:54.559074 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Apr 12 20:18:54.559125 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Apr 12 20:18:54.559169 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Apr 12 20:18:54.559213 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Apr 12 20:18:54.559258 kernel: pci 0000:01:00.0: PME# supported from D3cold Apr 12 20:18:54.559302 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 12 20:18:54.559344 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 12 20:18:54.559393 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Apr 12 20:18:54.559438 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit 
pref] Apr 12 20:18:54.559482 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Apr 12 20:18:54.559525 kernel: pci 0000:01:00.1: PME# supported from D3cold Apr 12 20:18:54.559568 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 12 20:18:54.559610 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 12 20:18:54.559653 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 12 20:18:54.559695 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 12 20:18:54.559739 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 12 20:18:54.559781 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 12 20:18:54.559828 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Apr 12 20:18:54.559908 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Apr 12 20:18:54.559968 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 12 20:18:54.560012 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Apr 12 20:18:54.560054 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 12 20:18:54.560097 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.560142 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 12 20:18:54.560183 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 12 20:18:54.560225 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 12 20:18:54.560307 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 12 20:18:54.560352 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 12 20:18:54.560395 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 12 20:18:54.560439 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Apr 12 20:18:54.560483 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 12 20:18:54.560526 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 12 20:18:54.560568 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 12 20:18:54.560611 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 12 20:18:54.560652 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 12 20:18:54.560694 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 12 20:18:54.560743 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Apr 12 20:18:54.560787 kernel: pci 0000:06:00.0: enabling Extended Tags Apr 12 20:18:54.560832 kernel: pci 0000:06:00.0: supports D1 D2 Apr 12 20:18:54.560874 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 12 20:18:54.560917 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 12 20:18:54.560958 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 12 20:18:54.561000 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 12 20:18:54.561049 kernel: pci_bus 0000:07: extended config space not accessible Apr 12 20:18:54.561099 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Apr 12 20:18:54.561147 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Apr 12 20:18:54.561193 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 12 20:18:54.561257 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Apr 12 20:18:54.561324 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 12 20:18:54.561370 kernel: pci 0000:07:00.0: supports D1 D2 Apr 12 20:18:54.561415 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 
D3hot D3cold Apr 12 20:18:54.561458 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 12 20:18:54.561504 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 12 20:18:54.561548 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 12 20:18:54.561555 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 12 20:18:54.561562 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 12 20:18:54.561567 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 12 20:18:54.561573 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 12 20:18:54.561578 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 12 20:18:54.561583 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 12 20:18:54.561589 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 12 20:18:54.561595 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 12 20:18:54.561600 kernel: iommu: Default domain type: Translated Apr 12 20:18:54.561605 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 12 20:18:54.561649 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Apr 12 20:18:54.561695 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 12 20:18:54.561740 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Apr 12 20:18:54.561747 kernel: vgaarb: loaded Apr 12 20:18:54.561753 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 12 20:18:54.561759 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 12 20:18:54.561765 kernel: PTP clock support registered Apr 12 20:18:54.561770 kernel: PCI: Using ACPI for IRQ routing Apr 12 20:18:54.561776 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 12 20:18:54.561781 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 12 20:18:54.561787 kernel: e820: reserve RAM buffer [mem 0x820e2000-0x83ffffff] Apr 12 20:18:54.561792 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Apr 12 20:18:54.561797 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Apr 12 20:18:54.561802 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 12 20:18:54.561808 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 12 20:18:54.561813 kernel: clocksource: Switched to clocksource tsc-early Apr 12 20:18:54.561818 kernel: VFS: Disk quotas dquot_6.6.0 Apr 12 20:18:54.561823 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 12 20:18:54.561828 kernel: pnp: PnP ACPI init Apr 12 20:18:54.561872 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 12 20:18:54.561915 kernel: pnp 00:02: [dma 0 disabled] Apr 12 20:18:54.561957 kernel: pnp 00:03: [dma 0 disabled] Apr 12 20:18:54.562002 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 12 20:18:54.562040 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 12 20:18:54.562081 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 12 20:18:54.562122 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 12 20:18:54.562159 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Apr 12 20:18:54.562196 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 12 20:18:54.562256 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Apr 12 20:18:54.562314 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 12 20:18:54.562351 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be 
reserved Apr 12 20:18:54.562388 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 12 20:18:54.562425 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 12 20:18:54.562465 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 12 20:18:54.562503 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 12 20:18:54.562542 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 12 20:18:54.562578 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 12 20:18:54.562616 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 12 20:18:54.562652 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 12 20:18:54.562690 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 12 20:18:54.562731 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 12 20:18:54.562738 kernel: pnp: PnP ACPI: found 10 devices Apr 12 20:18:54.562745 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 12 20:18:54.562750 kernel: NET: Registered PF_INET protocol family Apr 12 20:18:54.562756 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 12 20:18:54.562761 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 12 20:18:54.562766 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 12 20:18:54.562772 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 12 20:18:54.562777 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Apr 12 20:18:54.562782 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 12 20:18:54.562788 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 12 20:18:54.562794 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 12 20:18:54.562799 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 12 20:18:54.562804 kernel: NET: Registered PF_XDP protocol family Apr 12 20:18:54.562847 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 12 20:18:54.562889 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 12 20:18:54.562930 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 12 20:18:54.562976 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 12 20:18:54.563021 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 12 20:18:54.563067 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 12 20:18:54.563111 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 12 20:18:54.563153 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 12 20:18:54.563195 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 12 20:18:54.563258 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 12 20:18:54.563319 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 12 20:18:54.563364 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 12 20:18:54.563405 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 12 20:18:54.563446 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 12 20:18:54.563488 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 12 20:18:54.563530 kernel: pci 0000:00:1b.5: bridge 
window [io 0x4000-0x4fff] Apr 12 20:18:54.563572 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 12 20:18:54.563614 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 12 20:18:54.563659 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 12 20:18:54.563701 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 12 20:18:54.563745 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 12 20:18:54.563787 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 12 20:18:54.563829 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 12 20:18:54.563871 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 12 20:18:54.563909 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 12 20:18:54.563947 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 12 20:18:54.563983 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 12 20:18:54.564021 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 12 20:18:54.564058 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 12 20:18:54.564094 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 12 20:18:54.564137 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Apr 12 20:18:54.564177 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 12 20:18:54.564222 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Apr 12 20:18:54.564302 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Apr 12 20:18:54.564344 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 12 20:18:54.564384 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Apr 12 20:18:54.564426 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Apr 12 20:18:54.564466 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Apr 12 20:18:54.564507 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 12 20:18:54.564549 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 12 20:18:54.564557 kernel: PCI: CLS 64 bytes, default 64 Apr 12 20:18:54.564563 kernel: DMAR: No ATSR found Apr 12 20:18:54.564568 kernel: DMAR: No SATC found Apr 12 20:18:54.564573 kernel: DMAR: dmar0: Using Queued invalidation Apr 12 20:18:54.564616 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 12 20:18:54.564659 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 12 20:18:54.564701 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 12 20:18:54.564743 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 12 20:18:54.564787 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 12 20:18:54.564829 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 12 20:18:54.564870 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 12 20:18:54.564911 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 12 20:18:54.564953 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 12 20:18:54.564994 kernel: pci 0000:00:16.1: Adding to iommu group 6 Apr 12 20:18:54.565035 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 12 20:18:54.565076 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 12 20:18:54.565117 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 12 20:18:54.565161 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 12 20:18:54.565204 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 12 20:18:54.565248 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 12 20:18:54.565290 kernel: pci 0000:00:1c.3: 
Adding to iommu group 12 Apr 12 20:18:54.565331 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 12 20:18:54.565372 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 12 20:18:54.565414 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 12 20:18:54.565457 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 12 20:18:54.565502 kernel: pci 0000:01:00.0: Adding to iommu group 1 Apr 12 20:18:54.565547 kernel: pci 0000:01:00.1: Adding to iommu group 1 Apr 12 20:18:54.565589 kernel: pci 0000:03:00.0: Adding to iommu group 15 Apr 12 20:18:54.565633 kernel: pci 0000:04:00.0: Adding to iommu group 16 Apr 12 20:18:54.565676 kernel: pci 0000:06:00.0: Adding to iommu group 17 Apr 12 20:18:54.565721 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 12 20:18:54.565728 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 12 20:18:54.565734 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 12 20:18:54.565740 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Apr 12 20:18:54.565746 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 12 20:18:54.565751 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 12 20:18:54.565756 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 12 20:18:54.565761 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Apr 12 20:18:54.565806 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 12 20:18:54.565814 kernel: Initialise system trusted keyrings Apr 12 20:18:54.565819 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 12 20:18:54.565825 kernel: Key type asymmetric registered Apr 12 20:18:54.565831 kernel: Asymmetric key parser 'x509' registered Apr 12 20:18:54.565836 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 12 20:18:54.565841 kernel: io scheduler mq-deadline registered Apr 12 20:18:54.565846 kernel: io scheduler kyber registered Apr 12 20:18:54.565852 kernel: io scheduler bfq registered Apr 12 20:18:54.565894 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 12 20:18:54.565936 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Apr 12 20:18:54.565979 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Apr 12 20:18:54.566023 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Apr 12 20:18:54.566064 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Apr 12 20:18:54.566107 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Apr 12 20:18:54.566153 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 12 20:18:54.566161 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Apr 12 20:18:54.566166 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Apr 12 20:18:54.566171 kernel: pstore: Registered erst as persistent store backend Apr 12 20:18:54.566178 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 12 20:18:54.566183 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 12 20:18:54.566189 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 12 20:18:54.566194 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 12 20:18:54.566199 kernel: hpet_acpi_add: no address or irqs in _CRS Apr 12 20:18:54.566245 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 12 20:18:54.566253 kernel: i8042: PNP: No PS/2 controller found. 
Apr 12 20:18:54.566291 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 12 20:18:54.566331 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 12 20:18:54.566370 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-04-12T20:18:53 UTC (1712953133) Apr 12 20:18:54.566407 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Apr 12 20:18:54.566414 kernel: fail to initialize ptp_kvm Apr 12 20:18:54.566419 kernel: intel_pstate: Intel P-state driver initializing Apr 12 20:18:54.566425 kernel: intel_pstate: Disabling energy efficiency optimization Apr 12 20:18:54.566430 kernel: intel_pstate: HWP enabled Apr 12 20:18:54.566435 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Apr 12 20:18:54.566441 kernel: vesafb: scrolling: redraw Apr 12 20:18:54.566447 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Apr 12 20:18:54.566452 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000dfa33d59, using 768k, total 768k Apr 12 20:18:54.566457 kernel: Console: switching to colour frame buffer device 128x48 Apr 12 20:18:54.566463 kernel: fb0: VESA VGA frame buffer device Apr 12 20:18:54.566468 kernel: NET: Registered PF_INET6 protocol family Apr 12 20:18:54.566473 kernel: Segment Routing with IPv6 Apr 12 20:18:54.566478 kernel: In-situ OAM (IOAM) with IPv6 Apr 12 20:18:54.566484 kernel: NET: Registered PF_PACKET protocol family Apr 12 20:18:54.566489 kernel: Key type dns_resolver registered Apr 12 20:18:54.566495 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Apr 12 20:18:54.566500 kernel: microcode: Microcode Update Driver: v2.2. Apr 12 20:18:54.566505 kernel: IPI shorthand broadcast: enabled Apr 12 20:18:54.566511 kernel: sched_clock: Marking stable (1682591092, 1339856369)->(4463774000, -1441326539) Apr 12 20:18:54.566516 kernel: registered taskstats version 1 Apr 12 20:18:54.566521 kernel: Loading compiled-in X.509 certificates Apr 12 20:18:54.566526 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4' Apr 12 20:18:54.566531 kernel: Key type .fscrypt registered Apr 12 20:18:54.566536 kernel: Key type fscrypt-provisioning registered Apr 12 20:18:54.566543 kernel: pstore: Using crash dump compression: deflate Apr 12 20:18:54.566548 kernel: ima: Allocated hash algorithm: sha1 Apr 12 20:18:54.566553 kernel: ima: No architecture policies found Apr 12 20:18:54.566559 kernel: Freeing unused kernel image (initmem) memory: 47440K Apr 12 20:18:54.566564 kernel: Write protecting the kernel read-only data: 28672k Apr 12 20:18:54.566569 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Apr 12 20:18:54.566574 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K Apr 12 20:18:54.566580 kernel: Run /init as init process Apr 12 20:18:54.566585 kernel: with arguments: Apr 12 20:18:54.566591 kernel: /init Apr 12 20:18:54.566596 kernel: with environment: Apr 12 20:18:54.566601 kernel: HOME=/ Apr 12 20:18:54.566606 kernel: TERM=linux Apr 12 20:18:54.566611 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 12 20:18:54.566618 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 20:18:54.566624 systemd[1]: Detected architecture x86-64. 
Apr 12 20:18:54.566631 systemd[1]: Running in initrd. Apr 12 20:18:54.566636 systemd[1]: No hostname configured, using default hostname. Apr 12 20:18:54.566641 systemd[1]: Hostname set to . Apr 12 20:18:54.566647 systemd[1]: Initializing machine ID from random generator. Apr 12 20:18:54.566652 systemd[1]: Queued start job for default target initrd.target. Apr 12 20:18:54.566658 systemd[1]: Started systemd-ask-password-console.path. Apr 12 20:18:54.566663 systemd[1]: Reached target cryptsetup.target. Apr 12 20:18:54.566668 systemd[1]: Reached target paths.target. Apr 12 20:18:54.566673 systemd[1]: Reached target slices.target. Apr 12 20:18:54.566679 systemd[1]: Reached target swap.target. Apr 12 20:18:54.566684 systemd[1]: Reached target timers.target. Apr 12 20:18:54.566690 systemd[1]: Listening on iscsid.socket. Apr 12 20:18:54.566695 systemd[1]: Listening on iscsiuio.socket. Apr 12 20:18:54.566701 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 20:18:54.566706 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 20:18:54.566712 systemd[1]: Listening on systemd-journald.socket. Apr 12 20:18:54.566718 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Apr 12 20:18:54.566723 systemd[1]: Listening on systemd-networkd.socket. Apr 12 20:18:54.566729 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Apr 12 20:18:54.566734 kernel: clocksource: Switched to clocksource tsc Apr 12 20:18:54.566739 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 20:18:54.566745 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 20:18:54.566750 systemd[1]: Reached target sockets.target. Apr 12 20:18:54.566755 systemd[1]: Starting kmod-static-nodes.service... Apr 12 20:18:54.566761 systemd[1]: Finished network-cleanup.service. Apr 12 20:18:54.566767 systemd[1]: Starting systemd-fsck-usr.service... Apr 12 20:18:54.566772 systemd[1]: Starting systemd-journald.service... Apr 12 20:18:54.566778 systemd[1]: Starting systemd-modules-load.service... Apr 12 20:18:54.566785 systemd-journald[267]: Journal started Apr 12 20:18:54.566811 systemd-journald[267]: Runtime Journal (/run/log/journal/9cbd03ec6a6f4ff2ade7f531db3c9c4a) is 8.0M, max 640.1M, 632.1M free. Apr 12 20:18:54.568922 systemd-modules-load[268]: Inserted module 'overlay' Apr 12 20:18:54.574000 audit: BPF prog-id=6 op=LOAD Apr 12 20:18:54.593237 kernel: audit: type=1334 audit(1712953134.574:2): prog-id=6 op=LOAD Apr 12 20:18:54.593270 systemd[1]: Starting systemd-resolved.service... Apr 12 20:18:54.644264 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 12 20:18:54.644279 systemd[1]: Starting systemd-vconsole-setup.service... Apr 12 20:18:54.678284 kernel: Bridge firewalling registered Apr 12 20:18:54.678300 systemd[1]: Started systemd-journald.service. Apr 12 20:18:54.692263 systemd-modules-load[268]: Inserted module 'br_netfilter' Apr 12 20:18:54.733675 kernel: audit: type=1130 audit(1712953134.691:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.692571 systemd[1]: Finished kmod-static-nodes.service. 
Apr 12 20:18:54.782791 kernel: audit: type=1130 audit(1712953134.740:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.694777 systemd-resolved[271]: Positive Trust Anchors: Apr 12 20:18:54.842995 kernel: SCSI subsystem initialized Apr 12 20:18:54.843008 kernel: audit: type=1130 audit(1712953134.799:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.694783 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 20:18:54.961369 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 20:18:54.961396 kernel: audit: type=1130 audit(1712953134.867:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.961421 kernel: device-mapper: uevent: version 1.0.3 Apr 12 20:18:54.961428 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 20:18:54.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.694802 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 20:18:55.033411 kernel: audit: type=1130 audit(1712953134.960:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:54.696325 systemd-resolved[271]: Defaulting to hostname 'linux'. Apr 12 20:18:54.741338 systemd[1]: Started systemd-resolved.service. Apr 12 20:18:54.800364 systemd[1]: Finished systemd-fsck-usr.service. Apr 12 20:18:54.868568 systemd[1]: Finished systemd-vconsole-setup.service. Apr 12 20:18:54.961570 systemd[1]: Reached target nss-lookup.target. Apr 12 20:18:55.008195 systemd-modules-load[268]: Inserted module 'dm_multipath' Apr 12 20:18:55.042209 systemd[1]: Starting dracut-cmdline-ask.service... Apr 12 20:18:55.050225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Apr 12 20:18:55.050512 systemd[1]: Finished systemd-modules-load.service. Apr 12 20:18:55.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.051367 systemd[1]: Starting systemd-sysctl.service... Apr 12 20:18:55.097277 kernel: audit: type=1130 audit(1712953135.049:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.110031 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 20:18:55.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.118549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 20:18:55.223720 kernel: audit: type=1130 audit(1712953135.117:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.223731 kernel: audit: type=1130 audit(1712953135.173:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.174541 systemd[1]: Finished systemd-sysctl.service. Apr 12 20:18:55.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.232923 systemd[1]: Starting dracut-cmdline.service... Apr 12 20:18:55.253318 dracut-cmdline[292]: dracut-dracut-053 Apr 12 20:18:55.253318 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Apr 12 20:18:55.253318 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 20:18:55.323300 kernel: Loading iSCSI transport class v2.0-870. Apr 12 20:18:55.323313 kernel: iscsi: registered transport (tcp) Apr 12 20:18:55.380518 kernel: iscsi: registered transport (qla4xxx) Apr 12 20:18:55.380569 kernel: QLogic iSCSI HBA Driver Apr 12 20:18:55.396112 systemd[1]: Finished dracut-cmdline.service. Apr 12 20:18:55.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:55.405953 systemd[1]: Starting dracut-pre-udev.service... 
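dracut-cmdline above echoes the kernel command line it is about to act on. The sketch below shows one way such a space-separated string can be split into bare flags and key=value options; it is illustrative only and is not dracut's parser (dracut works in shell inside the initrd), and duplicate keys such as rootflags are simply last-wins here.

    # Split a kernel command line into bare flags and key=value options.
    # Illustrative sketch only.
    def parse_cmdline(cmdline: str):
        flags, options = [], {}
        for token in cmdline.split():
            if "=" in token:
                key, _, value = token.partition("=")
                options[key] = value        # duplicates: last occurrence wins
            else:
                flags.append(token)
        return flags, options

    cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
               "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "root=LABEL=ROOT console=ttyS1,115200n8 flatcar.first_boot=detected")
    flags, options = parse_cmdline(cmdline)
    print(options["mount.usr"])   # -> /dev/mapper/usr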
Apr 12 20:18:55.464317 kernel: raid6: avx2x4 gen() 37047 MB/s Apr 12 20:18:55.499267 kernel: raid6: avx2x4 xor() 21777 MB/s Apr 12 20:18:55.534303 kernel: raid6: avx2x2 gen() 53840 MB/s Apr 12 20:18:55.569267 kernel: raid6: avx2x2 xor() 32055 MB/s Apr 12 20:18:55.604308 kernel: raid6: avx2x1 gen() 45272 MB/s Apr 12 20:18:55.639266 kernel: raid6: avx2x1 xor() 27822 MB/s Apr 12 20:18:55.673266 kernel: raid6: sse2x4 gen() 21355 MB/s Apr 12 20:18:55.707307 kernel: raid6: sse2x4 xor() 11985 MB/s Apr 12 20:18:55.741267 kernel: raid6: sse2x2 gen() 21676 MB/s Apr 12 20:18:55.775267 kernel: raid6: sse2x2 xor() 13392 MB/s Apr 12 20:18:55.809268 kernel: raid6: sse2x1 gen() 18316 MB/s Apr 12 20:18:55.861216 kernel: raid6: sse2x1 xor() 8890 MB/s Apr 12 20:18:55.861234 kernel: raid6: using algorithm avx2x2 gen() 53840 MB/s Apr 12 20:18:55.861243 kernel: raid6: .... xor() 32055 MB/s, rmw enabled Apr 12 20:18:55.879477 kernel: raid6: using avx2x2 recovery algorithm Apr 12 20:18:55.926237 kernel: xor: automatically using best checksumming function avx Apr 12 20:18:56.004240 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 20:18:56.009195 systemd[1]: Finished dracut-pre-udev.service. Apr 12 20:18:56.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:56.017000 audit: BPF prog-id=7 op=LOAD Apr 12 20:18:56.017000 audit: BPF prog-id=8 op=LOAD Apr 12 20:18:56.019228 systemd[1]: Starting systemd-udevd.service... Apr 12 20:18:56.027323 systemd-udevd[475]: Using default interface naming scheme 'v252'. Apr 12 20:18:56.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:56.032600 systemd[1]: Started systemd-udevd.service. Apr 12 20:18:56.074362 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Apr 12 20:18:56.049964 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 20:18:56.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:56.077778 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 20:18:56.091409 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 20:18:56.143638 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 20:18:56.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:56.177241 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 20:18:56.178243 kernel: libata version 3.00 loaded. Apr 12 20:18:56.186243 kernel: ahci 0000:00:17.0: version 3.0 Apr 12 20:18:56.187381 kernel: ACPI: bus type USB registered Apr 12 20:18:56.204242 kernel: AVX2 version of gcm_enc/dec engaged. 
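The raid6 lines above are the kernel benchmarking each available gen()/xor() implementation and then keeping the fastest one, avx2x2 on this machine. A toy sketch of that selection step over the measured figures, assuming the choice is simply the highest generation throughput as the "using algorithm avx2x2 gen() 53840 MB/s" line suggests:

    # Generation throughputs in MB/s as reported by the raid6 benchmark above.
    gen_results = {
        "avx2x4": 37047, "avx2x2": 53840, "avx2x1": 45272,
        "sse2x4": 21355, "sse2x2": 21676, "sse2x1": 18316,
    }

    # Pick the algorithm with the highest generation throughput,
    # mirroring the kernel's "using algorithm avx2x2" decision.
    best = max(gen_results, key=gen_results.get)
    print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")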
Apr 12 20:18:56.204275 kernel: usbcore: registered new interface driver usbfs Apr 12 20:18:56.204283 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Apr 12 20:18:56.204367 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 12 20:18:56.260498 kernel: scsi host0: ahci Apr 12 20:18:56.260587 kernel: usbcore: registered new interface driver hub Apr 12 20:18:56.260602 kernel: scsi host1: ahci Apr 12 20:18:56.297296 kernel: usbcore: registered new device driver usb Apr 12 20:18:56.314654 kernel: scsi host2: ahci Apr 12 20:18:56.346241 kernel: AES CTR mode by8 optimization enabled Apr 12 20:18:56.376247 kernel: scsi host3: ahci Apr 12 20:18:56.402258 kernel: scsi host4: ahci Apr 12 20:18:56.406237 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 12 20:18:56.406256 kernel: scsi host5: ahci Apr 12 20:18:56.406276 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Apr 12 20:18:56.406347 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 12 20:18:56.421283 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Apr 12 20:18:56.468238 kernel: pps pps0: new PPS source ptp0 Apr 12 20:18:56.468315 kernel: scsi host6: ahci Apr 12 20:18:56.495916 kernel: igb 0000:03:00.0: added PHC on eth0 Apr 12 20:18:56.495994 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Apr 12 20:18:56.520424 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 12 20:18:56.520500 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Apr 12 20:18:56.536233 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:54 Apr 12 20:18:56.566685 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Apr 12 20:18:56.566789 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Apr 12 20:18:56.566804 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 12 20:18:56.581563 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Apr 12 20:18:56.652290 kernel: pps pps1: new PPS source ptp1 Apr 12 20:18:56.652364 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Apr 12 20:18:56.652372 kernel: igb 0000:04:00.0: added PHC on eth1 Apr 12 20:18:56.666285 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Apr 12 20:18:56.668117 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Apr 12 20:18:56.681088 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Apr 12 20:18:56.681162 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 12 20:18:56.699088 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Apr 12 20:18:56.715659 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:55 Apr 12 20:18:56.799295 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Apr 12 20:18:56.799370 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Apr 12 20:18:56.927347 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Apr 12 20:18:56.960310 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Apr 12 20:18:56.960634 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 12 20:18:57.061238 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 12 20:18:57.061257 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 12 20:18:57.077236 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 12 20:18:57.093239 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 12 20:18:57.109265 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 12 20:18:57.124235 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 12 20:18:57.140263 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 12 20:18:57.156234 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 12 20:18:57.174269 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 12 20:18:57.224284 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 12 20:18:57.224302 kernel: ata2.00: Features: NCQ-prio Apr 12 20:18:57.224313 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 12 20:18:57.254711 kernel: ata1.00: Features: NCQ-prio Apr 12 20:18:57.273265 kernel: ata2.00: configured for UDMA/133 Apr 12 20:18:57.273280 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Apr 12 20:18:57.273346 kernel: ata1.00: configured for UDMA/133 Apr 12 20:18:57.307285 kernel: port_module: 9 callbacks suppressed Apr 12 20:18:57.307301 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Apr 12 20:18:57.307366 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 12 20:18:57.339235 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Apr 12 20:18:57.339322 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 12 20:18:57.413236 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Apr 12 20:18:57.413312 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 12 20:18:57.447733 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 12 20:18:57.485099 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 12 20:18:57.485173 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 12 20:18:57.485230 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 12 20:18:57.503160 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 12 20:18:57.535075 kernel: hub 1-0:1.0: USB hub found Apr 12 20:18:57.535160 kernel: hub 1-0:1.0: 16 ports detected Apr 12 20:18:57.550236 kernel: ata1.00: Enabling discard_zeroes_data Apr 12 20:18:57.550253 kernel: hub 2-0:1.0: USB hub found Apr 12 20:18:57.554296 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Apr 12 20:18:57.564805 kernel: ata2.00: Enabling discard_zeroes_data Apr 12 20:18:57.564822 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 12 20:18:57.564907 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 12 20:18:57.564969 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 12 20:18:57.565027 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 12 20:18:57.565083 kernel: sd 0:0:0:0: [sda] 
Mode Sense: 00 3a 00 00 Apr 12 20:18:57.565138 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 12 20:18:57.565194 kernel: ata1.00: Enabling discard_zeroes_data Apr 12 20:18:57.580273 kernel: hub 2-0:1.0: 10 ports detected Apr 12 20:18:57.580350 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 20:18:57.580360 kernel: GPT:9289727 != 937703087 Apr 12 20:18:57.580366 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 20:18:57.580373 kernel: GPT:9289727 != 937703087 Apr 12 20:18:57.580379 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 20:18:57.580385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 20:18:57.580392 kernel: ata1.00: Enabling discard_zeroes_data Apr 12 20:18:57.580398 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 12 20:18:57.593610 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Apr 12 20:18:57.593688 kernel: sd 1:0:0:0: [sdb] Write Protect is off Apr 12 20:18:57.717575 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Apr 12 20:18:57.717652 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 12 20:18:57.717713 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 12 20:18:57.815301 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 12 20:18:57.815328 kernel: ata2.00: Enabling discard_zeroes_data Apr 12 20:18:57.936768 kernel: ata2.00: Enabling discard_zeroes_data Apr 12 20:18:57.936785 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Apr 12 20:18:57.951275 kernel: hub 1-14:1.0: USB hub found Apr 12 20:18:57.977234 kernel: hub 1-14:1.0: 4 ports detected Apr 12 20:18:57.993270 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Apr 12 20:18:57.995142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 20:18:58.036492 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (539) Apr 12 20:18:58.036507 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Apr 12 20:18:58.013336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 20:18:58.049973 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 20:18:58.083230 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 20:18:58.123280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 20:18:58.134075 systemd[1]: Starting disk-uuid.service... Apr 12 20:18:58.169359 kernel: ata1.00: Enabling discard_zeroes_data Apr 12 20:18:58.169382 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 20:18:58.169477 disk-uuid[691]: Primary Header is updated. Apr 12 20:18:58.169477 disk-uuid[691]: Secondary Entries is updated. Apr 12 20:18:58.169477 disk-uuid[691]: Secondary Header is updated. 
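The GPT warnings above come from a disk image whose backup header was written for a smaller device: on a GPT disk the alternate header must live in the last LBA, i.e. total sectors minus one, while this primary header still points at LBA 9289727. The disk-uuid step that follows rewrites the headers. A small worked check of that arithmetic, purely as an illustration:

    # Where the alternate (backup) GPT header should live: the disk's last LBA.
    total_sectors = 937703088          # 512-byte sectors reported for sda above
    expected_alt_lba = total_sectors - 1
    claimed_alt_lba = 9289727          # what the primary header currently says

    print(expected_alt_lba)                     # -> 937703087
    print(claimed_alt_lba == expected_alt_lba)  # -> False, hence the kernel warning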
Apr 12 20:18:58.217363 kernel: ata1.00: Enabling discard_zeroes_data Apr 12 20:18:58.217375 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 20:18:58.217382 kernel: ata1.00: Enabling discard_zeroes_data Apr 12 20:18:58.239271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 20:18:58.275239 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 12 20:18:58.398242 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 12 20:18:58.428964 kernel: usbcore: registered new interface driver usbhid Apr 12 20:18:58.428993 kernel: usbhid: USB HID core driver Apr 12 20:18:58.461344 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 12 20:18:58.576263 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 12 20:18:58.576394 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 12 20:18:58.576403 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 12 20:18:59.223280 kernel: ata1.00: Enabling discard_zeroes_data Apr 12 20:18:59.242189 disk-uuid[692]: The operation has completed successfully. Apr 12 20:18:59.250341 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 20:18:59.289015 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 20:18:59.384625 kernel: audit: type=1130 audit(1712953139.295:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.384640 kernel: audit: type=1131 audit(1712953139.295:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.289056 systemd[1]: Finished disk-uuid.service. Apr 12 20:18:59.414274 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 12 20:18:59.302380 systemd[1]: Starting verity-setup.service... Apr 12 20:18:59.446750 systemd[1]: Found device dev-mapper-usr.device. Apr 12 20:18:59.457288 systemd[1]: Mounting sysusr-usr.mount... Apr 12 20:18:59.471437 systemd[1]: Finished verity-setup.service. Apr 12 20:18:59.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.526239 kernel: audit: type=1130 audit(1712953139.478:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.554239 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 20:18:59.554369 systemd[1]: Mounted sysusr-usr.mount. 
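verity-setup above brings up /dev/mapper/usr as a dm-verity target: the /usr partition is read through a device that checks every block against a sha256 hash tree whose root is the verity.usrhash value from the kernel command line. The sketch below only illustrates the per-block hashing idea (one tree level, 4 KiB blocks); the real on-disk format, salting and veritysetup tooling are more involved, and toy_root_hash is an invented name.

    import hashlib

    BLOCK = 4096  # dm-verity's default data block size

    # Hash every 4 KiB block of an image, then hash the concatenated block
    # hashes once more: a one-level stand-in for dm-verity's hash tree.
    # Illustrative only; the real format adds a salt, a superblock and
    # multiple tree levels.
    def toy_root_hash(path: str) -> str:
        level0 = b""
        with open(path, "rb") as img:
            while chunk := img.read(BLOCK):
                level0 += hashlib.sha256(chunk.ljust(BLOCK, b"\0")).digest()
        return hashlib.sha256(level0).hexdigest()

    # A mismatch between this value and the expected root hash would mean the
    # partition was modified, which is what dm-verity rejects at read time.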
Apr 12 20:18:59.561530 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 20:18:59.561923 systemd[1]: Starting ignition-setup.service... Apr 12 20:18:59.649366 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 20:18:59.649381 kernel: BTRFS info (device sda6): using free space tree Apr 12 20:18:59.649389 kernel: BTRFS info (device sda6): has skinny extents Apr 12 20:18:59.649395 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 20:18:59.568791 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 20:18:59.709214 kernel: audit: type=1130 audit(1712953139.660:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.631183 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 20:18:59.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.766237 kernel: audit: type=1130 audit(1712953139.716:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.661595 systemd[1]: Finished ignition-setup.service. Apr 12 20:18:59.774000 audit: BPF prog-id=9 op=LOAD Apr 12 20:18:59.717959 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 20:18:59.812328 kernel: audit: type=1334 audit(1712953139.774:24): prog-id=9 op=LOAD Apr 12 20:18:59.776114 systemd[1]: Starting systemd-networkd.service... Apr 12 20:18:59.812379 systemd-networkd[879]: lo: Link UP Apr 12 20:18:59.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.841258 ignition[868]: Ignition 2.14.0 Apr 12 20:18:59.892372 kernel: audit: type=1130 audit(1712953139.828:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.812382 systemd-networkd[879]: lo: Gained carrier Apr 12 20:18:59.841263 ignition[868]: Stage: fetch-offline Apr 12 20:18:59.812774 systemd-networkd[879]: Enumeration completed Apr 12 20:18:59.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.841288 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 20:19:00.050235 kernel: audit: type=1130 audit(1712953139.917:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:00.050255 kernel: audit: type=1130 audit(1712953139.975:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 20:19:00.050266 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 12 20:18:59.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.812850 systemd[1]: Started systemd-networkd.service. Apr 12 20:19:00.087374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Apr 12 20:18:59.841305 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Apr 12 20:18:59.813577 systemd-networkd[879]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 20:19:00.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.851981 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 12 20:18:59.829309 systemd[1]: Reached target network.target. Apr 12 20:19:00.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.852043 ignition[868]: parsed url from cmdline: "" Apr 12 20:19:00.136360 iscsid[906]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 20:19:00.136360 iscsid[906]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Apr 12 20:19:00.136360 iscsid[906]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Apr 12 20:19:00.136360 iscsid[906]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 20:19:00.136360 iscsid[906]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 20:19:00.136360 iscsid[906]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 20:19:00.136360 iscsid[906]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 20:19:00.302394 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Apr 12 20:19:00.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:18:59.869258 unknown[868]: fetched base config from "system" Apr 12 20:18:59.852045 ignition[868]: no config URL provided Apr 12 20:18:59.869265 unknown[868]: fetched user config from "system" Apr 12 20:18:59.852048 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 20:18:59.887760 systemd[1]: Starting iscsiuio.service... Apr 12 20:18:59.852076 ignition[868]: parsing config with SHA512: 17fd3199c3b9b4afc9d8730587c1d07bc73a033580919f60e81158b29e715cc80803e7824e397a3003f729ef34ab7f02b23a7225c9009aeabedcfd71593c81b7 Apr 12 20:18:59.899526 systemd[1]: Started iscsiuio.service. Apr 12 20:18:59.869654 ignition[868]: fetch-offline: fetch-offline passed Apr 12 20:18:59.918400 systemd[1]: Finished ignition-fetch-offline.service. 
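The iscsid warnings above are harmless on this machine (no software iSCSI target is used), but they also spell out the remedy: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName=iqn... line. A minimal sketch of writing such a file; the IQN shown is a made-up example, not a value from this system.

    # Write a minimal /etc/iscsi/initiatorname.iscsi as the iscsid message asks.
    # The IQN below is hypothetical: iqn.<yyyy-mm>.<reversed domain>[:identifier].
    initiator_name = "iqn.2024-04.net.example:host01"

    def write_initiatorname(path: str = "/etc/iscsi/initiatorname.iscsi") -> None:
        with open(path, "w") as f:
            f.write(f"InitiatorName={initiator_name}\n")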
Apr 12 20:18:59.869659 ignition[868]: POST message to Packet Timeline Apr 12 20:18:59.976569 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 12 20:18:59.869667 ignition[868]: POST Status error: resource requires networking Apr 12 20:18:59.977080 systemd[1]: Starting ignition-kargs.service... Apr 12 20:18:59.869705 ignition[868]: Ignition finished successfully Apr 12 20:19:00.051623 systemd-networkd[879]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 20:19:00.054432 ignition[896]: Ignition 2.14.0 Apr 12 20:19:00.065790 systemd[1]: Starting iscsid.service... Apr 12 20:19:00.054436 ignition[896]: Stage: kargs Apr 12 20:19:00.075594 systemd[1]: Started iscsid.service. Apr 12 20:19:00.054495 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 20:19:00.094790 systemd[1]: Starting dracut-initqueue.service... Apr 12 20:19:00.054505 ignition[896]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Apr 12 20:19:00.106481 systemd[1]: Finished dracut-initqueue.service. Apr 12 20:19:00.055806 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 12 20:19:00.129372 systemd[1]: Reached target remote-fs-pre.target. Apr 12 20:19:00.057192 ignition[896]: kargs: kargs passed Apr 12 20:19:00.144381 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 20:19:00.057196 ignition[896]: POST message to Packet Timeline Apr 12 20:19:00.152320 systemd[1]: Reached target remote-fs.target. Apr 12 20:19:00.057206 ignition[896]: GET https://metadata.packet.net/metadata: attempt #1 Apr 12 20:19:00.152806 systemd[1]: Starting dracut-pre-mount.service... Apr 12 20:19:00.059505 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59401->[::1]:53: read: connection refused Apr 12 20:19:00.170587 systemd[1]: Finished dracut-pre-mount.service. Apr 12 20:19:00.259992 ignition[896]: GET https://metadata.packet.net/metadata: attempt #2 Apr 12 20:19:00.293590 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 20:19:00.260539 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37886->[::1]:53: read: connection refused Apr 12 20:19:00.322868 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 12 20:19:00.353183 systemd-networkd[879]: enp1s0f1np1: Link UP Apr 12 20:19:00.353644 systemd-networkd[879]: enp1s0f1np1: Gained carrier Apr 12 20:19:00.367708 systemd-networkd[879]: enp1s0f0np0: Link UP Apr 12 20:19:00.368080 systemd-networkd[879]: eno2: Link UP Apr 12 20:19:00.368444 systemd-networkd[879]: eno1: Link UP Apr 12 20:19:00.661058 ignition[896]: GET https://metadata.packet.net/metadata: attempt #3 Apr 12 20:19:00.662303 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33675->[::1]:53: read: connection refused Apr 12 20:19:01.117893 systemd-networkd[879]: enp1s0f0np0: Gained carrier Apr 12 20:19:01.127451 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Apr 12 20:19:01.154444 systemd-networkd[879]: enp1s0f0np0: DHCPv4 address 139.178.89.23/31, gateway 139.178.89.22 acquired from 145.40.83.140 Apr 12 20:19:01.414796 systemd-networkd[879]: enp1s0f1np1: Gained IPv6LL Apr 12 20:19:01.462836 ignition[896]: GET https://metadata.packet.net/metadata: attempt #4 Apr 12 20:19:01.464123 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37884->[::1]:53: read: connection refused Apr 12 20:19:02.566486 systemd-networkd[879]: enp1s0f0np0: Gained IPv6LL Apr 12 20:19:03.065380 ignition[896]: GET https://metadata.packet.net/metadata: attempt #5 Apr 12 20:19:03.066891 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52243->[::1]:53: read: connection refused Apr 12 20:19:06.270283 ignition[896]: GET https://metadata.packet.net/metadata: attempt #6 Apr 12 20:19:06.306526 ignition[896]: GET result: OK Apr 12 20:19:06.500700 ignition[896]: Ignition finished successfully Apr 12 20:19:06.504962 systemd[1]: Finished ignition-kargs.service. Apr 12 20:19:06.591728 kernel: kauditd_printk_skb: 3 callbacks suppressed Apr 12 20:19:06.591747 kernel: audit: type=1130 audit(1712953146.514:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:06.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:06.524762 ignition[922]: Ignition 2.14.0 Apr 12 20:19:06.517646 systemd[1]: Starting ignition-disks.service... Apr 12 20:19:06.524765 ignition[922]: Stage: disks Apr 12 20:19:06.524841 ignition[922]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 20:19:06.524850 ignition[922]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Apr 12 20:19:06.526266 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 12 20:19:06.527984 ignition[922]: disks: disks passed Apr 12 20:19:06.527987 ignition[922]: POST message to Packet Timeline Apr 12 20:19:06.527999 ignition[922]: GET https://metadata.packet.net/metadata: attempt #1 Apr 12 20:19:06.550952 ignition[922]: GET result: OK Apr 12 20:19:06.728782 ignition[922]: Ignition finished successfully Apr 12 20:19:06.731729 systemd[1]: Finished ignition-disks.service. 
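The repeated "GET error ... connection refused" lines followed by "GET result: OK" show Ignition simply retrying the Packet metadata endpoint until networking is actually up. A minimal retry loop in the same spirit, as a sketch only: Ignition's real client is written in Go and uses its own backoff policy, and fetch_with_retry is an invented helper name.

    import time
    import urllib.request

    # Keep retrying a metadata URL until it answers, roughly like the
    # "attempt #1 .. #6" sequence above. Sketch only; not Ignition's code.
    def fetch_with_retry(url: str, attempts: int = 6, delay: float = 2.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:            # DNS failures, refused connections, ...
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                time.sleep(delay * attempt)   # back off a little more each time
        raise RuntimeError(f"{url} unreachable after {attempts} attempts")

    # data = fetch_with_retry("https://metadata.packet.net/metadata")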
Apr 12 20:19:06.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:06.745784 systemd[1]: Reached target initrd-root-device.target. Apr 12 20:19:06.824514 kernel: audit: type=1130 audit(1712953146.744:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:06.810455 systemd[1]: Reached target local-fs-pre.target. Apr 12 20:19:06.810493 systemd[1]: Reached target local-fs.target. Apr 12 20:19:06.833467 systemd[1]: Reached target sysinit.target. Apr 12 20:19:06.847458 systemd[1]: Reached target basic.target. Apr 12 20:19:06.861141 systemd[1]: Starting systemd-fsck-root.service... Apr 12 20:19:06.879852 systemd-fsck[938]: ROOT: clean, 612/553520 files, 56019/553472 blocks Apr 12 20:19:06.893557 systemd[1]: Finished systemd-fsck-root.service. Apr 12 20:19:06.986939 kernel: audit: type=1130 audit(1712953146.900:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:06.986953 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 20:19:06.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:06.907156 systemd[1]: Mounting sysroot.mount... Apr 12 20:19:06.994929 systemd[1]: Mounted sysroot.mount. Apr 12 20:19:07.008541 systemd[1]: Reached target initrd-root-fs.target. Apr 12 20:19:07.016256 systemd[1]: Mounting sysroot-usr.mount... Apr 12 20:19:07.041047 systemd[1]: Starting flatcar-metadata-hostname.service... Apr 12 20:19:07.049761 systemd[1]: Starting flatcar-static-network.service... Apr 12 20:19:07.065346 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 20:19:07.065394 systemd[1]: Reached target ignition-diskful.target. Apr 12 20:19:07.083389 systemd[1]: Mounted sysroot-usr.mount. Apr 12 20:19:07.108316 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 20:19:07.121094 systemd[1]: Starting initrd-setup-root.service... Apr 12 20:19:07.260659 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (949) Apr 12 20:19:07.260675 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 20:19:07.260684 kernel: BTRFS info (device sda6): using free space tree Apr 12 20:19:07.260691 kernel: BTRFS info (device sda6): has skinny extents Apr 12 20:19:07.260700 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 20:19:07.260712 initrd-setup-root[956]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 20:19:07.324393 kernel: audit: type=1130 audit(1712953147.268:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 20:19:07.324508 coreos-metadata[945]: Apr 12 20:19:07.211 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 12 20:19:07.324508 coreos-metadata[945]: Apr 12 20:19:07.233 INFO Fetch successful Apr 12 20:19:07.324508 coreos-metadata[945]: Apr 12 20:19:07.252 INFO wrote hostname ci-3510.3.3-a-3fbc403199 to /sysroot/etc/hostname Apr 12 20:19:07.532494 kernel: audit: type=1130 audit(1712953147.332:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.532511 kernel: audit: type=1130 audit(1712953147.397:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.532519 kernel: audit: type=1131 audit(1712953147.397:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.187919 systemd[1]: Finished initrd-setup-root.service. Apr 12 20:19:07.547500 coreos-metadata[946]: Apr 12 20:19:07.211 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 12 20:19:07.547500 coreos-metadata[946]: Apr 12 20:19:07.233 INFO Fetch successful Apr 12 20:19:07.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.602430 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Apr 12 20:19:07.643462 kernel: audit: type=1130 audit(1712953147.573:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.270614 systemd[1]: Finished flatcar-metadata-hostname.service. Apr 12 20:19:07.652485 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 20:19:07.333559 systemd[1]: flatcar-static-network.service: Deactivated successfully. Apr 12 20:19:07.673424 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 20:19:07.333599 systemd[1]: Finished flatcar-static-network.service. 
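flatcar-metadata-hostname above fetches the Packet metadata and writes the instance hostname into /sysroot/etc/hostname so the real root sees it after switch-root. A tiny sketch of that final write, assuming the hostname has already been extracted from the metadata; write_hostname is an invented name and the metadata parsing itself is omitted.

    # Persist a hostname obtained from provider metadata, as the
    # "wrote hostname ... to /sysroot/etc/hostname" line above describes.
    def write_hostname(hostname: str, root: str = "/sysroot") -> None:
        with open(f"{root}/etc/hostname", "w") as f:
            f.write(hostname + "\n")

    # write_hostname("ci-3510.3.3-a-3fbc403199")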
Apr 12 20:19:07.692533 ignition[1021]: INFO : Ignition 2.14.0 Apr 12 20:19:07.692533 ignition[1021]: INFO : Stage: mount Apr 12 20:19:07.692533 ignition[1021]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 20:19:07.692533 ignition[1021]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Apr 12 20:19:07.692533 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 12 20:19:07.692533 ignition[1021]: INFO : mount: mount passed Apr 12 20:19:07.692533 ignition[1021]: INFO : POST message to Packet Timeline Apr 12 20:19:07.692533 ignition[1021]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 12 20:19:07.692533 ignition[1021]: INFO : GET result: OK Apr 12 20:19:07.419136 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 20:19:07.519908 systemd[1]: Starting ignition-mount.service... Apr 12 20:19:07.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.854280 kernel: audit: type=1130 audit(1712953147.795:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:07.854295 ignition[1021]: INFO : Ignition finished successfully Apr 12 20:19:07.539877 systemd[1]: Starting sysroot-boot.service... Apr 12 20:19:07.554739 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 20:19:07.916333 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1036) Apr 12 20:19:07.916343 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 20:19:07.554785 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 20:19:07.557909 systemd[1]: Finished sysroot-boot.service. Apr 12 20:19:07.985323 kernel: BTRFS info (device sda6): using free space tree Apr 12 20:19:07.985334 kernel: BTRFS info (device sda6): has skinny extents Apr 12 20:19:07.985342 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 20:19:07.784209 systemd[1]: Finished ignition-mount.service. Apr 12 20:19:07.798405 systemd[1]: Starting ignition-files.service... Apr 12 20:19:07.864082 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 20:19:08.000736 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Apr 12 20:19:08.032794 unknown[1055]: wrote ssh authorized keys file for user: core Apr 12 20:19:08.045450 ignition[1055]: INFO : Ignition 2.14.0 Apr 12 20:19:08.045450 ignition[1055]: INFO : Stage: files Apr 12 20:19:08.045450 ignition[1055]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 20:19:08.045450 ignition[1055]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Apr 12 20:19:08.045450 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 12 20:19:08.045450 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Apr 12 20:19:08.045450 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 20:19:08.045450 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 20:19:08.045450 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 20:19:08.045450 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 20:19:08.045450 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 20:19:08.045450 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 20:19:08.045450 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 12 20:19:08.205452 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 20:19:08.205452 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 20:19:08.205452 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 20:19:08.205452 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Apr 12 20:19:08.658738 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 20:19:08.762365 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Apr 12 20:19:08.788507 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 20:19:08.788507 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 20:19:08.788507 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Apr 12 20:19:09.184665 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 20:19:09.235503 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Apr 12 20:19:09.260491 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 20:19:09.260491 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Apr 12 20:19:09.260491 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Apr 12 20:19:09.483547 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Apr 12 20:19:16.866020 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Apr 12 20:19:16.891580 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Apr 12 20:19:16.891580 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Apr 12 20:19:16.891580 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Apr 12 20:19:17.196444 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Apr 12 20:19:36.451398 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Apr 12 20:19:36.477495 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Apr 12 20:19:36.477495 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 20:19:36.477495 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1 Apr 12 20:19:36.710802 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Apr 12 20:19:44.671243 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3 Apr 12 20:19:44.671243 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Apr 12 20:19:44.711531 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Apr 12 20:19:44.711531 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Apr 12 20:19:44.711531 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 20:19:44.711531 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 12 20:19:45.139955 ignition[1055]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET result: OK Apr 12 20:19:45.191334 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Apr 12 20:19:45.207636 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Apr 12 20:19:45.432548 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1075) Apr 12 20:19:45.432651 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2862891550" Apr 12 20:19:45.432651 ignition[1055]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2862891550": device or resource busy Apr 12 20:19:45.432651 ignition[1055]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2862891550", trying btrfs: device or resource busy Apr 12 20:19:45.432651 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2862891550" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2862891550" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2862891550" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2862891550" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] 
writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(15): [started] processing unit "packet-phone-home.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service" Apr 12 20:19:45.432651 ignition[1055]: INFO : files: op(18): [started] processing unit "prepare-critools.service" Apr 12 20:19:45.941478 kernel: audit: type=1130 audit(1712953185.611:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.941494 kernel: audit: type=1130 audit(1712953185.739:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.941502 kernel: audit: type=1130 audit(1712953185.807:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.941512 kernel: audit: type=1131 audit(1712953185.807:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(18): [finished] processing unit "prepare-critools.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1e): [started] setting preset to enabled for "packet-phone-home.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1e): [finished] setting preset to enabled for "packet-phone-home.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 20:19:45.941591 ignition[1055]: INFO : files: files passed Apr 12 20:19:46.549642 kernel: audit: type=1130 audit(1712953185.988:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.549732 kernel: audit: type=1131 audit(1712953185.988:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.549779 kernel: audit: type=1130 audit(1712953186.168:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 20:19:46.549821 kernel: audit: type=1131 audit(1712953186.331:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.600653 systemd[1]: Finished ignition-files.service. Apr 12 20:19:46.564710 ignition[1055]: INFO : POST message to Packet Timeline Apr 12 20:19:46.564710 ignition[1055]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 12 20:19:46.564710 ignition[1055]: INFO : GET result: OK Apr 12 20:19:46.564710 ignition[1055]: INFO : Ignition finished successfully Apr 12 20:19:46.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.619373 systemd[1]: Starting initrd-setup-root-after-ignition.service... Apr 12 20:19:46.709569 kernel: audit: type=1131 audit(1712953186.619:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.709618 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 12 20:19:46.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.681504 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 20:19:46.804595 kernel: audit: type=1131 audit(1712953186.716:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.681900 systemd[1]: Starting ignition-quench.service... Apr 12 20:19:45.711629 systemd[1]: Finished initrd-setup-root-after-ignition.service. Apr 12 20:19:45.740787 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 12 20:19:45.740876 systemd[1]: Finished ignition-quench.service. Apr 12 20:19:45.808519 systemd[1]: Reached target ignition-complete.target. Apr 12 20:19:45.930924 systemd[1]: Starting initrd-parse-etc.service... 
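The Ignition entries above show the files stage closing out: after writing the configured files and units, Ignition POSTs its status to the Packet Timeline and fetches https://metadata.packet.net/metadata. A minimal Python sketch of that same metadata fetch follows; the 5-second timeout and the Accept header are illustrative assumptions, not values taken from the log, and the endpoint is assumed to return a JSON object.

    # Sketch: issue the same GET against the Packet metadata endpoint that the
    # Ignition log lines above record. Timeout and Accept header are assumptions.
    import json
    import urllib.request

    METADATA_URL = "https://metadata.packet.net/metadata"

    def fetch_metadata(url: str = METADATA_URL) -> dict:
        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        # The metadata document is instance-specific; print only its top-level keys.
        print(sorted(fetch_metadata().keys()))
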
Apr 12 20:19:46.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.953476 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 12 20:19:46.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.953516 systemd[1]: Finished initrd-parse-etc.service. Apr 12 20:19:46.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:45.989472 systemd[1]: Reached target initrd-fs.target. Apr 12 20:19:46.945304 ignition[1105]: INFO : Ignition 2.14.0 Apr 12 20:19:46.945304 ignition[1105]: INFO : Stage: umount Apr 12 20:19:46.945304 ignition[1105]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 20:19:46.945304 ignition[1105]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Apr 12 20:19:46.945304 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 12 20:19:46.945304 ignition[1105]: INFO : umount: umount passed Apr 12 20:19:46.945304 ignition[1105]: INFO : POST message to Packet Timeline Apr 12 20:19:46.945304 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 12 20:19:46.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:47.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:47.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:47.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.111472 systemd[1]: Reached target initrd.target. Apr 12 20:19:47.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:47.087690 ignition[1105]: INFO : GET result: OK Apr 12 20:19:46.111562 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Apr 12 20:19:46.111929 systemd[1]: Starting dracut-pre-pivot.service... Apr 12 20:19:46.152653 systemd[1]: Finished dracut-pre-pivot.service. Apr 12 20:19:46.170218 systemd[1]: Starting initrd-cleanup.service... Apr 12 20:19:47.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.239345 systemd[1]: Stopped target nss-lookup.target. 
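The umount-stage entries above show Ignition 2.14.0 reading /usr/lib/ignition/base.d/base.ign and logging a SHA512 of the parsed config. A short sketch of that digest follows; it assumes the hash is taken over the raw bytes of the config file, a detail the log itself does not spell out.

    # Sketch: compute a SHA512 digest of the Ignition base config, in the style of
    # the "parsing config with SHA512: ..." line above. Hashing the raw file bytes
    # is an assumption.
    import hashlib
    import pathlib

    BASE_IGN = "/usr/lib/ignition/base.d/base.ign"

    def config_sha512(path: str = BASE_IGN) -> str:
        return hashlib.sha512(pathlib.Path(path).read_bytes()).hexdigest()

    if __name__ == "__main__":
        print("parsing config with SHA512:", config_sha512())
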
Apr 12 20:19:47.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.268596 systemd[1]: Stopped target remote-cryptsetup.target. Apr 12 20:19:47.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:47.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:47.180000 audit: BPF prog-id=6 op=UNLOAD Apr 12 20:19:47.195517 ignition[1105]: INFO : Ignition finished successfully Apr 12 20:19:46.282673 systemd[1]: Stopped target timers.target. Apr 12 20:19:46.311679 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 12 20:19:47.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.311881 systemd[1]: Stopped dracut-pre-pivot.service. Apr 12 20:19:47.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.333162 systemd[1]: Stopped target initrd.target. Apr 12 20:19:47.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.409573 systemd[1]: Stopped target basic.target. Apr 12 20:19:47.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.423639 systemd[1]: Stopped target ignition-complete.target. Apr 12 20:19:47.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.443668 systemd[1]: Stopped target ignition-diskful.target. Apr 12 20:19:47.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.473722 systemd[1]: Stopped target initrd-root-device.target. Apr 12 20:19:47.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.494881 systemd[1]: Stopped target remote-fs.target. Apr 12 20:19:47.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.518867 systemd[1]: Stopped target remote-fs-pre.target. Apr 12 20:19:46.542918 systemd[1]: Stopped target sysinit.target. Apr 12 20:19:47.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 12 20:19:46.557821 systemd[1]: Stopped target local-fs.target. Apr 12 20:19:46.572906 systemd[1]: Stopped target local-fs-pre.target. Apr 12 20:19:46.588874 systemd[1]: Stopped target swap.target. Apr 12 20:19:47.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.605881 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 12 20:19:47.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.606270 systemd[1]: Stopped dracut-pre-mount.service. Apr 12 20:19:47.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.621097 systemd[1]: Stopped target cryptsetup.target. Apr 12 20:19:46.699523 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 20:19:47.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.699596 systemd[1]: Stopped dracut-initqueue.service. Apr 12 20:19:47.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:47.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.717608 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 12 20:19:46.717683 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 20:19:46.786602 systemd[1]: Stopped target paths.target. Apr 12 20:19:46.811568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 20:19:46.815463 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 20:19:46.826540 systemd[1]: Stopped target slices.target. Apr 12 20:19:46.841570 systemd[1]: Stopped target sockets.target. Apr 12 20:19:47.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:46.858608 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 20:19:46.858721 systemd[1]: Closed iscsid.socket. Apr 12 20:19:46.872906 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 12 20:19:46.873152 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Apr 12 20:19:46.891992 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 20:19:46.892370 systemd[1]: Stopped ignition-files.service. Apr 12 20:19:46.908009 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 12 20:19:46.908394 systemd[1]: Stopped flatcar-metadata-hostname.service. Apr 12 20:19:47.682862 iscsid[906]: iscsid shutting down. Apr 12 20:19:46.926142 systemd[1]: Stopping ignition-mount.service... Apr 12 20:19:46.937474 systemd[1]: Stopping iscsiuio.service... 
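The initrd teardown above is recorded twice: once as systemd "Stopped ..." messages and once as audit SERVICE_START/SERVICE_STOP records carrying msg='unit=... res=success'. The sketch below pulls the unit name and action out of records in that shape; the regex is keyed to the userspace audit format seen here and deliberately ignores the kernel-echoed "audit: type=113x" variants. Feeding this stretch of the journal through it on stdin lists the stop order of the initrd units.

    # Sketch: extract (unit, action) pairs from audit SERVICE_START/SERVICE_STOP
    # records shaped like the ones above. Reads log lines from stdin.
    import re
    import sys

    AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+)")

    def service_events(lines):
        for line in lines:
            m = AUDIT_RE.search(line)
            if m:
                yield m.group(2), m.group(1)

    if __name__ == "__main__":
        for unit, action in service_events(sys.stdin):
            print(f"{action:13s} {unit}")
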
Apr 12 20:19:46.952432 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 12 20:19:46.952575 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 20:19:46.973242 systemd[1]: Stopping sysroot-boot.service... Apr 12 20:19:46.992431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 20:19:46.992787 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 20:19:47.022038 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 20:19:47.022426 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 20:19:47.049305 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 20:19:47.051585 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 20:19:47.051826 systemd[1]: Stopped iscsiuio.service. Apr 12 20:19:47.064908 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 12 20:19:47.065144 systemd[1]: Stopped sysroot-boot.service. Apr 12 20:19:47.082084 systemd[1]: Stopped target network.target. Apr 12 20:19:47.095605 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 20:19:47.095712 systemd[1]: Closed iscsiuio.socket. Apr 12 20:19:47.683237 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Apr 12 20:19:47.110839 systemd[1]: Stopping systemd-networkd.service... Apr 12 20:19:47.117389 systemd-networkd[879]: enp1s0f1np1: DHCPv6 lease lost Apr 12 20:19:47.128815 systemd[1]: Stopping systemd-resolved.service... Apr 12 20:19:47.129449 systemd-networkd[879]: enp1s0f0np0: DHCPv6 lease lost Apr 12 20:19:47.144474 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 12 20:19:47.682000 audit: BPF prog-id=9 op=UNLOAD Apr 12 20:19:47.144720 systemd[1]: Stopped systemd-resolved.service. Apr 12 20:19:47.159889 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 20:19:47.160260 systemd[1]: Stopped systemd-networkd.service. Apr 12 20:19:47.175374 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 12 20:19:47.175420 systemd[1]: Finished initrd-cleanup.service. Apr 12 20:19:47.181824 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 20:19:47.181841 systemd[1]: Closed systemd-networkd.socket. Apr 12 20:19:47.204846 systemd[1]: Stopping network-cleanup.service... Apr 12 20:19:47.219410 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 12 20:19:47.219488 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 20:19:47.234555 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 20:19:47.234648 systemd[1]: Stopped systemd-sysctl.service. Apr 12 20:19:47.252967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 12 20:19:47.253116 systemd[1]: Stopped systemd-modules-load.service. Apr 12 20:19:47.272469 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 20:19:47.273098 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 20:19:47.273146 systemd[1]: Stopped ignition-mount.service. Apr 12 20:19:47.285965 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 20:19:47.286006 systemd[1]: Stopped ignition-disks.service. Apr 12 20:19:47.301448 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 20:19:47.301490 systemd[1]: Stopped ignition-kargs.service. Apr 12 20:19:47.318509 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 20:19:47.318577 systemd[1]: Stopped ignition-setup.service. Apr 12 20:19:47.335442 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Apr 12 20:19:47.335493 systemd[1]: Stopped initrd-setup-root.service. Apr 12 20:19:47.351620 systemd[1]: Stopping systemd-udevd.service... Apr 12 20:19:47.365893 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 20:19:47.366080 systemd[1]: Stopped systemd-udevd.service. Apr 12 20:19:47.382727 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 20:19:47.382843 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 20:19:47.396536 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 20:19:47.396641 systemd[1]: Closed systemd-udevd-kernel.socket. Apr 12 20:19:47.411486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 20:19:47.411619 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 20:19:47.427566 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 20:19:47.427691 systemd[1]: Stopped dracut-cmdline.service. Apr 12 20:19:47.442547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 20:19:47.442677 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 20:19:47.460341 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Apr 12 20:19:47.473425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 20:19:47.473585 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 20:19:47.489458 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 20:19:47.489694 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Apr 12 20:19:47.561146 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 20:19:47.561410 systemd[1]: Stopped network-cleanup.service. Apr 12 20:19:47.572865 systemd[1]: Reached target initrd-switch-root.target. Apr 12 20:19:47.591124 systemd[1]: Starting initrd-switch-root.service... Apr 12 20:19:47.629651 systemd[1]: Switching root. Apr 12 20:19:47.684276 systemd-journald[267]: Journal stopped Apr 12 20:19:51.679598 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 20:19:51.679611 kernel: SELinux: Class anon_inode not defined in policy. Apr 12 20:19:51.679619 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 20:19:51.679625 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 20:19:51.679630 kernel: SELinux: policy capability open_perms=1 Apr 12 20:19:51.679635 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 20:19:51.679641 kernel: SELinux: policy capability always_check_network=0 Apr 12 20:19:51.679646 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 20:19:51.679651 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 20:19:51.679657 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 20:19:51.679663 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 20:19:51.679669 systemd[1]: Successfully loaded SELinux policy in 322.092ms. Apr 12 20:19:51.679675 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.275ms. Apr 12 20:19:51.679682 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 20:19:51.679690 systemd[1]: Detected architecture x86-64. Apr 12 20:19:51.679695 systemd[1]: Detected first boot. Apr 12 20:19:51.679701 systemd[1]: Hostname set to . 
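Once the root switch completes, systemd 252 reports its compile-time feature string (+PAM +AUDIT +SELINUX -APPARMOR ... default-hierarchy=unified) alongside the SELinux policy load time. That string is a space-separated list of +NAME/-NAME flags plus key=value settings; a small parsing sketch follows, with a shortened sample string standing in for the full list above.

    # Sketch: split a systemd feature string into enabled (+NAME) and disabled
    # (-NAME) sets; key=value tokens such as "default-hierarchy=unified" are
    # simply skipped here.
    def parse_features(feature_string: str):
        enabled, disabled = set(), set()
        for token in feature_string.split():
            if token.startswith("+"):
                enabled.add(token[1:])
            elif token.startswith("-"):
                disabled.add(token[1:])
        return enabled, disabled

    if __name__ == "__main__":
        en, dis = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +SECCOMP -TPM2")
        print("enabled:", sorted(en))
        print("disabled:", sorted(dis))
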
Apr 12 20:19:51.679707 systemd[1]: Initializing machine ID from random generator. Apr 12 20:19:51.679713 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Apr 12 20:19:51.679719 systemd[1]: Populated /etc with preset unit settings. Apr 12 20:19:51.679725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 20:19:51.679732 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 20:19:51.679739 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 20:19:51.679746 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 20:19:51.679751 systemd[1]: Stopped iscsid.service. Apr 12 20:19:51.679757 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 12 20:19:51.679764 systemd[1]: Stopped initrd-switch-root.service. Apr 12 20:19:51.679770 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 12 20:19:51.679777 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 20:19:51.679783 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 20:19:51.679789 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Apr 12 20:19:51.679795 systemd[1]: Created slice system-getty.slice. Apr 12 20:19:51.679801 systemd[1]: Created slice system-modprobe.slice. Apr 12 20:19:51.679824 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 20:19:51.679844 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 20:19:51.679851 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 20:19:51.679858 systemd[1]: Created slice user.slice. Apr 12 20:19:51.679864 systemd[1]: Started systemd-ask-password-console.path. Apr 12 20:19:51.679870 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 20:19:51.679876 systemd[1]: Set up automount boot.automount. Apr 12 20:19:51.679883 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 20:19:51.679890 systemd[1]: Stopped target initrd-switch-root.target. Apr 12 20:19:51.679896 systemd[1]: Stopped target initrd-fs.target. Apr 12 20:19:51.679902 systemd[1]: Stopped target initrd-root-fs.target. Apr 12 20:19:51.679930 systemd[1]: Reached target integritysetup.target. Apr 12 20:19:51.679952 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 20:19:51.679958 systemd[1]: Reached target remote-fs.target. Apr 12 20:19:51.679964 systemd[1]: Reached target slices.target. Apr 12 20:19:51.679971 systemd[1]: Reached target swap.target. Apr 12 20:19:51.679977 systemd[1]: Reached target torcx.target. Apr 12 20:19:51.680008 systemd[1]: Reached target veritysetup.target. Apr 12 20:19:51.680014 systemd[1]: Listening on systemd-coredump.socket. Apr 12 20:19:51.680036 systemd[1]: Listening on systemd-initctl.socket. Apr 12 20:19:51.680043 systemd[1]: Listening on systemd-networkd.socket. Apr 12 20:19:51.680049 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 20:19:51.680055 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 20:19:51.680062 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 20:19:51.680068 systemd[1]: Mounting dev-hugepages.mount... Apr 12 20:19:51.680075 systemd[1]: Mounting dev-mqueue.mount... 
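The warnings above flag two deprecated directives in locksmithd.service (CPUShares= and MemoryLimit=) and name their replacements (CPUWeight= and MemoryMax=). The sketch below flags those same directives in a unit file; it only reports the replacement names given in the log and does not attempt to convert the values.

    # Sketch: report occurrences of the deprecated directives mentioned above in a
    # systemd unit file, along with the replacements the log names.
    import sys

    REPLACEMENTS = {"CPUShares": "CPUWeight", "MemoryLimit": "MemoryMax"}

    def flag_deprecated(path):
        with open(path) as unit:
            for lineno, line in enumerate(unit, 1):
                key = line.split("=", 1)[0].strip()
                if key in REPLACEMENTS:
                    print(f"{path}:{lineno}: {key}= is deprecated; "
                          f"use {REPLACEMENTS[key]}=")

    if __name__ == "__main__":
        flag_deprecated(sys.argv[1] if len(sys.argv) > 1
                        else "/usr/lib/systemd/system/locksmithd.service")
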
Apr 12 20:19:51.680082 systemd[1]: Mounting media.mount... Apr 12 20:19:51.680088 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 20:19:51.680094 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 20:19:51.680101 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 20:19:51.680107 systemd[1]: Mounting tmp.mount... Apr 12 20:19:51.680113 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 20:19:51.680120 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 20:19:51.680126 systemd[1]: Starting kmod-static-nodes.service... Apr 12 20:19:51.680133 systemd[1]: Starting modprobe@configfs.service... Apr 12 20:19:51.680140 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 20:19:51.680146 systemd[1]: Starting modprobe@drm.service... Apr 12 20:19:51.680152 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 20:19:51.680159 systemd[1]: Starting modprobe@fuse.service... Apr 12 20:19:51.680165 kernel: fuse: init (API version 7.34) Apr 12 20:19:51.680171 systemd[1]: Starting modprobe@loop.service... Apr 12 20:19:51.680177 kernel: loop: module loaded Apr 12 20:19:51.680184 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 12 20:19:51.680191 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 12 20:19:51.680197 systemd[1]: Stopped systemd-fsck-root.service. Apr 12 20:19:51.680204 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 12 20:19:51.680210 kernel: kauditd_printk_skb: 71 callbacks suppressed Apr 12 20:19:51.680216 kernel: audit: type=1131 audit(1712953191.320:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.680222 systemd[1]: Stopped systemd-fsck-usr.service. Apr 12 20:19:51.680229 kernel: audit: type=1131 audit(1712953191.408:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.680254 systemd[1]: Stopped systemd-journald.service. Apr 12 20:19:51.680261 kernel: audit: type=1130 audit(1712953191.472:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.680283 kernel: audit: type=1131 audit(1712953191.472:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.680289 kernel: audit: type=1334 audit(1712953191.557:118): prog-id=21 op=LOAD Apr 12 20:19:51.680295 kernel: audit: type=1334 audit(1712953191.576:119): prog-id=22 op=LOAD Apr 12 20:19:51.680301 kernel: audit: type=1334 audit(1712953191.594:120): prog-id=23 op=LOAD Apr 12 20:19:51.680306 kernel: audit: type=1334 audit(1712953191.612:121): prog-id=19 op=UNLOAD Apr 12 20:19:51.680312 systemd[1]: Starting systemd-journald.service... Apr 12 20:19:51.680319 kernel: audit: type=1334 audit(1712953191.612:122): prog-id=20 op=UNLOAD Apr 12 20:19:51.680325 systemd[1]: Starting systemd-modules-load.service... 
Apr 12 20:19:51.680331 kernel: audit: type=1305 audit(1712953191.674:123): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 20:19:51.680340 systemd-journald[1259]: Journal started Apr 12 20:19:51.680364 systemd-journald[1259]: Runtime Journal (/run/log/journal/30f187e3a2914ac5b9cf250a2cd4b007) is 8.0M, max 640.1M, 632.1M free. Apr 12 20:19:48.097000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 20:19:48.369000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Apr 12 20:19:48.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 20:19:48.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 20:19:48.374000 audit: BPF prog-id=10 op=LOAD Apr 12 20:19:48.374000 audit: BPF prog-id=10 op=UNLOAD Apr 12 20:19:48.375000 audit: BPF prog-id=11 op=LOAD Apr 12 20:19:48.375000 audit: BPF prog-id=11 op=UNLOAD Apr 12 20:19:48.471000 audit[1148]: AVC avc: denied { associate } for pid=1148 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 20:19:48.471000 audit[1148]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d98a2 a1=c00015adf8 a2=c0001630c0 a3=32 items=0 ppid=1131 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 20:19:48.471000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 20:19:48.496000 audit[1148]: AVC avc: denied { associate } for pid=1148 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Apr 12 20:19:48.496000 audit[1148]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9979 a2=1ed a3=0 items=2 ppid=1131 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 20:19:48.496000 audit: CWD cwd="/" Apr 12 20:19:48.496000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:48.496000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:48.496000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 20:19:50.023000 audit: BPF prog-id=12 op=LOAD Apr 12 20:19:50.023000 audit: BPF prog-id=3 op=UNLOAD Apr 12 20:19:50.023000 audit: BPF prog-id=13 op=LOAD Apr 12 20:19:50.023000 audit: BPF prog-id=14 op=LOAD Apr 12 20:19:50.023000 audit: BPF prog-id=4 op=UNLOAD Apr 12 20:19:50.023000 audit: BPF prog-id=5 op=UNLOAD Apr 12 20:19:50.023000 audit: BPF prog-id=15 op=LOAD Apr 12 20:19:50.024000 audit: BPF prog-id=12 op=UNLOAD Apr 12 20:19:50.024000 audit: BPF prog-id=16 op=LOAD Apr 12 20:19:50.024000 audit: BPF prog-id=17 op=LOAD Apr 12 20:19:50.024000 audit: BPF prog-id=13 op=UNLOAD Apr 12 20:19:50.024000 audit: BPF prog-id=14 op=UNLOAD Apr 12 20:19:50.024000 audit: BPF prog-id=18 op=LOAD Apr 12 20:19:50.024000 audit: BPF prog-id=15 op=UNLOAD Apr 12 20:19:50.025000 audit: BPF prog-id=19 op=LOAD Apr 12 20:19:50.025000 audit: BPF prog-id=20 op=LOAD Apr 12 20:19:50.025000 audit: BPF prog-id=16 op=UNLOAD Apr 12 20:19:50.025000 audit: BPF prog-id=17 op=UNLOAD Apr 12 20:19:50.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:50.071000 audit: BPF prog-id=18 op=UNLOAD Apr 12 20:19:50.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:50.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:50.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 20:19:51.557000 audit: BPF prog-id=21 op=LOAD Apr 12 20:19:51.576000 audit: BPF prog-id=22 op=LOAD Apr 12 20:19:51.594000 audit: BPF prog-id=23 op=LOAD Apr 12 20:19:51.612000 audit: BPF prog-id=19 op=UNLOAD Apr 12 20:19:51.612000 audit: BPF prog-id=20 op=UNLOAD Apr 12 20:19:51.674000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 20:19:50.022707 systemd[1]: Queued start job for default target multi-user.target. Apr 12 20:19:48.469502 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 20:19:50.026657 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 12 20:19:48.469999 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 20:19:48.470010 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 20:19:48.470029 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Apr 12 20:19:48.470034 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="skipped missing lower profile" missing profile=oem Apr 12 20:19:48.470050 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Apr 12 20:19:48.470058 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Apr 12 20:19:48.470165 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Apr 12 20:19:48.470187 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 20:19:48.470195 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 20:19:48.471592 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Apr 12 20:19:48.471611 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Apr 12 20:19:48.471621 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3 Apr 12 
20:19:48.471630 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Apr 12 20:19:48.471640 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3 Apr 12 20:19:48.471647 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Apr 12 20:19:49.663953 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:49Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 20:19:49.664107 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:49Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 20:19:49.664505 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:49Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 20:19:49.664637 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:49Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 20:19:49.664683 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:49Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Apr 12 20:19:49.664747 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-04-12T20:19:49Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Apr 12 20:19:51.674000 audit[1259]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffce705ee10 a2=4000 a3=7ffce705eeac items=0 ppid=1 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 20:19:51.674000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 20:19:51.759291 systemd[1]: Starting systemd-network-generator.service... Apr 12 20:19:51.786299 systemd[1]: Starting systemd-remount-fs.service... Apr 12 20:19:51.813280 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 20:19:51.856044 systemd[1]: verity-setup.service: Deactivated successfully. Apr 12 20:19:51.856066 systemd[1]: Stopped verity-setup.service. 
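The audit PROCTITLE records interleaved with the torcx-generator output above carry the process command line as a hex string with NUL-separated arguments. Decoding the leading portion of one of them recovers the generator path, as the sketch below shows; the sample string is just the first argument of the proctitle value logged above.

    # Sketch: decode an audit PROCTITLE hex value into its argument list
    # (arguments are NUL-separated in the raw bytes).
    def decode_proctitle(hex_title):
        raw = bytes.fromhex(hex_title)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    if __name__ == "__main__":
        sample = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E65726174"
                  "6F72732F746F7263782D67656E657261746F72")
        print(decode_proctitle(sample))
        # -> ['/usr/lib/systemd/system-generators/torcx-generator']
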
Apr 12 20:19:51.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.901275 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 20:19:51.921440 systemd[1]: Started systemd-journald.service. Apr 12 20:19:51.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.929899 systemd[1]: Mounted dev-hugepages.mount. Apr 12 20:19:51.937516 systemd[1]: Mounted dev-mqueue.mount. Apr 12 20:19:51.944515 systemd[1]: Mounted media.mount. Apr 12 20:19:51.951508 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 20:19:51.960500 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 20:19:51.969487 systemd[1]: Mounted tmp.mount. Apr 12 20:19:51.976584 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 20:19:51.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.986619 systemd[1]: Finished kmod-static-nodes.service. Apr 12 20:19:51.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:51.996644 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 12 20:19:51.996775 systemd[1]: Finished modprobe@configfs.service. Apr 12 20:19:52.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.006742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 12 20:19:52.006916 systemd[1]: Finished modprobe@dm_mod.service. Apr 12 20:19:52.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.015973 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 20:19:52.016224 systemd[1]: Finished modprobe@drm.service. Apr 12 20:19:52.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 12 20:19:52.025092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 20:19:52.025423 systemd[1]: Finished modprobe@efi_pstore.service. Apr 12 20:19:52.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.034145 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 20:19:52.034533 systemd[1]: Finished modprobe@fuse.service. Apr 12 20:19:52.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.043094 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 12 20:19:52.043425 systemd[1]: Finished modprobe@loop.service. Apr 12 20:19:52.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.052108 systemd[1]: Finished systemd-modules-load.service. Apr 12 20:19:52.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.061095 systemd[1]: Finished systemd-network-generator.service. Apr 12 20:19:52.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.070082 systemd[1]: Finished systemd-remount-fs.service. Apr 12 20:19:52.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.079175 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 20:19:52.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.088706 systemd[1]: Reached target network-pre.target. Apr 12 20:19:52.100308 systemd[1]: Mounting sys-fs-fuse-connections.mount... Apr 12 20:19:52.108985 systemd[1]: Mounting sys-kernel-config.mount... Apr 12 20:19:52.116451 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Apr 12 20:19:52.117513 systemd[1]: Starting systemd-hwdb-update.service... Apr 12 20:19:52.124916 systemd[1]: Starting systemd-journal-flush.service... Apr 12 20:19:52.128976 systemd-journald[1259]: Time spent on flushing to /var/log/journal/30f187e3a2914ac5b9cf250a2cd4b007 is 16.300ms for 1628 entries. Apr 12 20:19:52.128976 systemd-journald[1259]: System Journal (/var/log/journal/30f187e3a2914ac5b9cf250a2cd4b007) is 8.0M, max 195.6M, 187.6M free. Apr 12 20:19:52.176104 systemd-journald[1259]: Received client request to flush runtime journal. Apr 12 20:19:52.141354 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 12 20:19:52.142009 systemd[1]: Starting systemd-random-seed.service... Apr 12 20:19:52.156353 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Apr 12 20:19:52.156963 systemd[1]: Starting systemd-sysctl.service... Apr 12 20:19:52.163829 systemd[1]: Starting systemd-sysusers.service... Apr 12 20:19:52.170848 systemd[1]: Starting systemd-udev-settle.service... Apr 12 20:19:52.178411 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 20:19:52.186387 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 20:19:52.194472 systemd[1]: Finished systemd-journal-flush.service. Apr 12 20:19:52.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.202485 systemd[1]: Finished systemd-random-seed.service. Apr 12 20:19:52.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.210438 systemd[1]: Finished systemd-sysctl.service. Apr 12 20:19:52.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.218438 systemd[1]: Finished systemd-sysusers.service. Apr 12 20:19:52.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.227408 systemd[1]: Reached target first-boot-complete.target. Apr 12 20:19:52.235575 udevadm[1275]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 12 20:19:52.425227 systemd[1]: Finished systemd-hwdb-update.service. Apr 12 20:19:52.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.432000 audit: BPF prog-id=24 op=LOAD Apr 12 20:19:52.432000 audit: BPF prog-id=25 op=LOAD Apr 12 20:19:52.432000 audit: BPF prog-id=7 op=UNLOAD Apr 12 20:19:52.432000 audit: BPF prog-id=8 op=UNLOAD Apr 12 20:19:52.434536 systemd[1]: Starting systemd-udevd.service... Apr 12 20:19:52.445972 systemd-udevd[1276]: Using default interface naming scheme 'v252'. Apr 12 20:19:52.465226 systemd[1]: Started systemd-udevd.service. 
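systemd-journald reports above that flushing to /var/log/journal took 16.300 ms for 1628 entries, which works out to roughly 10 microseconds per entry, as the one-liner below confirms using only the figures from the log.

    # Sketch: average flush cost per journal entry from the numbers logged above.
    flush_ms, entries = 16.300, 1628
    print(f"{flush_ms / entries * 1000:.1f} microseconds per entry")  # ~10.0
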
Apr 12 20:19:52.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.475754 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Apr 12 20:19:52.475000 audit: BPF prog-id=26 op=LOAD Apr 12 20:19:52.477112 systemd[1]: Starting systemd-networkd.service... Apr 12 20:19:52.507000 audit: BPF prog-id=27 op=LOAD Apr 12 20:19:52.509263 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Apr 12 20:19:52.509301 kernel: ACPI: button: Sleep Button [SLPB] Apr 12 20:19:52.525000 audit: BPF prog-id=28 op=LOAD Apr 12 20:19:52.525000 audit: BPF prog-id=29 op=LOAD Apr 12 20:19:52.526785 systemd[1]: Starting systemd-userdbd.service... Apr 12 20:19:52.547512 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 12 20:19:52.570285 kernel: ACPI: button: Power Button [PWRF] Apr 12 20:19:52.564000 audit[1343]: AVC avc: denied { confidentiality } for pid=1343 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Apr 12 20:19:52.597627 systemd[1]: Started systemd-userdbd.service. Apr 12 20:19:52.599236 kernel: IPMI message handler: version 39.2 Apr 12 20:19:52.599268 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 20:19:52.564000 audit[1343]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d0fd778be0 a1=4d8bc a2=7f006bd51bc5 a3=5 items=42 ppid=1276 pid=1343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 20:19:52.564000 audit: CWD cwd="/" Apr 12 20:19:52.564000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=1 name=(null) inode=20053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=2 name=(null) inode=20053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=3 name=(null) inode=20054 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=4 name=(null) inode=20053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=5 name=(null) inode=20055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=6 name=(null) inode=20053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=7 name=(null) inode=20056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=8 name=(null) inode=20056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=9 name=(null) inode=20057 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=10 name=(null) inode=20056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=11 name=(null) inode=20058 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=12 name=(null) inode=20056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=13 name=(null) inode=20059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=14 name=(null) inode=20056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=15 name=(null) inode=20060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=16 name=(null) inode=20056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=17 name=(null) inode=20061 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=18 name=(null) inode=20053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=19 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=20 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=21 name=(null) inode=20063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=22 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=23 name=(null) inode=20064 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 
audit: PATH item=24 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=25 name=(null) inode=20065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=26 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=27 name=(null) inode=20066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=28 name=(null) inode=20062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=29 name=(null) inode=20067 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=30 name=(null) inode=20053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=31 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=32 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=33 name=(null) inode=20069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=34 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=35 name=(null) inode=20070 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=36 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=37 name=(null) inode=20071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=38 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=39 name=(null) inode=20072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=40 name=(null) inode=20068 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PATH item=41 name=(null) inode=20073 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 20:19:52.564000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 20:19:52.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:52.660243 kernel: ipmi device interface Apr 12 20:19:52.661241 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Apr 12 20:19:52.661361 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Apr 12 20:19:52.661442 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Apr 12 20:19:52.739926 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Apr 12 20:19:52.740116 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1284) Apr 12 20:19:52.765237 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Apr 12 20:19:52.791592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 20:19:52.825173 kernel: ipmi_si: IPMI System Interface driver Apr 12 20:19:52.825205 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Apr 12 20:19:52.825305 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Apr 12 20:19:52.825317 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Apr 12 20:19:52.825328 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Apr 12 20:19:52.907238 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Apr 12 20:19:52.932245 kernel: iTCO_vendor_support: vendor-support=0 Apr 12 20:19:52.932298 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Apr 12 20:19:52.971840 kernel: ipmi_si: Adding ACPI-specified kcs state machine Apr 12 20:19:52.971910 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Apr 12 20:19:52.972135 systemd-networkd[1307]: bond0: netdev ready Apr 12 20:19:52.974552 systemd-networkd[1307]: lo: Link UP Apr 12 20:19:52.974555 systemd-networkd[1307]: lo: Gained carrier Apr 12 20:19:52.975113 systemd-networkd[1307]: Enumeration completed Apr 12 20:19:52.975210 systemd[1]: Started systemd-networkd.service. Apr 12 20:19:52.975452 systemd-networkd[1307]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Apr 12 20:19:52.976236 systemd-networkd[1307]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f9:a5.network. Apr 12 20:19:53.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:53.037166 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Apr 12 20:19:53.037269 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Apr 12 20:19:53.037332 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Apr 12 20:19:53.097241 kernel: intel_rapl_common: Found RAPL domain package Apr 12 20:19:53.097340 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Apr 12 20:19:53.097435 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 12 20:19:53.100276 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Apr 12 20:19:53.101101 systemd-networkd[1307]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f9:a4.network. Apr 12 20:19:53.128978 kernel: intel_rapl_common: Found RAPL domain core Apr 12 20:19:53.204577 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Apr 12 20:19:53.204612 kernel: intel_rapl_common: Found RAPL domain dram Apr 12 20:19:53.222314 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Apr 12 20:19:53.265237 kernel: ipmi_ssif: IPMI SSIF Interface driver Apr 12 20:19:53.265272 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Apr 12 20:19:53.305297 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Apr 12 20:19:53.305324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Apr 12 20:19:53.323283 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Apr 12 20:19:53.344654 systemd-networkd[1307]: bond0: Link UP Apr 12 20:19:53.344852 systemd-networkd[1307]: enp1s0f1np1: Link UP Apr 12 20:19:53.344984 systemd-networkd[1307]: enp1s0f1np1: Gained carrier Apr 12 20:19:53.345946 systemd-networkd[1307]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f9:a4.network. Apr 12 20:19:53.384288 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Apr 12 20:19:53.384317 kernel: bond0: active interface up! Apr 12 20:19:53.406265 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Apr 12 20:19:53.468278 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Apr 12 20:19:53.532283 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Apr 12 20:19:53.554293 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Apr 12 20:19:53.554413 systemd-networkd[1307]: enp1s0f0np0: Link UP Apr 12 20:19:53.554574 systemd-networkd[1307]: bond0: Gained carrier Apr 12 20:19:53.554654 systemd-networkd[1307]: enp1s0f0np0: Gained carrier Apr 12 20:19:53.595234 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Apr 12 20:19:53.595290 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Apr 12 20:19:53.596521 systemd-networkd[1307]: enp1s0f1np1: Link DOWN Apr 12 20:19:53.596524 systemd-networkd[1307]: enp1s0f1np1: Lost carrier Apr 12 20:19:53.598562 systemd[1]: Finished systemd-udev-settle.service. Apr 12 20:19:53.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:53.610019 systemd[1]: Starting lvm2-activation-early.service... Apr 12 20:19:53.642316 lvm[1384]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 20:19:53.667666 systemd[1]: Finished lvm2-activation-early.service. 
Apr 12 20:19:53.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:53.676395 systemd[1]: Reached target cryptsetup.target. Apr 12 20:19:53.685958 systemd[1]: Starting lvm2-activation.service... Apr 12 20:19:53.688219 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 20:19:53.723793 systemd[1]: Finished lvm2-activation.service. Apr 12 20:19:53.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:53.732317 systemd[1]: Reached target local-fs-pre.target. Apr 12 20:19:53.741322 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 20:19:53.741335 systemd[1]: Reached target local-fs.target. Apr 12 20:19:53.749285 systemd[1]: Reached target machines.target. Apr 12 20:19:53.757976 systemd[1]: Starting ldconfig.service... Apr 12 20:19:53.764664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Apr 12 20:19:53.764687 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 20:19:53.765309 systemd[1]: Starting systemd-boot-update.service... Apr 12 20:19:53.773769 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 20:19:53.784851 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 20:19:53.785056 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 20:19:53.785081 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 20:19:53.785639 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 20:19:53.785835 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1387 (bootctl) Apr 12 20:19:53.786587 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 20:19:53.805658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 20:19:53.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:53.905093 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 20:19:53.979254 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 20:19:53.989014 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 20:19:53.993280 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 12 20:19:53.993860 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Apr 12 20:19:54.013242 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Apr 12 20:19:54.013972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Apr 12 20:19:54.031239 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Apr 12 20:19:54.031256 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 20:19:54.032080 systemd-networkd[1307]: enp1s0f1np1: Link UP Apr 12 20:19:54.032083 systemd-networkd[1307]: enp1s0f1np1: Gained carrier Apr 12 20:19:54.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:54.049314 systemd-fsck[1395]: fsck.fat 4.2 (2021-01-31) Apr 12 20:19:54.049314 systemd-fsck[1395]: /dev/sda1: 789 files, 119240/258078 clusters Apr 12 20:19:54.066701 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 20:19:54.070237 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Apr 12 20:19:54.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:54.081673 systemd[1]: Mounting boot.mount... Apr 12 20:19:54.101016 systemd[1]: Mounted boot.mount. Apr 12 20:19:54.118960 systemd[1]: Finished systemd-boot-update.service. Apr 12 20:19:54.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:54.150437 systemd[1]: Finished systemd-tmpfiles-setup.service. Apr 12 20:19:54.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 20:19:54.159111 systemd[1]: Starting audit-rules.service... Apr 12 20:19:54.166860 systemd[1]: Starting clean-ca-certificates.service... Apr 12 20:19:54.175882 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 20:19:54.177000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 20:19:54.177000 audit[1415]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5b190d40 a2=420 a3=0 items=0 ppid=1398 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 20:19:54.177000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 20:19:54.178809 augenrules[1415]: No rules Apr 12 20:19:54.185266 systemd[1]: Starting systemd-resolved.service... Apr 12 20:19:54.193222 systemd[1]: Starting systemd-timesyncd.service... Apr 12 20:19:54.200837 systemd[1]: Starting systemd-update-utmp.service... Apr 12 20:19:54.207634 systemd[1]: Finished audit-rules.service. Apr 12 20:19:54.214514 systemd[1]: Finished clean-ca-certificates.service. Apr 12 20:19:54.222480 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 20:19:54.233580 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 20:19:54.234031 systemd[1]: Finished systemd-update-utmp.service. 
Apr 12 20:19:54.244585 ldconfig[1386]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 20:19:54.246954 systemd[1]: Finished ldconfig.service. Apr 12 20:19:54.254918 systemd[1]: Starting systemd-update-done.service... Apr 12 20:19:54.261411 systemd[1]: Started systemd-timesyncd.service. Apr 12 20:19:54.262291 systemd-resolved[1420]: Positive Trust Anchors: Apr 12 20:19:54.262297 systemd-resolved[1420]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 20:19:54.262317 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 20:19:54.266136 systemd-resolved[1420]: Using system hostname 'ci-3510.3.3-a-3fbc403199'. Apr 12 20:19:54.269427 systemd[1]: Started systemd-resolved.service. Apr 12 20:19:54.277451 systemd[1]: Finished systemd-update-done.service. Apr 12 20:19:54.285381 systemd[1]: Reached target network.target. Apr 12 20:19:54.293339 systemd[1]: Reached target nss-lookup.target. Apr 12 20:19:54.301321 systemd[1]: Reached target sysinit.target. Apr 12 20:19:54.309354 systemd[1]: Started motdgen.path. Apr 12 20:19:54.316330 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 20:19:54.326312 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 20:19:54.334310 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 20:19:54.334327 systemd[1]: Reached target paths.target. Apr 12 20:19:54.341305 systemd[1]: Reached target time-set.target. Apr 12 20:19:54.349390 systemd[1]: Started logrotate.timer. Apr 12 20:19:54.356343 systemd[1]: Started mdadm.timer. Apr 12 20:19:54.363304 systemd[1]: Reached target timers.target. Apr 12 20:19:54.370424 systemd[1]: Listening on dbus.socket. Apr 12 20:19:54.377807 systemd[1]: Starting docker.socket... Apr 12 20:19:54.385816 systemd[1]: Listening on sshd.socket. Apr 12 20:19:54.392379 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 20:19:54.392596 systemd[1]: Listening on docker.socket. Apr 12 20:19:54.399308 systemd[1]: Reached target sockets.target. Apr 12 20:19:54.407262 systemd[1]: Reached target basic.target. Apr 12 20:19:54.414319 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 20:19:54.414332 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 20:19:54.414772 systemd[1]: Starting containerd.service... Apr 12 20:19:54.421671 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Apr 12 20:19:54.429770 systemd[1]: Starting coreos-metadata.service... Apr 12 20:19:54.436797 systemd[1]: Starting dbus.service... Apr 12 20:19:54.442751 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 20:19:54.447638 jq[1435]: false Apr 12 20:19:54.449894 systemd[1]: Starting extend-filesystems.service... 
Apr 12 20:19:54.452088 coreos-metadata[1428]: Apr 12 20:19:54.452 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 12 20:19:54.453226 dbus-daemon[1434]: [system] SELinux support is enabled Apr 12 20:19:54.456316 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 20:19:54.456969 systemd[1]: Starting motdgen.service... Apr 12 20:19:54.458005 extend-filesystems[1436]: Found sda Apr 12 20:19:54.475225 extend-filesystems[1436]: Found sda1 Apr 12 20:19:54.475225 extend-filesystems[1436]: Found sda2 Apr 12 20:19:54.475225 extend-filesystems[1436]: Found sda3 Apr 12 20:19:54.475225 extend-filesystems[1436]: Found usr Apr 12 20:19:54.475225 extend-filesystems[1436]: Found sda4 Apr 12 20:19:54.475225 extend-filesystems[1436]: Found sda6 Apr 12 20:19:54.475225 extend-filesystems[1436]: Found sda7 Apr 12 20:19:54.475225 extend-filesystems[1436]: Found sda9 Apr 12 20:19:54.475225 extend-filesystems[1436]: Checking size of /dev/sda9 Apr 12 20:19:54.475225 extend-filesystems[1436]: Resized partition /dev/sda9 Apr 12 20:19:54.574384 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Apr 12 20:19:54.574424 coreos-metadata[1431]: Apr 12 20:19:54.458 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 12 20:19:54.463898 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 20:19:54.574582 extend-filesystems[1452]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 20:19:54.499956 systemd[1]: Starting prepare-critools.service... Apr 12 20:19:54.506941 systemd[1]: Starting prepare-helm.service... Apr 12 20:19:54.529793 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 20:19:54.536838 systemd[1]: Starting sshd-keygen.service... Apr 12 20:19:54.560614 systemd[1]: Starting systemd-logind.service... Apr 12 20:19:54.581751 systemd-logind[1465]: Watching system buttons on /dev/input/event3 (Power Button) Apr 12 20:19:54.581761 systemd-logind[1465]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 12 20:19:54.581770 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Apr 12 20:19:54.581913 systemd-logind[1465]: New seat seat0. Apr 12 20:19:54.586276 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 20:19:54.586853 systemd[1]: Starting tcsd.service... Apr 12 20:19:54.599585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 12 20:19:54.599919 systemd[1]: Starting update-engine.service... Apr 12 20:19:54.606726 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 20:19:54.608288 jq[1468]: true Apr 12 20:19:54.614586 systemd[1]: Started dbus.service. Apr 12 20:19:54.622988 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 20:19:54.623077 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 20:19:54.623262 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 20:19:54.623352 systemd[1]: Finished motdgen.service. Apr 12 20:19:54.631400 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 20:19:54.631485 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Apr 12 20:19:54.639102 tar[1470]: ./ Apr 12 20:19:54.639102 tar[1470]: ./loopback Apr 12 20:19:54.641983 jq[1476]: true Apr 12 20:19:54.642498 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 12 20:19:54.643454 tar[1471]: crictl Apr 12 20:19:54.644906 tar[1472]: linux-amd64/helm Apr 12 20:19:54.648605 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Apr 12 20:19:54.648738 systemd[1]: Condition check resulted in tcsd.service being skipped. Apr 12 20:19:54.648848 systemd[1]: Started systemd-logind.service. Apr 12 20:19:54.652208 update_engine[1467]: I0412 20:19:54.651262 1467 main.cc:92] Flatcar Update Engine starting Apr 12 20:19:54.652382 env[1477]: time="2024-04-12T20:19:54.652214531Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 20:19:54.655919 update_engine[1467]: I0412 20:19:54.655907 1467 update_check_scheduler.cc:74] Next update check in 6m24s Apr 12 20:19:54.663334 systemd[1]: Started update-engine.service. Apr 12 20:19:54.671288 env[1477]: time="2024-04-12T20:19:54.663904833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 20:19:54.672673 systemd[1]: Started locksmithd.service. Apr 12 20:19:54.673475 tar[1470]: ./bandwidth Apr 12 20:19:54.673510 env[1477]: time="2024-04-12T20:19:54.673488948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674115 env[1477]: time="2024-04-12T20:19:54.674096216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674144 env[1477]: time="2024-04-12T20:19:54.674116317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674261 env[1477]: time="2024-04-12T20:19:54.674247499Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674294 env[1477]: time="2024-04-12T20:19:54.674263171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674294 env[1477]: time="2024-04-12T20:19:54.674274699Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 20:19:54.674294 env[1477]: time="2024-04-12T20:19:54.674283935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674347 env[1477]: time="2024-04-12T20:19:54.674338914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674509 env[1477]: time="2024-04-12T20:19:54.674498052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674609 env[1477]: time="2024-04-12T20:19:54.674596885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 20:19:54.674632 env[1477]: time="2024-04-12T20:19:54.674611360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 20:19:54.674658 env[1477]: time="2024-04-12T20:19:54.674648484Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 20:19:54.674681 env[1477]: time="2024-04-12T20:19:54.674660303Z" level=info msg="metadata content store policy set" policy=shared Apr 12 20:19:54.680356 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 20:19:54.680453 systemd[1]: Reached target system-config.target. Apr 12 20:19:54.689316 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 20:19:54.689404 systemd[1]: Reached target user-config.target. Apr 12 20:19:54.713069 env[1477]: time="2024-04-12T20:19:54.713049787Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 20:19:54.713115 env[1477]: time="2024-04-12T20:19:54.713073364Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 20:19:54.713115 env[1477]: time="2024-04-12T20:19:54.713085114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 20:19:54.713115 env[1477]: time="2024-04-12T20:19:54.713103941Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.713115 env[1477]: time="2024-04-12T20:19:54.713112367Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723039 env[1477]: time="2024-04-12T20:19:54.713121330Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723039 env[1477]: time="2024-04-12T20:19:54.713128701Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723039 env[1477]: time="2024-04-12T20:19:54.713136794Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723039 env[1477]: time="2024-04-12T20:19:54.713144234Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723039 env[1477]: time="2024-04-12T20:19:54.713152399Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723039 env[1477]: time="2024-04-12T20:19:54.713159493Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723039 env[1477]: time="2024-04-12T20:19:54.713166230Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 20:19:54.723203 env[1477]: time="2024-04-12T20:19:54.723054141Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Apr 12 20:19:54.723203 env[1477]: time="2024-04-12T20:19:54.723107622Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 20:19:54.723272 env[1477]: time="2024-04-12T20:19:54.723257224Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 20:19:54.723302 env[1477]: time="2024-04-12T20:19:54.723280436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723302 env[1477]: time="2024-04-12T20:19:54.723289395Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 20:19:54.723348 env[1477]: time="2024-04-12T20:19:54.723322402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723348 env[1477]: time="2024-04-12T20:19:54.723334097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723348 env[1477]: time="2024-04-12T20:19:54.723342159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723425 env[1477]: time="2024-04-12T20:19:54.723348790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723425 env[1477]: time="2024-04-12T20:19:54.723355729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723425 env[1477]: time="2024-04-12T20:19:54.723362472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723425 env[1477]: time="2024-04-12T20:19:54.723369372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723425 env[1477]: time="2024-04-12T20:19:54.723375663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723425 env[1477]: time="2024-04-12T20:19:54.723383953Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723452676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723462125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723470463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723477625Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723486702Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723493494Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723503217Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 20:19:54.723568 env[1477]: time="2024-04-12T20:19:54.723527461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 20:19:54.723877 env[1477]: time="2024-04-12T20:19:54.723832925Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.723885708Z" level=info msg="Connect containerd service" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.723913968Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724266568Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724359324Z" level=info msg="Start subscribing containerd event" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724399242Z" level=info msg="Start recovering state" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724413141Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724436317Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724444556Z" level=info msg="Start event monitor" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724454491Z" level=info msg="Start snapshots syncer" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724462601Z" level=info msg="Start cni network conf syncer for default" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724465969Z" level=info msg="containerd successfully booted in 0.072579s" Apr 12 20:19:54.727645 env[1477]: time="2024-04-12T20:19:54.724474513Z" level=info msg="Start streaming server" Apr 12 20:19:54.727953 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Apr 12 20:19:54.724538 systemd[1]: Started containerd.service. Apr 12 20:19:54.729337 tar[1470]: ./ptp Apr 12 20:19:54.731563 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 20:19:54.745485 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 20:19:54.752030 tar[1470]: ./vlan Apr 12 20:19:54.773954 tar[1470]: ./host-device Apr 12 20:19:54.795199 tar[1470]: ./tuning Apr 12 20:19:54.814012 tar[1470]: ./vrf Apr 12 20:19:54.833671 tar[1470]: ./sbr Apr 12 20:19:54.852910 tar[1470]: ./tap Apr 12 20:19:54.874941 tar[1470]: ./dhcp Apr 12 20:19:54.920214 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 20:19:54.930988 tar[1470]: ./static Apr 12 20:19:54.932481 systemd[1]: Finished sshd-keygen.service. Apr 12 20:19:54.937924 tar[1472]: linux-amd64/LICENSE Apr 12 20:19:54.937956 tar[1472]: linux-amd64/README.md Apr 12 20:19:54.941669 systemd[1]: Starting issuegen.service... Apr 12 20:19:54.946956 tar[1470]: ./firewall Apr 12 20:19:54.950044 systemd[1]: Finished prepare-helm.service. Apr 12 20:19:54.959519 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 20:19:54.959620 systemd[1]: Finished issuegen.service. Apr 12 20:19:54.967459 systemd[1]: Finished prepare-critools.service. Apr 12 20:19:54.971372 tar[1470]: ./macvlan Apr 12 20:19:54.977103 systemd[1]: Starting systemd-user-sessions.service... Apr 12 20:19:54.986458 systemd[1]: Finished systemd-user-sessions.service. Apr 12 20:19:54.993351 tar[1470]: ./dummy Apr 12 20:19:54.995081 systemd[1]: Started getty@tty1.service. Apr 12 20:19:55.002988 systemd[1]: Started serial-getty@ttyS1.service. Apr 12 20:19:55.012433 systemd[1]: Reached target getty.target. Apr 12 20:19:55.015049 tar[1470]: ./bridge Apr 12 20:19:55.038848 tar[1470]: ./ipvlan Apr 12 20:19:55.060896 tar[1470]: ./portmap Apr 12 20:19:55.081633 tar[1470]: ./host-local Apr 12 20:19:55.105751 systemd[1]: Finished prepare-cni-plugins.service. 
Apr 12 20:19:55.353352 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Apr 12 20:19:55.366424 systemd-networkd[1307]: bond0: Gained IPv6LL Apr 12 20:19:55.439242 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:1 Apr 12 20:19:56.337285 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Apr 12 20:19:56.337448 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Apr 12 20:19:56.384122 extend-filesystems[1452]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 12 20:19:56.384122 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 56 Apr 12 20:19:56.384122 extend-filesystems[1452]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Apr 12 20:19:56.422494 extend-filesystems[1436]: Resized filesystem in /dev/sda9 Apr 12 20:19:56.422494 extend-filesystems[1436]: Found sdb Apr 12 20:19:56.384656 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 20:19:56.384741 systemd[1]: Finished extend-filesystems.service. Apr 12 20:20:00.022943 login[1536]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 12 20:20:00.029738 login[1535]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 12 20:20:00.030565 systemd-logind[1465]: New session 1 of user core. Apr 12 20:20:00.031103 systemd[1]: Created slice user-500.slice. Apr 12 20:20:00.031658 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 20:20:00.032837 systemd-logind[1465]: New session 2 of user core. Apr 12 20:20:00.036570 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 20:20:00.037290 systemd[1]: Starting user@500.service... Apr 12 20:20:00.039239 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:20:00.104768 systemd[1544]: Queued start job for default target default.target. Apr 12 20:20:00.105004 systemd[1544]: Reached target paths.target. Apr 12 20:20:00.105016 systemd[1544]: Reached target sockets.target. Apr 12 20:20:00.105024 systemd[1544]: Reached target timers.target. Apr 12 20:20:00.105031 systemd[1544]: Reached target basic.target. Apr 12 20:20:00.105050 systemd[1544]: Reached target default.target. Apr 12 20:20:00.105066 systemd[1544]: Startup finished in 62ms. Apr 12 20:20:00.105111 systemd[1]: Started user@500.service. Apr 12 20:20:00.105725 systemd[1]: Started session-1.scope. Apr 12 20:20:00.106060 systemd[1]: Started session-2.scope. Apr 12 20:20:00.423660 coreos-metadata[1428]: Apr 12 20:20:00.423 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Apr 12 20:20:00.424463 coreos-metadata[1431]: Apr 12 20:20:00.423 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Apr 12 20:20:01.423865 coreos-metadata[1428]: Apr 12 20:20:01.423 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Apr 12 20:20:01.424650 coreos-metadata[1431]: Apr 12 20:20:01.423 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Apr 12 20:20:01.532345 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Apr 12 20:20:01.532497 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Apr 12 20:20:02.479750 systemd[1]: Created slice system-sshd.slice. 
Apr 12 20:20:02.480417 systemd[1]: Started sshd@0-139.178.89.23:22-147.75.109.163:41234.service. Apr 12 20:20:02.495061 coreos-metadata[1428]: Apr 12 20:20:02.495 INFO Fetch successful Apr 12 20:20:02.496175 coreos-metadata[1431]: Apr 12 20:20:02.496 INFO Fetch successful Apr 12 20:20:02.520023 unknown[1428]: wrote ssh authorized keys file for user: core Apr 12 20:20:02.520339 systemd[1]: Finished coreos-metadata.service. Apr 12 20:20:02.521032 sshd[1565]: Accepted publickey for core from 147.75.109.163 port 41234 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:20:02.521431 systemd[1]: Started packet-phone-home.service. Apr 12 20:20:02.521828 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:20:02.524491 systemd-logind[1465]: New session 3 of user core. Apr 12 20:20:02.525028 systemd[1]: Started session-3.scope. Apr 12 20:20:02.526462 curl[1569]: % Total % Received % Xferd Average Speed Time Time Time Current Apr 12 20:20:02.526653 curl[1569]: Dload Upload Total Spent Left Speed Apr 12 20:20:02.531984 update-ssh-keys[1570]: Updated "/home/core/.ssh/authorized_keys" Apr 12 20:20:02.532193 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Apr 12 20:20:02.532524 systemd[1]: Reached target multi-user.target. Apr 12 20:20:02.533139 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 20:20:02.537123 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 20:20:02.537193 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 20:20:02.537379 systemd[1]: Startup finished in 1.851s (kernel) + 53.928s (initrd) + 14.778s (userspace) = 1min 10.559s. Apr 12 20:20:02.572558 systemd[1]: Started sshd@1-139.178.89.23:22-147.75.109.163:41248.service. Apr 12 20:20:02.601071 sshd[1575]: Accepted publickey for core from 147.75.109.163 port 41248 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:20:02.601756 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:20:02.604112 systemd-logind[1465]: New session 4 of user core. Apr 12 20:20:02.604557 systemd[1]: Started session-4.scope. Apr 12 20:20:02.656929 sshd[1575]: pam_unix(sshd:session): session closed for user core Apr 12 20:20:02.660321 systemd[1]: sshd@1-139.178.89.23:22-147.75.109.163:41248.service: Deactivated successfully. Apr 12 20:20:02.661167 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 20:20:02.662077 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit. Apr 12 20:20:02.663627 systemd[1]: Started sshd@2-139.178.89.23:22-147.75.109.163:41264.service. Apr 12 20:20:02.665109 systemd-logind[1465]: Removed session 4. Apr 12 20:20:02.696173 sshd[1581]: Accepted publickey for core from 147.75.109.163 port 41264 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:20:02.696928 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:20:02.699721 systemd-logind[1465]: New session 5 of user core. Apr 12 20:20:02.700229 systemd[1]: Started session-5.scope. Apr 12 20:20:02.727294 curl[1569]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Apr 12 20:20:02.728014 systemd[1]: packet-phone-home.service: Deactivated successfully. Apr 12 20:20:02.749759 sshd[1581]: pam_unix(sshd:session): session closed for user core Apr 12 20:20:02.754222 systemd[1]: sshd@2-139.178.89.23:22-147.75.109.163:41264.service: Deactivated successfully. 
Apr 12 20:20:02.755475 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 20:20:02.756815 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. Apr 12 20:20:02.758966 systemd[1]: Started sshd@3-139.178.89.23:22-147.75.109.163:41270.service. Apr 12 20:20:02.761282 systemd-logind[1465]: Removed session 5. Apr 12 20:20:02.812543 sshd[1587]: Accepted publickey for core from 147.75.109.163 port 41270 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:20:02.813204 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:20:02.815555 systemd-logind[1465]: New session 6 of user core. Apr 12 20:20:02.815952 systemd[1]: Started session-6.scope. Apr 12 20:20:02.867684 sshd[1587]: pam_unix(sshd:session): session closed for user core Apr 12 20:20:02.870490 systemd[1]: sshd@3-139.178.89.23:22-147.75.109.163:41270.service: Deactivated successfully. Apr 12 20:20:02.871141 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 20:20:02.871874 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. Apr 12 20:20:02.871981 systemd-timesyncd[1421]: Contacted time server 50.218.103.254:123 (0.flatcar.pool.ntp.org). Apr 12 20:20:02.872040 systemd-timesyncd[1421]: Initial clock synchronization to Fri 2024-04-12 20:20:02.882529 UTC. Apr 12 20:20:02.872959 systemd[1]: Started sshd@4-139.178.89.23:22-147.75.109.163:41276.service. Apr 12 20:20:02.874085 systemd-logind[1465]: Removed session 6. Apr 12 20:20:02.905174 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 41276 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:20:02.905956 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:20:02.908732 systemd-logind[1465]: New session 7 of user core. Apr 12 20:20:02.909214 systemd[1]: Started session-7.scope. Apr 12 20:20:02.985964 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 20:20:02.986597 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 20:20:03.540639 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 20:20:03.544488 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 20:20:03.544650 systemd[1]: Reached target network-online.target. Apr 12 20:20:03.545287 systemd[1]: Starting docker.service... 
Apr 12 20:20:03.563003 env[1616]: time="2024-04-12T20:20:03.562944574Z" level=info msg="Starting up" Apr 12 20:20:03.563631 env[1616]: time="2024-04-12T20:20:03.563589731Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 20:20:03.563631 env[1616]: time="2024-04-12T20:20:03.563600366Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 20:20:03.563631 env[1616]: time="2024-04-12T20:20:03.563615630Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 20:20:03.563631 env[1616]: time="2024-04-12T20:20:03.563623262Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 20:20:03.564450 env[1616]: time="2024-04-12T20:20:03.564438975Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 20:20:03.564450 env[1616]: time="2024-04-12T20:20:03.564448753Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 20:20:03.564503 env[1616]: time="2024-04-12T20:20:03.564457829Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 20:20:03.564503 env[1616]: time="2024-04-12T20:20:03.564463816Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 20:20:03.712251 env[1616]: time="2024-04-12T20:20:03.712188224Z" level=info msg="Loading containers: start." Apr 12 20:20:03.856293 kernel: Initializing XFRM netlink socket Apr 12 20:20:03.878286 env[1616]: time="2024-04-12T20:20:03.878224326Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 20:20:03.981200 systemd-networkd[1307]: docker0: Link UP Apr 12 20:20:04.000762 env[1616]: time="2024-04-12T20:20:04.000651346Z" level=info msg="Loading containers: done." Apr 12 20:20:04.019449 env[1616]: time="2024-04-12T20:20:04.019355310Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 20:20:04.019789 env[1616]: time="2024-04-12T20:20:04.019733572Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 20:20:04.020028 env[1616]: time="2024-04-12T20:20:04.019959838Z" level=info msg="Daemon has completed initialization" Apr 12 20:20:04.021119 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1418420399-merged.mount: Deactivated successfully. Apr 12 20:20:04.027198 systemd[1]: Started docker.service. Apr 12 20:20:04.030553 env[1616]: time="2024-04-12T20:20:04.030496768Z" level=info msg="API listen on /run/docker.sock" Apr 12 20:20:04.040615 systemd[1]: Reloading. Apr 12 20:20:04.074623 /usr/lib/systemd/system-generators/torcx-generator[1769]: time="2024-04-12T20:20:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 20:20:04.074652 /usr/lib/systemd/system-generators/torcx-generator[1769]: time="2024-04-12T20:20:04Z" level=info msg="torcx already run" Apr 12 20:20:04.136790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Apr 12 20:20:04.136800 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 20:20:04.151272 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 20:20:04.207795 systemd[1]: Started kubelet.service. Apr 12 20:20:04.230984 kubelet[1828]: E0412 20:20:04.230926 1828 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 20:20:04.232345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 20:20:04.232416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 20:20:04.907619 env[1477]: time="2024-04-12T20:20:04.907567478Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.8\"" Apr 12 20:20:05.569198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860233064.mount: Deactivated successfully. Apr 12 20:20:07.446040 env[1477]: time="2024-04-12T20:20:07.445988361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:07.446744 env[1477]: time="2024-04-12T20:20:07.446685129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e70a71eaa5605454dd0adfd46911b0203db5baf1107de51ba9943d2eaea23142,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:07.447661 env[1477]: time="2024-04-12T20:20:07.447625796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:07.448732 env[1477]: time="2024-04-12T20:20:07.448677561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:7e7f3c806333528451a1e0bfdf17da0341adaea7d50a703db9c2005c474a97b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:07.449239 env[1477]: time="2024-04-12T20:20:07.449200926Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.8\" returns image reference \"sha256:e70a71eaa5605454dd0adfd46911b0203db5baf1107de51ba9943d2eaea23142\"" Apr 12 20:20:07.454971 env[1477]: time="2024-04-12T20:20:07.454917727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.8\"" Apr 12 20:20:09.739632 env[1477]: time="2024-04-12T20:20:09.739565551Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:09.740141 env[1477]: time="2024-04-12T20:20:09.740129594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e5ae3e4dc6566b175cc53982cae28703dcd88916c37b4d2c0cb688faf8e05fad,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:09.741175 env[1477]: time="2024-04-12T20:20:09.741162310Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:09.742094 env[1477]: time="2024-04-12T20:20:09.742083533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:f3d0e8da9d1532e081e719a985e89a0cfe1a29d127773ad8e2c2fee1dd10fd00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:09.742588 env[1477]: time="2024-04-12T20:20:09.742548187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.8\" returns image reference \"sha256:e5ae3e4dc6566b175cc53982cae28703dcd88916c37b4d2c0cb688faf8e05fad\"" Apr 12 20:20:09.752099 env[1477]: time="2024-04-12T20:20:09.752063437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.8\"" Apr 12 20:20:11.397398 env[1477]: time="2024-04-12T20:20:11.397347733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:11.398020 env[1477]: time="2024-04-12T20:20:11.397977961Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ad3260645145d9611fcf5e5936ddf7cf5be8990fe44160c960c2f3cc643fb4e4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:11.399096 env[1477]: time="2024-04-12T20:20:11.399056418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:11.400109 env[1477]: time="2024-04-12T20:20:11.400055984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:4d61604f259d3c91d8b3ec7a6a999f5eae9aff371567151cd5165eaa698c6d7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:11.400743 env[1477]: time="2024-04-12T20:20:11.400684946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.8\" returns image reference \"sha256:ad3260645145d9611fcf5e5936ddf7cf5be8990fe44160c960c2f3cc643fb4e4\"" Apr 12 20:20:11.406882 env[1477]: time="2024-04-12T20:20:11.406850647Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.8\"" Apr 12 20:20:12.459842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853148718.mount: Deactivated successfully. 
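The PullImage/ImageCreate events above are containerd's CRI plugin fetching the v1.28.8 control-plane images ahead of the static pods. A rough way to pre-pull or verify the same images by hand; the socket path below is an assumption and should match whatever endpoint the kubelet's --container-runtime-endpoint points at:

# Pull/verify the images the log shows being fetched (versions taken from the log)
ENDPOINT=unix:///run/containerd/containerd.sock   # assumption: default containerd socket
crictl --runtime-endpoint "$ENDPOINT" pull registry.k8s.io/kube-apiserver:v1.28.8
crictl --runtime-endpoint "$ENDPOINT" pull registry.k8s.io/kube-controller-manager:v1.28.8
crictl --runtime-endpoint "$ENDPOINT" images | grep registry.k8s.io
# containerd's own CLI sees the same images in the k8s.io namespace used by the CRI plugin
ctr -n k8s.io images ls | grep -E 'kube-|pause|etcd|coredns'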
Apr 12 20:20:12.818722 env[1477]: time="2024-04-12T20:20:12.818671654Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:12.819469 env[1477]: time="2024-04-12T20:20:12.819405317Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5ce97277076c6f5c87d43fec5e3eacad030c82c81b2756d2bba4569d22fc65dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:12.820493 env[1477]: time="2024-04-12T20:20:12.820459369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:12.821212 env[1477]: time="2024-04-12T20:20:12.821178853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9e9dd46799712c58e1a49f973374ffa9ad4e5a6175896e5d805a8738bf5c5865,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:12.821512 env[1477]: time="2024-04-12T20:20:12.821477036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.8\" returns image reference \"sha256:5ce97277076c6f5c87d43fec5e3eacad030c82c81b2756d2bba4569d22fc65dc\"" Apr 12 20:20:12.828106 env[1477]: time="2024-04-12T20:20:12.828072871Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 20:20:13.395401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242459682.mount: Deactivated successfully. Apr 12 20:20:13.396530 env[1477]: time="2024-04-12T20:20:13.396481839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:13.397459 env[1477]: time="2024-04-12T20:20:13.397414764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:13.398097 env[1477]: time="2024-04-12T20:20:13.398080704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:13.398983 env[1477]: time="2024-04-12T20:20:13.398947058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:13.399309 env[1477]: time="2024-04-12T20:20:13.399278391Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 20:20:13.406143 env[1477]: time="2024-04-12T20:20:13.406123588Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Apr 12 20:20:14.019156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827866575.mount: Deactivated successfully. Apr 12 20:20:14.284879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 20:20:14.285043 systemd[1]: Stopped kubelet.service. Apr 12 20:20:14.286144 systemd[1]: Started kubelet.service. 
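The pause:3.9 image pulled above is the sandbox ("infra") image the CRI plugin wraps around every pod. If it ever needs to be pinned explicitly, it lives in containerd's CRI configuration; the path and snippet below reflect the usual containerd 1.6 layout and are shown as a sketch, not this host's actual file:

grep -n 'sandbox_image' /etc/containerd/config.toml
# Typical setting in containerd 1.6's CRI plugin section:
#   [plugins."io.containerd.grpc.v1.cri"]
#     sandbox_image = "registry.k8s.io/pause:3.9"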
Apr 12 20:20:14.309529 kubelet[1919]: E0412 20:20:14.309505 1919 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 20:20:14.311558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 20:20:14.311633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 20:20:17.516909 env[1477]: time="2024-04-12T20:20:17.516793913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:17.519084 env[1477]: time="2024-04-12T20:20:17.519005407Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:17.523510 env[1477]: time="2024-04-12T20:20:17.523450057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:17.527839 env[1477]: time="2024-04-12T20:20:17.527772315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:17.529917 env[1477]: time="2024-04-12T20:20:17.529828395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Apr 12 20:20:17.548041 env[1477]: time="2024-04-12T20:20:17.547948080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Apr 12 20:20:18.097699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536171691.mount: Deactivated successfully. Apr 12 20:20:18.555880 env[1477]: time="2024-04-12T20:20:18.555799312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:18.556761 env[1477]: time="2024-04-12T20:20:18.556707568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:18.558392 env[1477]: time="2024-04-12T20:20:18.558335412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:18.559983 env[1477]: time="2024-04-12T20:20:18.559923017Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:18.560749 env[1477]: time="2024-04-12T20:20:18.560687667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Apr 12 20:20:20.110709 systemd[1]: Stopped kubelet.service. 
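The two kubelet crashes above (exit status 1, /var/lib/kubelet/config.yaml missing) are expected at this stage: that file is normally written by kubeadm init/join, which has not run yet, so systemd simply keeps restarting the unit. For reference, a minimal hand-written stand-in would look like the sketch below; the field values are illustrative assumptions, not what kubeadm eventually writes on this node:

sudo mkdir -p /var/lib/kubelet
cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
EOF
sudo systemctl restart kubelet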
Apr 12 20:20:20.119510 systemd[1]: Reloading. Apr 12 20:20:20.152213 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2024-04-12T20:20:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 20:20:20.152230 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2024-04-12T20:20:20Z" level=info msg="torcx already run" Apr 12 20:20:20.225188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 20:20:20.225200 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 20:20:20.240817 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 20:20:20.300142 systemd[1]: Started kubelet.service. Apr 12 20:20:20.322176 kubelet[2140]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 20:20:20.322176 kubelet[2140]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 20:20:20.322176 kubelet[2140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 20:20:20.322404 kubelet[2140]: I0412 20:20:20.322176 2140 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 20:20:20.457674 kubelet[2140]: I0412 20:20:20.457614 2140 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Apr 12 20:20:20.457674 kubelet[2140]: I0412 20:20:20.457626 2140 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 20:20:20.457734 kubelet[2140]: I0412 20:20:20.457721 2140 server.go:895] "Client rotation is on, will bootstrap in background" Apr 12 20:20:20.459654 kubelet[2140]: I0412 20:20:20.459640 2140 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 20:20:20.460324 kubelet[2140]: E0412 20:20:20.460284 2140 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.89.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.487555 kubelet[2140]: I0412 20:20:20.487521 2140 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 20:20:20.487864 kubelet[2140]: I0412 20:20:20.487839 2140 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 20:20:20.488139 kubelet[2140]: I0412 20:20:20.488111 2140 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 20:20:20.488396 kubelet[2140]: I0412 20:20:20.488158 2140 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 20:20:20.488396 kubelet[2140]: I0412 20:20:20.488183 2140 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 20:20:20.488396 kubelet[2140]: I0412 20:20:20.488392 2140 state_mem.go:36] "Initialized new in-memory state store" Apr 12 20:20:20.488676 kubelet[2140]: I0412 20:20:20.488539 2140 kubelet.go:393] "Attempting to sync node with API server" Apr 12 20:20:20.488676 kubelet[2140]: I0412 20:20:20.488572 2140 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 20:20:20.488676 kubelet[2140]: I0412 20:20:20.488618 2140 kubelet.go:309] "Adding apiserver pod source" Apr 12 20:20:20.488676 kubelet[2140]: I0412 20:20:20.488658 2140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 20:20:20.489339 kubelet[2140]: W0412 20:20:20.489277 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.89.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-3fbc403199&limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.489458 kubelet[2140]: E0412 20:20:20.489376 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.89.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-3fbc403199&limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.489458 kubelet[2140]: W0412 20:20:20.489342 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.89.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.489458 kubelet[2140]: E0412 20:20:20.489433 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.89.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.489458 kubelet[2140]: I0412 20:20:20.489444 2140 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 20:20:20.489807 kubelet[2140]: W0412 20:20:20.489761 2140 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 20:20:20.490475 kubelet[2140]: I0412 20:20:20.490438 2140 server.go:1232] "Started kubelet" Apr 12 20:20:20.490616 kubelet[2140]: I0412 20:20:20.490546 2140 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 20:20:20.490723 kubelet[2140]: I0412 20:20:20.490643 2140 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 20:20:20.491059 kubelet[2140]: I0412 20:20:20.491029 2140 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 20:20:20.491059 kubelet[2140]: E0412 20:20:20.490930 2140 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.3-a-3fbc403199.17c5a1e0e0314098", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.3-a-3fbc403199", UID:"ci-3510.3.3-a-3fbc403199", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.3-a-3fbc403199"}, FirstTimestamp:time.Date(2024, time.April, 12, 20, 20, 20, 490412184, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 20, 20, 20, 490412184, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.3-a-3fbc403199"}': 'Post "https://139.178.89.23:6443/api/v1/namespaces/default/events": dial tcp 139.178.89.23:6443: connect: connection refused'(may retry after sleeping) Apr 12 20:20:20.491306 kubelet[2140]: E0412 20:20:20.491058 2140 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 20:20:20.491306 kubelet[2140]: E0412 20:20:20.491104 2140 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 20:20:20.492206 kubelet[2140]: I0412 20:20:20.492148 2140 server.go:462] "Adding debug handlers to kubelet server" Apr 12 20:20:20.500970 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
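Every "dial tcp 139.178.89.23:6443: connect: connection refused" above means the same thing: the kubelet is up, but the kube-apiserver static pod it is trying to reach is not listening on this node yet. A few quick host-side checks while it converges (crictl's endpoint must match the kubelet's runtime endpoint):

ss -ltnp | grep ':6443' || echo 'kube-apiserver not listening yet'
curl -sk https://139.178.89.23:6443/healthz; echo
crictl ps --name kube-apiserver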
Apr 12 20:20:20.501023 kubelet[2140]: I0412 20:20:20.500994 2140 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 20:20:20.501080 kubelet[2140]: E0412 20:20:20.501065 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:20.501080 kubelet[2140]: I0412 20:20:20.501078 2140 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 20:20:20.501149 kubelet[2140]: I0412 20:20:20.501069 2140 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 20:20:20.501149 kubelet[2140]: I0412 20:20:20.501118 2140 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 20:20:20.501362 kubelet[2140]: W0412 20:20:20.501327 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.501362 kubelet[2140]: E0412 20:20:20.501356 2140 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.89.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-3fbc403199?timeout=10s\": dial tcp 139.178.89.23:6443: connect: connection refused" interval="200ms" Apr 12 20:20:20.501419 kubelet[2140]: E0412 20:20:20.501371 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.509734 kubelet[2140]: I0412 20:20:20.509717 2140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 20:20:20.510216 kubelet[2140]: I0412 20:20:20.510207 2140 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 12 20:20:20.510270 kubelet[2140]: I0412 20:20:20.510227 2140 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 20:20:20.510270 kubelet[2140]: I0412 20:20:20.510255 2140 kubelet.go:2303] "Starting kubelet main sync loop" Apr 12 20:20:20.510321 kubelet[2140]: E0412 20:20:20.510282 2140 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 20:20:20.510515 kubelet[2140]: W0412 20:20:20.510504 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.510539 kubelet[2140]: E0412 20:20:20.510523 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:20.515959 kubelet[2140]: I0412 20:20:20.515942 2140 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 20:20:20.515959 kubelet[2140]: I0412 20:20:20.515950 2140 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 20:20:20.515959 kubelet[2140]: I0412 20:20:20.515958 2140 state_mem.go:36] "Initialized new in-memory state store" Apr 12 20:20:20.516803 kubelet[2140]: I0412 20:20:20.516767 2140 policy_none.go:49] "None policy: Start" Apr 12 20:20:20.516991 kubelet[2140]: I0412 20:20:20.516968 2140 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 20:20:20.516991 kubelet[2140]: I0412 20:20:20.516980 2140 state_mem.go:35] "Initializing new in-memory state store" Apr 12 20:20:20.519245 systemd[1]: Created slice kubepods.slice. Apr 12 20:20:20.521267 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 20:20:20.522580 systemd[1]: Created slice kubepods-besteffort.slice. 
Apr 12 20:20:20.549866 kubelet[2140]: I0412 20:20:20.549819 2140 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 20:20:20.549998 kubelet[2140]: I0412 20:20:20.549957 2140 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 20:20:20.550267 kubelet[2140]: E0412 20:20:20.550258 2140 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:20.605902 kubelet[2140]: I0412 20:20:20.605844 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.606651 kubelet[2140]: E0412 20:20:20.606575 2140 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.89.23:6443/api/v1/nodes\": dial tcp 139.178.89.23:6443: connect: connection refused" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.610756 kubelet[2140]: I0412 20:20:20.610707 2140 topology_manager.go:215] "Topology Admit Handler" podUID="c2bedc281145207ea9119f88e3d7e69e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.614175 kubelet[2140]: I0412 20:20:20.614132 2140 topology_manager.go:215] "Topology Admit Handler" podUID="817f6d4e01418d8a61c32bdea5aeea7b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.617743 kubelet[2140]: I0412 20:20:20.617694 2140 topology_manager.go:215] "Topology Admit Handler" podUID="c82f4a5b84123c50a950b9ec06b53eb2" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.631122 systemd[1]: Created slice kubepods-burstable-podc2bedc281145207ea9119f88e3d7e69e.slice. Apr 12 20:20:20.657411 systemd[1]: Created slice kubepods-burstable-pod817f6d4e01418d8a61c32bdea5aeea7b.slice. Apr 12 20:20:20.679837 systemd[1]: Created slice kubepods-burstable-podc82f4a5b84123c50a950b9ec06b53eb2.slice. 
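The three "Topology Admit Handler" entries are the control-plane static pods, picked up from the manifest directory the kubelet logged earlier ("Adding static pod path" path="/etc/kubernetes/manifests"). Listing that directory should show one manifest per admitted pod; the file names in the comment are the usual kubeadm-style names, not taken from this log:

ls -l /etc/kubernetes/manifests/
# typically: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml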
Apr 12 20:20:20.702466 kubelet[2140]: E0412 20:20:20.702403 2140 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.89.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-3fbc403199?timeout=10s\": dial tcp 139.178.89.23:6443: connect: connection refused" interval="400ms" Apr 12 20:20:20.702466 kubelet[2140]: I0412 20:20:20.702442 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2bedc281145207ea9119f88e3d7e69e-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-3fbc403199\" (UID: \"c2bedc281145207ea9119f88e3d7e69e\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.702838 kubelet[2140]: I0412 20:20:20.702563 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2bedc281145207ea9119f88e3d7e69e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-3fbc403199\" (UID: \"c2bedc281145207ea9119f88e3d7e69e\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.702838 kubelet[2140]: I0412 20:20:20.702631 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2bedc281145207ea9119f88e3d7e69e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-3fbc403199\" (UID: \"c2bedc281145207ea9119f88e3d7e69e\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.702838 kubelet[2140]: I0412 20:20:20.702692 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.702838 kubelet[2140]: I0412 20:20:20.702794 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.703216 kubelet[2140]: I0412 20:20:20.702869 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.703216 kubelet[2140]: I0412 20:20:20.702997 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.703216 kubelet[2140]: I0412 20:20:20.703093 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.703216 kubelet[2140]: I0412 20:20:20.703150 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c82f4a5b84123c50a950b9ec06b53eb2-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-3fbc403199\" (UID: \"c82f4a5b84123c50a950b9ec06b53eb2\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.810994 kubelet[2140]: I0412 20:20:20.810935 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.811675 kubelet[2140]: E0412 20:20:20.811597 2140 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.89.23:6443/api/v1/nodes\": dial tcp 139.178.89.23:6443: connect: connection refused" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:20.955112 env[1477]: time="2024-04-12T20:20:20.954994126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-3fbc403199,Uid:c2bedc281145207ea9119f88e3d7e69e,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:20.975829 env[1477]: time="2024-04-12T20:20:20.975710731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-3fbc403199,Uid:817f6d4e01418d8a61c32bdea5aeea7b,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:20.988590 env[1477]: time="2024-04-12T20:20:20.988488404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-3fbc403199,Uid:c82f4a5b84123c50a950b9ec06b53eb2,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:21.103856 kubelet[2140]: E0412 20:20:21.103785 2140 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.89.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-3fbc403199?timeout=10s\": dial tcp 139.178.89.23:6443: connect: connection refused" interval="800ms" Apr 12 20:20:21.216517 kubelet[2140]: I0412 20:20:21.216460 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:21.217140 kubelet[2140]: E0412 20:20:21.217064 2140 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.89.23:6443/api/v1/nodes\": dial tcp 139.178.89.23:6443: connect: connection refused" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:21.299164 kubelet[2140]: W0412 20:20:21.299008 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.89.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-3fbc403199&limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:21.299164 kubelet[2140]: E0412 20:20:21.299144 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.89.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-3fbc403199&limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:21.334769 kubelet[2140]: W0412 20:20:21.334617 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:21.334769 kubelet[2140]: E0412 20:20:21.334743 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.89.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:21.381413 kubelet[2140]: W0412 20:20:21.381189 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:21.381413 kubelet[2140]: E0412 20:20:21.381296 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.89.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:21.403591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797684838.mount: Deactivated successfully. Apr 12 20:20:21.404816 env[1477]: time="2024-04-12T20:20:21.404767458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.405859 env[1477]: time="2024-04-12T20:20:21.405815481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.406417 env[1477]: time="2024-04-12T20:20:21.406382263Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.407054 env[1477]: time="2024-04-12T20:20:21.407015318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.407450 env[1477]: time="2024-04-12T20:20:21.407416652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.408727 env[1477]: time="2024-04-12T20:20:21.408678066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.409110 env[1477]: time="2024-04-12T20:20:21.409073425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.410583 env[1477]: time="2024-04-12T20:20:21.410548515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.411856 env[1477]: time="2024-04-12T20:20:21.411810811Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.412280 env[1477]: time="2024-04-12T20:20:21.412219009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.413203 env[1477]: time="2024-04-12T20:20:21.413168596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.413675 env[1477]: time="2024-04-12T20:20:21.413634604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:21.418512 env[1477]: time="2024-04-12T20:20:21.418449553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:21.418512 env[1477]: time="2024-04-12T20:20:21.418470206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:21.418512 env[1477]: time="2024-04-12T20:20:21.418477112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:21.418620 env[1477]: time="2024-04-12T20:20:21.418542207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/516db25ba01664f0f449fa822dd0f3f887142901799fc4e7a9dc611833201724 pid=2190 runtime=io.containerd.runc.v2 Apr 12 20:20:21.419975 env[1477]: time="2024-04-12T20:20:21.419927923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:21.419975 env[1477]: time="2024-04-12T20:20:21.419965251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:21.419975 env[1477]: time="2024-04-12T20:20:21.419972145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:21.420077 env[1477]: time="2024-04-12T20:20:21.420039724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbffad79c79e9aa1ff3db3aaa719d12003f526bbdf53e29571c814aa882e2923 pid=2206 runtime=io.containerd.runc.v2 Apr 12 20:20:21.421304 env[1477]: time="2024-04-12T20:20:21.421250303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:21.421304 env[1477]: time="2024-04-12T20:20:21.421269178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:21.421304 env[1477]: time="2024-04-12T20:20:21.421278058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:21.421421 env[1477]: time="2024-04-12T20:20:21.421347400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/489f6ac5b776443fbec69c738242512c4b58443c71d1442d887c9579c8ff81ce pid=2228 runtime=io.containerd.runc.v2 Apr 12 20:20:21.424642 systemd[1]: Started cri-containerd-516db25ba01664f0f449fa822dd0f3f887142901799fc4e7a9dc611833201724.scope. Apr 12 20:20:21.426368 systemd[1]: Started cri-containerd-bbffad79c79e9aa1ff3db3aaa719d12003f526bbdf53e29571c814aa882e2923.scope. Apr 12 20:20:21.427905 systemd[1]: Started cri-containerd-489f6ac5b776443fbec69c738242512c4b58443c71d1442d887c9579c8ff81ce.scope. Apr 12 20:20:21.448849 env[1477]: time="2024-04-12T20:20:21.448819908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-3fbc403199,Uid:c82f4a5b84123c50a950b9ec06b53eb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbffad79c79e9aa1ff3db3aaa719d12003f526bbdf53e29571c814aa882e2923\"" Apr 12 20:20:21.448849 env[1477]: time="2024-04-12T20:20:21.448826899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-3fbc403199,Uid:c2bedc281145207ea9119f88e3d7e69e,Namespace:kube-system,Attempt:0,} returns sandbox id \"516db25ba01664f0f449fa822dd0f3f887142901799fc4e7a9dc611833201724\"" Apr 12 20:20:21.450198 env[1477]: time="2024-04-12T20:20:21.450175558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-3fbc403199,Uid:817f6d4e01418d8a61c32bdea5aeea7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"489f6ac5b776443fbec69c738242512c4b58443c71d1442d887c9579c8ff81ce\"" Apr 12 20:20:21.451318 env[1477]: time="2024-04-12T20:20:21.451301217Z" level=info msg="CreateContainer within sandbox \"bbffad79c79e9aa1ff3db3aaa719d12003f526bbdf53e29571c814aa882e2923\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 20:20:21.451372 env[1477]: time="2024-04-12T20:20:21.451346620Z" level=info msg="CreateContainer within sandbox \"516db25ba01664f0f449fa822dd0f3f887142901799fc4e7a9dc611833201724\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 20:20:21.452008 env[1477]: time="2024-04-12T20:20:21.451992720Z" level=info msg="CreateContainer within sandbox \"489f6ac5b776443fbec69c738242512c4b58443c71d1442d887c9579c8ff81ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 20:20:21.470223 env[1477]: time="2024-04-12T20:20:21.470178840Z" level=info msg="CreateContainer within sandbox \"516db25ba01664f0f449fa822dd0f3f887142901799fc4e7a9dc611833201724\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c3f2de5fce2b528a3fea3d3d158e13703c07579405cb3f78abb308d2c1f7ae77\"" Apr 12 20:20:21.470457 env[1477]: time="2024-04-12T20:20:21.470408157Z" level=info msg="StartContainer for \"c3f2de5fce2b528a3fea3d3d158e13703c07579405cb3f78abb308d2c1f7ae77\"" Apr 12 20:20:21.471490 env[1477]: time="2024-04-12T20:20:21.471445826Z" level=info msg="CreateContainer within sandbox \"489f6ac5b776443fbec69c738242512c4b58443c71d1442d887c9579c8ff81ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"02d371e1e1d590b2718a5707900ff0d566de2ab7f6f79906fed813361ff04f7b\"" Apr 12 20:20:21.471644 env[1477]: time="2024-04-12T20:20:21.471618329Z" level=info msg="CreateContainer within sandbox \"bbffad79c79e9aa1ff3db3aaa719d12003f526bbdf53e29571c814aa882e2923\" 
for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9a4bb811f7abfd1aa6bd23ee53dbf12dff0e87692a683f68e111abb41f087fa4\"" Apr 12 20:20:21.471644 env[1477]: time="2024-04-12T20:20:21.471640674Z" level=info msg="StartContainer for \"02d371e1e1d590b2718a5707900ff0d566de2ab7f6f79906fed813361ff04f7b\"" Apr 12 20:20:21.471868 env[1477]: time="2024-04-12T20:20:21.471852531Z" level=info msg="StartContainer for \"9a4bb811f7abfd1aa6bd23ee53dbf12dff0e87692a683f68e111abb41f087fa4\"" Apr 12 20:20:21.479307 systemd[1]: Started cri-containerd-02d371e1e1d590b2718a5707900ff0d566de2ab7f6f79906fed813361ff04f7b.scope. Apr 12 20:20:21.479924 systemd[1]: Started cri-containerd-9a4bb811f7abfd1aa6bd23ee53dbf12dff0e87692a683f68e111abb41f087fa4.scope. Apr 12 20:20:21.480554 systemd[1]: Started cri-containerd-c3f2de5fce2b528a3fea3d3d158e13703c07579405cb3f78abb308d2c1f7ae77.scope. Apr 12 20:20:21.502910 env[1477]: time="2024-04-12T20:20:21.502884583Z" level=info msg="StartContainer for \"9a4bb811f7abfd1aa6bd23ee53dbf12dff0e87692a683f68e111abb41f087fa4\" returns successfully" Apr 12 20:20:21.503038 env[1477]: time="2024-04-12T20:20:21.503018896Z" level=info msg="StartContainer for \"c3f2de5fce2b528a3fea3d3d158e13703c07579405cb3f78abb308d2c1f7ae77\" returns successfully" Apr 12 20:20:21.504248 env[1477]: time="2024-04-12T20:20:21.504225577Z" level=info msg="StartContainer for \"02d371e1e1d590b2718a5707900ff0d566de2ab7f6f79906fed813361ff04f7b\" returns successfully" Apr 12 20:20:21.520695 kubelet[2140]: W0412 20:20:21.520620 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.89.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:21.520695 kubelet[2140]: E0412 20:20:21.520678 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.89.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.89.23:6443: connect: connection refused Apr 12 20:20:22.019303 kubelet[2140]: I0412 20:20:22.019260 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:22.221811 kubelet[2140]: E0412 20:20:22.221792 2140 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.3-a-3fbc403199\" not found" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:22.322665 kubelet[2140]: I0412 20:20:22.322608 2140 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:22.343195 kubelet[2140]: E0412 20:20:22.343126 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:22.443324 kubelet[2140]: E0412 20:20:22.443283 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:22.543537 kubelet[2140]: E0412 20:20:22.543439 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:22.643859 kubelet[2140]: E0412 20:20:22.643652 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:22.743962 kubelet[2140]: E0412 20:20:22.743864 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:22.844584 kubelet[2140]: E0412 20:20:22.844478 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:22.945884 kubelet[2140]: E0412 20:20:22.945673 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.046225 kubelet[2140]: E0412 20:20:23.046126 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.147427 kubelet[2140]: E0412 20:20:23.147323 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.248594 kubelet[2140]: E0412 20:20:23.248427 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.349434 kubelet[2140]: E0412 20:20:23.349371 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.450466 kubelet[2140]: E0412 20:20:23.450375 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.551646 kubelet[2140]: E0412 20:20:23.551562 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.652760 kubelet[2140]: E0412 20:20:23.652700 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.753765 kubelet[2140]: E0412 20:20:23.753669 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.854882 kubelet[2140]: E0412 20:20:23.854688 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:23.955670 kubelet[2140]: E0412 20:20:23.955613 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:24.055896 kubelet[2140]: E0412 20:20:24.055799 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:24.157174 kubelet[2140]: E0412 20:20:24.156952 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:24.258185 kubelet[2140]: E0412 20:20:24.258085 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:24.359264 kubelet[2140]: E0412 20:20:24.359143 2140 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-3fbc403199\" not found" Apr 12 20:20:24.490749 kubelet[2140]: I0412 20:20:24.490513 2140 apiserver.go:52] "Watching apiserver" Apr 12 20:20:24.501510 kubelet[2140]: I0412 20:20:24.501419 2140 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 20:20:25.463052 systemd[1]: Reloading. 
Apr 12 20:20:25.490859 /usr/lib/systemd/system-generators/torcx-generator[2470]: time="2024-04-12T20:20:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 20:20:25.490874 /usr/lib/systemd/system-generators/torcx-generator[2470]: time="2024-04-12T20:20:25Z" level=info msg="torcx already run" Apr 12 20:20:25.556603 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 20:20:25.556616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 20:20:25.572848 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 20:20:25.640762 systemd[1]: Stopping kubelet.service... Apr 12 20:20:25.659730 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 20:20:25.659833 systemd[1]: Stopped kubelet.service. Apr 12 20:20:25.660712 systemd[1]: Started kubelet.service. Apr 12 20:20:25.685341 kubelet[2530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 20:20:25.685341 kubelet[2530]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 20:20:25.685341 kubelet[2530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 20:20:25.685563 kubelet[2530]: I0412 20:20:25.685342 2530 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 20:20:25.688318 kubelet[2530]: I0412 20:20:25.688279 2530 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Apr 12 20:20:25.688318 kubelet[2530]: I0412 20:20:25.688291 2530 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 20:20:25.688438 kubelet[2530]: I0412 20:20:25.688404 2530 server.go:895] "Client rotation is on, will bootstrap in background" Apr 12 20:20:25.689312 kubelet[2530]: I0412 20:20:25.689276 2530 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 20:20:25.689878 kubelet[2530]: I0412 20:20:25.689862 2530 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 20:20:25.727638 kubelet[2530]: I0412 20:20:25.727471 2530 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 20:20:25.728023 kubelet[2530]: I0412 20:20:25.727960 2530 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 20:20:25.728495 kubelet[2530]: I0412 20:20:25.728406 2530 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 20:20:25.728495 kubelet[2530]: I0412 20:20:25.728473 2530 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 20:20:25.728495 kubelet[2530]: I0412 20:20:25.728502 2530 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 20:20:25.729186 kubelet[2530]: I0412 20:20:25.728578 2530 state_mem.go:36] "Initialized new in-memory state store" Apr 12 20:20:25.729186 kubelet[2530]: I0412 20:20:25.728762 2530 kubelet.go:393] "Attempting to sync node with API server" Apr 12 20:20:25.729186 kubelet[2530]: I0412 20:20:25.728800 2530 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 20:20:25.729186 kubelet[2530]: I0412 20:20:25.728860 2530 kubelet.go:309] "Adding apiserver pod source" Apr 12 20:20:25.729186 kubelet[2530]: I0412 20:20:25.728915 2530 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 20:20:25.730911 kubelet[2530]: I0412 20:20:25.730850 2530 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 20:20:25.732292 kubelet[2530]: I0412 20:20:25.732216 2530 server.go:1232] "Started kubelet" Apr 12 20:20:25.732812 kubelet[2530]: I0412 20:20:25.732702 2530 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 20:20:25.733052 kubelet[2530]: I0412 20:20:25.732879 2530 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 20:20:25.733551 kubelet[2530]: I0412 20:20:25.733483 2530 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 20:20:25.733551 kubelet[2530]: E0412 20:20:25.733523 2530 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 20:20:25.733872 kubelet[2530]: E0412 20:20:25.733610 2530 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 20:20:25.735753 kubelet[2530]: I0412 20:20:25.735677 2530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 20:20:25.736040 kubelet[2530]: I0412 20:20:25.735964 2530 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 20:20:25.736325 kubelet[2530]: I0412 20:20:25.736060 2530 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 20:20:25.736798 kubelet[2530]: I0412 20:20:25.736739 2530 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 20:20:25.737917 kubelet[2530]: I0412 20:20:25.737855 2530 server.go:462] "Adding debug handlers to kubelet server" Apr 12 20:20:25.753927 kubelet[2530]: I0412 20:20:25.753893 2530 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 20:20:25.755385 kubelet[2530]: I0412 20:20:25.755366 2530 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 20:20:25.755502 kubelet[2530]: I0412 20:20:25.755396 2530 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 20:20:25.755502 kubelet[2530]: I0412 20:20:25.755415 2530 kubelet.go:2303] "Starting kubelet main sync loop" Apr 12 20:20:25.755502 kubelet[2530]: E0412 20:20:25.755461 2530 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 20:20:25.776421 kubelet[2530]: I0412 20:20:25.776373 2530 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 20:20:25.776421 kubelet[2530]: I0412 20:20:25.776386 2530 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 20:20:25.776421 kubelet[2530]: I0412 20:20:25.776395 2530 state_mem.go:36] "Initialized new in-memory state store" Apr 12 20:20:25.776551 kubelet[2530]: I0412 20:20:25.776481 2530 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 20:20:25.776551 kubelet[2530]: I0412 20:20:25.776496 2530 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 20:20:25.776551 kubelet[2530]: I0412 20:20:25.776500 2530 policy_none.go:49] "None policy: Start" Apr 12 20:20:25.776806 kubelet[2530]: I0412 20:20:25.776769 2530 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 20:20:25.776806 kubelet[2530]: I0412 20:20:25.776780 2530 state_mem.go:35] "Initializing new in-memory state store" Apr 12 20:20:25.776874 kubelet[2530]: I0412 20:20:25.776846 2530 state_mem.go:75] "Updated machine memory state" Apr 12 20:20:25.779120 kubelet[2530]: I0412 20:20:25.779083 2530 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 20:20:25.779238 kubelet[2530]: I0412 20:20:25.779225 2530 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 20:20:25.812775 sudo[2571]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 20:20:25.813244 sudo[2571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 20:20:25.839182 kubelet[2530]: I0412 20:20:25.839136 2530 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.845105 kubelet[2530]: I0412 20:20:25.845065 2530 kubelet_node_status.go:108] 
"Node was previously registered" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.845170 kubelet[2530]: I0412 20:20:25.845128 2530 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.856112 kubelet[2530]: I0412 20:20:25.856075 2530 topology_manager.go:215] "Topology Admit Handler" podUID="c2bedc281145207ea9119f88e3d7e69e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.856168 kubelet[2530]: I0412 20:20:25.856134 2530 topology_manager.go:215] "Topology Admit Handler" podUID="817f6d4e01418d8a61c32bdea5aeea7b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.856168 kubelet[2530]: I0412 20:20:25.856156 2530 topology_manager.go:215] "Topology Admit Handler" podUID="c82f4a5b84123c50a950b9ec06b53eb2" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.860219 kubelet[2530]: W0412 20:20:25.860208 2530 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 20:20:25.861394 kubelet[2530]: W0412 20:20:25.861384 2530 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 20:20:25.861498 kubelet[2530]: W0412 20:20:25.861488 2530 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 20:20:25.936929 kubelet[2530]: I0412 20:20:25.936884 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c82f4a5b84123c50a950b9ec06b53eb2-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-3fbc403199\" (UID: \"c82f4a5b84123c50a950b9ec06b53eb2\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.936929 kubelet[2530]: I0412 20:20:25.936907 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2bedc281145207ea9119f88e3d7e69e-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-3fbc403199\" (UID: \"c2bedc281145207ea9119f88e3d7e69e\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.936929 kubelet[2530]: I0412 20:20:25.936920 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2bedc281145207ea9119f88e3d7e69e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-3fbc403199\" (UID: \"c2bedc281145207ea9119f88e3d7e69e\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.936929 kubelet[2530]: I0412 20:20:25.936932 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.937069 kubelet[2530]: I0412 20:20:25.936945 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c2bedc281145207ea9119f88e3d7e69e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-3fbc403199\" (UID: \"c2bedc281145207ea9119f88e3d7e69e\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.937069 kubelet[2530]: I0412 20:20:25.936963 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.937069 kubelet[2530]: I0412 20:20:25.936979 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.937069 kubelet[2530]: I0412 20:20:25.936993 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:25.937069 kubelet[2530]: I0412 20:20:25.937006 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/817f6d4e01418d8a61c32bdea5aeea7b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-3fbc403199\" (UID: \"817f6d4e01418d8a61c32bdea5aeea7b\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:26.158977 sudo[2571]: pam_unix(sudo:session): session closed for user root Apr 12 20:20:26.729616 kubelet[2530]: I0412 20:20:26.729545 2530 apiserver.go:52] "Watching apiserver" Apr 12 20:20:26.737587 kubelet[2530]: I0412 20:20:26.737539 2530 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 20:20:26.766174 kubelet[2530]: W0412 20:20:26.766127 2530 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 20:20:26.766174 kubelet[2530]: E0412 20:20:26.766171 2530 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.3-a-3fbc403199\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:26.766862 kubelet[2530]: W0412 20:20:26.766819 2530 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 20:20:26.766862 kubelet[2530]: E0412 20:20:26.766854 2530 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.3-a-3fbc403199\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.3-a-3fbc403199" Apr 12 20:20:26.784759 kubelet[2530]: I0412 20:20:26.784690 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-3fbc403199" podStartSLOduration=1.784669646 podCreationTimestamp="2024-04-12 20:20:25 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 20:20:26.780443098 +0000 UTC m=+1.117732964" watchObservedRunningTime="2024-04-12 20:20:26.784669646 +0000 UTC m=+1.121959508" Apr 12 20:20:26.788973 kubelet[2530]: I0412 20:20:26.788962 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.3-a-3fbc403199" podStartSLOduration=1.7889461789999999 podCreationTimestamp="2024-04-12 20:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 20:20:26.78473017 +0000 UTC m=+1.122020036" watchObservedRunningTime="2024-04-12 20:20:26.788946179 +0000 UTC m=+1.126236044" Apr 12 20:20:27.094410 sudo[1596]: pam_unix(sudo:session): session closed for user root Apr 12 20:20:27.097123 sshd[1593]: pam_unix(sshd:session): session closed for user core Apr 12 20:20:27.102537 systemd[1]: sshd@4-139.178.89.23:22-147.75.109.163:41276.service: Deactivated successfully. Apr 12 20:20:27.104183 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 20:20:27.104620 systemd[1]: session-7.scope: Consumed 2.806s CPU time. Apr 12 20:20:27.105915 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. Apr 12 20:20:27.108013 systemd-logind[1465]: Removed session 7. Apr 12 20:20:31.163874 kubelet[2530]: I0412 20:20:31.163831 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.3-a-3fbc403199" podStartSLOduration=6.163809422 podCreationTimestamp="2024-04-12 20:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 20:20:26.789053432 +0000 UTC m=+1.126343299" watchObservedRunningTime="2024-04-12 20:20:31.163809422 +0000 UTC m=+5.501099286" Apr 12 20:20:38.369625 kubelet[2530]: I0412 20:20:38.369590 2530 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 20:20:38.370072 env[1477]: time="2024-04-12T20:20:38.370030823Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 20:20:38.370403 kubelet[2530]: I0412 20:20:38.370330 2530 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 20:20:38.438066 kubelet[2530]: I0412 20:20:38.437981 2530 topology_manager.go:215] "Topology Admit Handler" podUID="ea6ed027-a471-42c9-a2d0-7d380f03ead6" podNamespace="kube-system" podName="kube-proxy-qmhdm" Apr 12 20:20:38.447151 kubelet[2530]: I0412 20:20:38.447084 2530 topology_manager.go:215] "Topology Admit Handler" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" podNamespace="kube-system" podName="cilium-9chbz" Apr 12 20:20:38.455956 systemd[1]: Created slice kubepods-besteffort-podea6ed027_a471_42c9_a2d0_7d380f03ead6.slice. Apr 12 20:20:38.474216 systemd[1]: Created slice kubepods-burstable-podb59a9e6f_8fe4_4acc_9eeb_f862fd59b1b8.slice. 
Apr 12 20:20:38.616631 kubelet[2530]: I0412 20:20:38.616534 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea6ed027-a471-42c9-a2d0-7d380f03ead6-kube-proxy\") pod \"kube-proxy-qmhdm\" (UID: \"ea6ed027-a471-42c9-a2d0-7d380f03ead6\") " pod="kube-system/kube-proxy-qmhdm" Apr 12 20:20:38.616631 kubelet[2530]: I0412 20:20:38.616643 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq4vj\" (UniqueName: \"kubernetes.io/projected/ea6ed027-a471-42c9-a2d0-7d380f03ead6-kube-api-access-pq4vj\") pod \"kube-proxy-qmhdm\" (UID: \"ea6ed027-a471-42c9-a2d0-7d380f03ead6\") " pod="kube-system/kube-proxy-qmhdm" Apr 12 20:20:38.617041 kubelet[2530]: I0412 20:20:38.616740 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-clustermesh-secrets\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617041 kubelet[2530]: I0412 20:20:38.616852 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-etc-cni-netd\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617041 kubelet[2530]: I0412 20:20:38.616956 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-run\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617041 kubelet[2530]: I0412 20:20:38.617044 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-bpf-maps\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617565 kubelet[2530]: I0412 20:20:38.617126 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cni-path\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617565 kubelet[2530]: I0412 20:20:38.617292 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sln65\" (UniqueName: \"kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-kube-api-access-sln65\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617565 kubelet[2530]: I0412 20:20:38.617419 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea6ed027-a471-42c9-a2d0-7d380f03ead6-xtables-lock\") pod \"kube-proxy-qmhdm\" (UID: \"ea6ed027-a471-42c9-a2d0-7d380f03ead6\") " pod="kube-system/kube-proxy-qmhdm" Apr 12 20:20:38.617565 kubelet[2530]: I0412 20:20:38.617517 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea6ed027-a471-42c9-a2d0-7d380f03ead6-lib-modules\") pod \"kube-proxy-qmhdm\" (UID: \"ea6ed027-a471-42c9-a2d0-7d380f03ead6\") " pod="kube-system/kube-proxy-qmhdm" Apr 12 20:20:38.617965 kubelet[2530]: I0412 20:20:38.617643 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-config-path\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617965 kubelet[2530]: I0412 20:20:38.617710 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-net\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617965 kubelet[2530]: I0412 20:20:38.617769 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hubble-tls\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.617965 kubelet[2530]: I0412 20:20:38.617877 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hostproc\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.618373 kubelet[2530]: I0412 20:20:38.617976 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-cgroup\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.618373 kubelet[2530]: I0412 20:20:38.618071 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-kernel\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.618373 kubelet[2530]: I0412 20:20:38.618155 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-lib-modules\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.618373 kubelet[2530]: I0412 20:20:38.618259 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-xtables-lock\") pod \"cilium-9chbz\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " pod="kube-system/cilium-9chbz" Apr 12 20:20:38.734002 kubelet[2530]: E0412 20:20:38.733812 2530 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 12 20:20:38.734002 kubelet[2530]: E0412 20:20:38.733883 2530 projected.go:198] Error preparing data for projected volume kube-api-access-sln65 for pod 
kube-system/cilium-9chbz: configmap "kube-root-ca.crt" not found Apr 12 20:20:38.734515 kubelet[2530]: E0412 20:20:38.734023 2530 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-kube-api-access-sln65 podName:b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8 nodeName:}" failed. No retries permitted until 2024-04-12 20:20:39.233971562 +0000 UTC m=+13.571261498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sln65" (UniqueName: "kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-kube-api-access-sln65") pod "cilium-9chbz" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8") : configmap "kube-root-ca.crt" not found Apr 12 20:20:38.734880 kubelet[2530]: E0412 20:20:38.734826 2530 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 12 20:20:38.734880 kubelet[2530]: E0412 20:20:38.734878 2530 projected.go:198] Error preparing data for projected volume kube-api-access-pq4vj for pod kube-system/kube-proxy-qmhdm: configmap "kube-root-ca.crt" not found Apr 12 20:20:38.735253 kubelet[2530]: E0412 20:20:38.735001 2530 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea6ed027-a471-42c9-a2d0-7d380f03ead6-kube-api-access-pq4vj podName:ea6ed027-a471-42c9-a2d0-7d380f03ead6 nodeName:}" failed. No retries permitted until 2024-04-12 20:20:39.23494886 +0000 UTC m=+13.572238806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pq4vj" (UniqueName: "kubernetes.io/projected/ea6ed027-a471-42c9-a2d0-7d380f03ead6-kube-api-access-pq4vj") pod "kube-proxy-qmhdm" (UID: "ea6ed027-a471-42c9-a2d0-7d380f03ead6") : configmap "kube-root-ca.crt" not found Apr 12 20:20:39.373324 env[1477]: time="2024-04-12T20:20:39.373195701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmhdm,Uid:ea6ed027-a471-42c9-a2d0-7d380f03ead6,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:39.377361 env[1477]: time="2024-04-12T20:20:39.377286725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9chbz,Uid:b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:39.390995 env[1477]: time="2024-04-12T20:20:39.390909268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:39.391293 env[1477]: time="2024-04-12T20:20:39.391228694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:39.391293 env[1477]: time="2024-04-12T20:20:39.391270062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:39.391479 kubelet[2530]: I0412 20:20:39.391346 2530 topology_manager.go:215] "Topology Admit Handler" podUID="600200a3-df91-44c9-b282-bab02608e538" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-hql5w" Apr 12 20:20:39.391841 env[1477]: time="2024-04-12T20:20:39.391482469Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e446dc4450afd7f4beaabee9c7390f99a2fe6e7e01f4e1dd69ea2037a8c8fa6 pid=2682 runtime=io.containerd.runc.v2 Apr 12 20:20:39.393259 env[1477]: time="2024-04-12T20:20:39.393176299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:39.393259 env[1477]: time="2024-04-12T20:20:39.393220436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:39.393259 env[1477]: time="2024-04-12T20:20:39.393243452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:39.393449 env[1477]: time="2024-04-12T20:20:39.393414138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc pid=2691 runtime=io.containerd.runc.v2 Apr 12 20:20:39.396447 systemd[1]: Created slice kubepods-besteffort-pod600200a3_df91_44c9_b282_bab02608e538.slice. Apr 12 20:20:39.401553 systemd[1]: Started cri-containerd-7e446dc4450afd7f4beaabee9c7390f99a2fe6e7e01f4e1dd69ea2037a8c8fa6.scope. Apr 12 20:20:39.403432 systemd[1]: Started cri-containerd-33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc.scope. Apr 12 20:20:39.415842 env[1477]: time="2024-04-12T20:20:39.415814477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9chbz,Uid:b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\"" Apr 12 20:20:39.415942 env[1477]: time="2024-04-12T20:20:39.415895153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmhdm,Uid:ea6ed027-a471-42c9-a2d0-7d380f03ead6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e446dc4450afd7f4beaabee9c7390f99a2fe6e7e01f4e1dd69ea2037a8c8fa6\"" Apr 12 20:20:39.416734 env[1477]: time="2024-04-12T20:20:39.416716970Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 20:20:39.417209 env[1477]: time="2024-04-12T20:20:39.417191199Z" level=info msg="CreateContainer within sandbox \"7e446dc4450afd7f4beaabee9c7390f99a2fe6e7e01f4e1dd69ea2037a8c8fa6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 20:20:39.423204 env[1477]: time="2024-04-12T20:20:39.423150889Z" level=info msg="CreateContainer within sandbox \"7e446dc4450afd7f4beaabee9c7390f99a2fe6e7e01f4e1dd69ea2037a8c8fa6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fa8f8c60d15df3501bb8e2b014e940c8e8fd7a636efff56ea0d630427ca1678e\"" Apr 12 20:20:39.423479 env[1477]: time="2024-04-12T20:20:39.423435804Z" level=info msg="StartContainer for \"fa8f8c60d15df3501bb8e2b014e940c8e8fd7a636efff56ea0d630427ca1678e\"" Apr 12 20:20:39.432899 systemd[1]: Started cri-containerd-fa8f8c60d15df3501bb8e2b014e940c8e8fd7a636efff56ea0d630427ca1678e.scope. 
Apr 12 20:20:39.448998 env[1477]: time="2024-04-12T20:20:39.448968573Z" level=info msg="StartContainer for \"fa8f8c60d15df3501bb8e2b014e940c8e8fd7a636efff56ea0d630427ca1678e\" returns successfully" Apr 12 20:20:39.536483 kubelet[2530]: I0412 20:20:39.536411 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/600200a3-df91-44c9-b282-bab02608e538-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-hql5w\" (UID: \"600200a3-df91-44c9-b282-bab02608e538\") " pod="kube-system/cilium-operator-6bc8ccdb58-hql5w" Apr 12 20:20:39.536483 kubelet[2530]: I0412 20:20:39.536486 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4hxt\" (UniqueName: \"kubernetes.io/projected/600200a3-df91-44c9-b282-bab02608e538-kube-api-access-l4hxt\") pod \"cilium-operator-6bc8ccdb58-hql5w\" (UID: \"600200a3-df91-44c9-b282-bab02608e538\") " pod="kube-system/cilium-operator-6bc8ccdb58-hql5w" Apr 12 20:20:39.699875 env[1477]: time="2024-04-12T20:20:39.699654719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-hql5w,Uid:600200a3-df91-44c9-b282-bab02608e538,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:39.726022 env[1477]: time="2024-04-12T20:20:39.725796769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:39.726022 env[1477]: time="2024-04-12T20:20:39.725895476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:39.726022 env[1477]: time="2024-04-12T20:20:39.725933986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:39.726563 env[1477]: time="2024-04-12T20:20:39.726428129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6 pid=2879 runtime=io.containerd.runc.v2 Apr 12 20:20:39.753877 systemd[1]: Started cri-containerd-ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6.scope. Apr 12 20:20:39.781964 env[1477]: time="2024-04-12T20:20:39.781935187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-hql5w,Uid:600200a3-df91-44c9-b282-bab02608e538,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6\"" Apr 12 20:20:39.797180 kubelet[2530]: I0412 20:20:39.797162 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qmhdm" podStartSLOduration=1.7971400389999999 podCreationTimestamp="2024-04-12 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 20:20:39.796802534 +0000 UTC m=+14.134092401" watchObservedRunningTime="2024-04-12 20:20:39.797140039 +0000 UTC m=+14.134429901" Apr 12 20:20:39.978936 update_engine[1467]: I0412 20:20:39.978688 1467 update_attempter.cc:509] Updating boot flags... Apr 12 20:20:42.869576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098503850.mount: Deactivated successfully. 
Apr 12 20:20:44.939347 env[1477]: time="2024-04-12T20:20:44.939269387Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:44.941393 env[1477]: time="2024-04-12T20:20:44.941337059Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:44.944301 env[1477]: time="2024-04-12T20:20:44.944230214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:44.945824 env[1477]: time="2024-04-12T20:20:44.945761253Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 20:20:44.946782 env[1477]: time="2024-04-12T20:20:44.946715974Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 20:20:44.949676 env[1477]: time="2024-04-12T20:20:44.949592671Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 20:20:44.956792 env[1477]: time="2024-04-12T20:20:44.956773870Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\"" Apr 12 20:20:44.957122 env[1477]: time="2024-04-12T20:20:44.957107080Z" level=info msg="StartContainer for \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\"" Apr 12 20:20:44.965880 systemd[1]: Started cri-containerd-2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65.scope. Apr 12 20:20:44.977829 env[1477]: time="2024-04-12T20:20:44.977782409Z" level=info msg="StartContainer for \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\" returns successfully" Apr 12 20:20:44.983533 systemd[1]: cri-containerd-2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65.scope: Deactivated successfully. Apr 12 20:20:45.959460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65-rootfs.mount: Deactivated successfully. 
Apr 12 20:20:46.093987 env[1477]: time="2024-04-12T20:20:46.093851597Z" level=info msg="shim disconnected" id=2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65 Apr 12 20:20:46.093987 env[1477]: time="2024-04-12T20:20:46.093950097Z" level=warning msg="cleaning up after shim disconnected" id=2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65 namespace=k8s.io Apr 12 20:20:46.093987 env[1477]: time="2024-04-12T20:20:46.093979507Z" level=info msg="cleaning up dead shim" Apr 12 20:20:46.109323 env[1477]: time="2024-04-12T20:20:46.109207629Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:20:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3032 runtime=io.containerd.runc.v2\n" Apr 12 20:20:46.696900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478150033.mount: Deactivated successfully. Apr 12 20:20:46.806602 env[1477]: time="2024-04-12T20:20:46.806542947Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 20:20:46.812057 env[1477]: time="2024-04-12T20:20:46.812005829Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\"" Apr 12 20:20:46.812350 env[1477]: time="2024-04-12T20:20:46.812322433Z" level=info msg="StartContainer for \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\"" Apr 12 20:20:46.815173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810148215.mount: Deactivated successfully. Apr 12 20:20:46.822289 systemd[1]: Started cri-containerd-16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b.scope. Apr 12 20:20:46.834711 env[1477]: time="2024-04-12T20:20:46.834682167Z" level=info msg="StartContainer for \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\" returns successfully" Apr 12 20:20:46.841552 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 20:20:46.841751 systemd[1]: Stopped systemd-sysctl.service. Apr 12 20:20:46.841856 systemd[1]: Stopping systemd-sysctl.service... Apr 12 20:20:46.842699 systemd[1]: Starting systemd-sysctl.service... Apr 12 20:20:46.842936 systemd[1]: cri-containerd-16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b.scope: Deactivated successfully. Apr 12 20:20:46.846657 systemd[1]: Finished systemd-sysctl.service. 
Apr 12 20:20:46.878678 env[1477]: time="2024-04-12T20:20:46.878618956Z" level=info msg="shim disconnected" id=16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b Apr 12 20:20:46.878678 env[1477]: time="2024-04-12T20:20:46.878644826Z" level=warning msg="cleaning up after shim disconnected" id=16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b namespace=k8s.io Apr 12 20:20:46.878678 env[1477]: time="2024-04-12T20:20:46.878650668Z" level=info msg="cleaning up dead shim" Apr 12 20:20:46.882065 env[1477]: time="2024-04-12T20:20:46.882049368Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:20:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3094 runtime=io.containerd.runc.v2\n" Apr 12 20:20:47.233057 env[1477]: time="2024-04-12T20:20:47.233008122Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:47.233676 env[1477]: time="2024-04-12T20:20:47.233642029Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:47.234364 env[1477]: time="2024-04-12T20:20:47.234329941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 20:20:47.234675 env[1477]: time="2024-04-12T20:20:47.234625300Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 20:20:47.235956 env[1477]: time="2024-04-12T20:20:47.235925358Z" level=info msg="CreateContainer within sandbox \"ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 20:20:47.240532 env[1477]: time="2024-04-12T20:20:47.240515105Z" level=info msg="CreateContainer within sandbox \"ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\"" Apr 12 20:20:47.240932 env[1477]: time="2024-04-12T20:20:47.240906075Z" level=info msg="StartContainer for \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\"" Apr 12 20:20:47.249529 systemd[1]: Started cri-containerd-f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260.scope. 
Apr 12 20:20:47.261243 env[1477]: time="2024-04-12T20:20:47.261186193Z" level=info msg="StartContainer for \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\" returns successfully" Apr 12 20:20:47.808335 env[1477]: time="2024-04-12T20:20:47.808307162Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 20:20:47.812376 kubelet[2530]: I0412 20:20:47.812362 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-hql5w" podStartSLOduration=1.359987468 podCreationTimestamp="2024-04-12 20:20:39 +0000 UTC" firstStartedPulling="2024-04-12 20:20:39.782519334 +0000 UTC m=+14.119809198" lastFinishedPulling="2024-04-12 20:20:47.234871129 +0000 UTC m=+21.572160995" observedRunningTime="2024-04-12 20:20:47.812032316 +0000 UTC m=+22.149322182" watchObservedRunningTime="2024-04-12 20:20:47.812339265 +0000 UTC m=+22.149629128" Apr 12 20:20:47.815112 env[1477]: time="2024-04-12T20:20:47.815079405Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\"" Apr 12 20:20:47.815380 env[1477]: time="2024-04-12T20:20:47.815354596Z" level=info msg="StartContainer for \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\"" Apr 12 20:20:47.823539 systemd[1]: Started cri-containerd-02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538.scope. Apr 12 20:20:47.840188 systemd[1]: cri-containerd-02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538.scope: Deactivated successfully. 
Apr 12 20:20:47.848528 env[1477]: time="2024-04-12T20:20:47.848500987Z" level=info msg="StartContainer for \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\" returns successfully" Apr 12 20:20:47.989056 env[1477]: time="2024-04-12T20:20:47.988904651Z" level=info msg="shim disconnected" id=02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538 Apr 12 20:20:47.989056 env[1477]: time="2024-04-12T20:20:47.989026604Z" level=warning msg="cleaning up after shim disconnected" id=02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538 namespace=k8s.io Apr 12 20:20:47.989056 env[1477]: time="2024-04-12T20:20:47.989065925Z" level=info msg="cleaning up dead shim" Apr 12 20:20:48.005671 env[1477]: time="2024-04-12T20:20:48.005566608Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:20:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3200 runtime=io.containerd.runc.v2\n" Apr 12 20:20:48.824599 env[1477]: time="2024-04-12T20:20:48.824448958Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 20:20:48.840579 env[1477]: time="2024-04-12T20:20:48.840463450Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\"" Apr 12 20:20:48.841607 env[1477]: time="2024-04-12T20:20:48.841526999Z" level=info msg="StartContainer for \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\"" Apr 12 20:20:48.880754 systemd[1]: Started cri-containerd-8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820.scope. Apr 12 20:20:48.909164 env[1477]: time="2024-04-12T20:20:48.909101760Z" level=info msg="StartContainer for \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\" returns successfully" Apr 12 20:20:48.910778 systemd[1]: cri-containerd-8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820.scope: Deactivated successfully. Apr 12 20:20:48.945721 env[1477]: time="2024-04-12T20:20:48.945647085Z" level=info msg="shim disconnected" id=8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820 Apr 12 20:20:48.945858 env[1477]: time="2024-04-12T20:20:48.945730305Z" level=warning msg="cleaning up after shim disconnected" id=8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820 namespace=k8s.io Apr 12 20:20:48.945858 env[1477]: time="2024-04-12T20:20:48.945752097Z" level=info msg="cleaning up dead shim" Apr 12 20:20:48.951463 env[1477]: time="2024-04-12T20:20:48.951410218Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:20:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3253 runtime=io.containerd.runc.v2\n" Apr 12 20:20:48.957160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820-rootfs.mount: Deactivated successfully. 
Apr 12 20:20:49.830570 env[1477]: time="2024-04-12T20:20:49.830465628Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 20:20:49.848933 env[1477]: time="2024-04-12T20:20:49.848800761Z" level=info msg="CreateContainer within sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\"" Apr 12 20:20:49.849901 env[1477]: time="2024-04-12T20:20:49.849788304Z" level=info msg="StartContainer for \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\"" Apr 12 20:20:49.861808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997186104.mount: Deactivated successfully. Apr 12 20:20:49.876809 systemd[1]: Started cri-containerd-75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722.scope. Apr 12 20:20:49.898292 env[1477]: time="2024-04-12T20:20:49.898254766Z" level=info msg="StartContainer for \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\" returns successfully" Apr 12 20:20:49.968246 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Apr 12 20:20:49.980728 kubelet[2530]: I0412 20:20:49.980713 2530 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Apr 12 20:20:49.991807 kubelet[2530]: I0412 20:20:49.991790 2530 topology_manager.go:215] "Topology Admit Handler" podUID="dd26df9a-36f5-486f-a82a-5a369a3c7b43" podNamespace="kube-system" podName="coredns-5dd5756b68-r6lqr" Apr 12 20:20:49.992345 kubelet[2530]: I0412 20:20:49.992334 2530 topology_manager.go:215] "Topology Admit Handler" podUID="5b5964ba-58f9-4be5-800c-e34d4eca27fa" podNamespace="kube-system" podName="coredns-5dd5756b68-q97jd" Apr 12 20:20:49.995338 systemd[1]: Created slice kubepods-burstable-poddd26df9a_36f5_486f_a82a_5a369a3c7b43.slice. Apr 12 20:20:49.997658 systemd[1]: Created slice kubepods-burstable-pod5b5964ba_58f9_4be5_800c_e34d4eca27fa.slice. 
Apr 12 20:20:50.002882 kubelet[2530]: I0412 20:20:50.002868 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd26df9a-36f5-486f-a82a-5a369a3c7b43-config-volume\") pod \"coredns-5dd5756b68-r6lqr\" (UID: \"dd26df9a-36f5-486f-a82a-5a369a3c7b43\") " pod="kube-system/coredns-5dd5756b68-r6lqr" Apr 12 20:20:50.002966 kubelet[2530]: I0412 20:20:50.002889 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jk6\" (UniqueName: \"kubernetes.io/projected/dd26df9a-36f5-486f-a82a-5a369a3c7b43-kube-api-access-b4jk6\") pod \"coredns-5dd5756b68-r6lqr\" (UID: \"dd26df9a-36f5-486f-a82a-5a369a3c7b43\") " pod="kube-system/coredns-5dd5756b68-r6lqr" Apr 12 20:20:50.002966 kubelet[2530]: I0412 20:20:50.002905 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b5964ba-58f9-4be5-800c-e34d4eca27fa-config-volume\") pod \"coredns-5dd5756b68-q97jd\" (UID: \"5b5964ba-58f9-4be5-800c-e34d4eca27fa\") " pod="kube-system/coredns-5dd5756b68-q97jd" Apr 12 20:20:50.002966 kubelet[2530]: I0412 20:20:50.002920 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggtd\" (UniqueName: \"kubernetes.io/projected/5b5964ba-58f9-4be5-800c-e34d4eca27fa-kube-api-access-7ggtd\") pod \"coredns-5dd5756b68-q97jd\" (UID: \"5b5964ba-58f9-4be5-800c-e34d4eca27fa\") " pod="kube-system/coredns-5dd5756b68-q97jd" Apr 12 20:20:50.120318 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Apr 12 20:20:50.298563 env[1477]: time="2024-04-12T20:20:50.298463449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r6lqr,Uid:dd26df9a-36f5-486f-a82a-5a369a3c7b43,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:50.300535 env[1477]: time="2024-04-12T20:20:50.300454064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q97jd,Uid:5b5964ba-58f9-4be5-800c-e34d4eca27fa,Namespace:kube-system,Attempt:0,}" Apr 12 20:20:50.869783 kubelet[2530]: I0412 20:20:50.869717 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9chbz" podStartSLOduration=7.339795559 podCreationTimestamp="2024-04-12 20:20:38 +0000 UTC" firstStartedPulling="2024-04-12 20:20:39.416473364 +0000 UTC m=+13.753763231" lastFinishedPulling="2024-04-12 20:20:44.946301517 +0000 UTC m=+19.283591424" observedRunningTime="2024-04-12 20:20:50.868675378 +0000 UTC m=+25.205965323" watchObservedRunningTime="2024-04-12 20:20:50.869623752 +0000 UTC m=+25.206913667" Apr 12 20:20:51.712208 systemd-networkd[1307]: cilium_host: Link UP Apr 12 20:20:51.712299 systemd-networkd[1307]: cilium_net: Link UP Apr 12 20:20:51.712301 systemd-networkd[1307]: cilium_net: Gained carrier Apr 12 20:20:51.712510 systemd-networkd[1307]: cilium_host: Gained carrier Apr 12 20:20:51.720247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 20:20:51.720485 systemd-networkd[1307]: cilium_host: Gained IPv6LL Apr 12 20:20:51.762266 systemd-networkd[1307]: cilium_vxlan: Link UP Apr 12 20:20:51.762269 systemd-networkd[1307]: cilium_vxlan: Gained carrier Apr 12 20:20:51.895298 kernel: NET: Registered PF_ALG protocol family Apr 12 20:20:52.262532 systemd-networkd[1307]: cilium_net: Gained IPv6LL Apr 12 
20:20:52.565669 systemd-networkd[1307]: lxc_health: Link UP Apr 12 20:20:52.593149 systemd-networkd[1307]: lxc_health: Gained carrier Apr 12 20:20:52.593294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 20:20:52.864290 kernel: eth0: renamed from tmp2d906 Apr 12 20:20:52.873421 systemd-networkd[1307]: lxc9b5cd11db0f3: Link UP Apr 12 20:20:52.888160 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 20:20:52.888208 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9b5cd11db0f3: link becomes ready Apr 12 20:20:52.888167 systemd-networkd[1307]: lxc9b5cd11db0f3: Gained carrier Apr 12 20:20:52.888306 systemd-networkd[1307]: lxc23874adb2f6c: Link UP Apr 12 20:20:52.908320 kernel: eth0: renamed from tmpee2d1 Apr 12 20:20:52.925284 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc23874adb2f6c: link becomes ready Apr 12 20:20:52.925385 systemd-networkd[1307]: lxc23874adb2f6c: Gained carrier Apr 12 20:20:53.222379 systemd-networkd[1307]: cilium_vxlan: Gained IPv6LL Apr 12 20:20:53.798364 systemd-networkd[1307]: lxc_health: Gained IPv6LL Apr 12 20:20:54.630370 systemd-networkd[1307]: lxc23874adb2f6c: Gained IPv6LL Apr 12 20:20:54.951393 systemd-networkd[1307]: lxc9b5cd11db0f3: Gained IPv6LL Apr 12 20:20:55.185040 env[1477]: time="2024-04-12T20:20:55.185006610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:55.185040 env[1477]: time="2024-04-12T20:20:55.185027378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:55.185040 env[1477]: time="2024-04-12T20:20:55.185034223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:55.185307 env[1477]: time="2024-04-12T20:20:55.185096725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d90680f5c37acf1d66b1af833637b9edd4bd0a259444baefcd9b701f7b1893a pid=3935 runtime=io.containerd.runc.v2 Apr 12 20:20:55.185307 env[1477]: time="2024-04-12T20:20:55.185251480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:20:55.185307 env[1477]: time="2024-04-12T20:20:55.185269661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:20:55.185307 env[1477]: time="2024-04-12T20:20:55.185276776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:20:55.185386 env[1477]: time="2024-04-12T20:20:55.185341175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee2d15e30adf6883d581ef2a256c39ebf7e752ea4844fd709ecedded43060cbb pid=3936 runtime=io.containerd.runc.v2 Apr 12 20:20:55.193961 systemd[1]: Started cri-containerd-2d90680f5c37acf1d66b1af833637b9edd4bd0a259444baefcd9b701f7b1893a.scope. Apr 12 20:20:55.194574 systemd[1]: Started cri-containerd-ee2d15e30adf6883d581ef2a256c39ebf7e752ea4844fd709ecedded43060cbb.scope. 
Apr 12 20:20:55.215016 env[1477]: time="2024-04-12T20:20:55.214945792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r6lqr,Uid:dd26df9a-36f5-486f-a82a-5a369a3c7b43,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d90680f5c37acf1d66b1af833637b9edd4bd0a259444baefcd9b701f7b1893a\"" Apr 12 20:20:55.216013 env[1477]: time="2024-04-12T20:20:55.215996140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q97jd,Uid:5b5964ba-58f9-4be5-800c-e34d4eca27fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee2d15e30adf6883d581ef2a256c39ebf7e752ea4844fd709ecedded43060cbb\"" Apr 12 20:20:55.216279 env[1477]: time="2024-04-12T20:20:55.216265766Z" level=info msg="CreateContainer within sandbox \"2d90680f5c37acf1d66b1af833637b9edd4bd0a259444baefcd9b701f7b1893a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 20:20:55.217318 env[1477]: time="2024-04-12T20:20:55.217293089Z" level=info msg="CreateContainer within sandbox \"ee2d15e30adf6883d581ef2a256c39ebf7e752ea4844fd709ecedded43060cbb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 20:20:55.222182 env[1477]: time="2024-04-12T20:20:55.222140464Z" level=info msg="CreateContainer within sandbox \"2d90680f5c37acf1d66b1af833637b9edd4bd0a259444baefcd9b701f7b1893a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"888316c14ba16388ee4822ce90098c67f9c44f3e08b571baf781b76ef0692db3\"" Apr 12 20:20:55.222443 env[1477]: time="2024-04-12T20:20:55.222389731Z" level=info msg="StartContainer for \"888316c14ba16388ee4822ce90098c67f9c44f3e08b571baf781b76ef0692db3\"" Apr 12 20:20:55.223285 env[1477]: time="2024-04-12T20:20:55.223269427Z" level=info msg="CreateContainer within sandbox \"ee2d15e30adf6883d581ef2a256c39ebf7e752ea4844fd709ecedded43060cbb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32dacc508f985a86e982ed61f890c885bdae27994a6197938e4af37755cdbf10\"" Apr 12 20:20:55.223468 env[1477]: time="2024-04-12T20:20:55.223448665Z" level=info msg="StartContainer for \"32dacc508f985a86e982ed61f890c885bdae27994a6197938e4af37755cdbf10\"" Apr 12 20:20:55.237614 systemd[1]: Started cri-containerd-888316c14ba16388ee4822ce90098c67f9c44f3e08b571baf781b76ef0692db3.scope. Apr 12 20:20:55.239025 systemd[1]: Started cri-containerd-32dacc508f985a86e982ed61f890c885bdae27994a6197938e4af37755cdbf10.scope. 
Apr 12 20:20:55.250907 env[1477]: time="2024-04-12T20:20:55.250882356Z" level=info msg="StartContainer for \"888316c14ba16388ee4822ce90098c67f9c44f3e08b571baf781b76ef0692db3\" returns successfully" Apr 12 20:20:55.252540 env[1477]: time="2024-04-12T20:20:55.252520302Z" level=info msg="StartContainer for \"32dacc508f985a86e982ed61f890c885bdae27994a6197938e4af37755cdbf10\" returns successfully" Apr 12 20:20:55.849512 kubelet[2530]: I0412 20:20:55.849491 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q97jd" podStartSLOduration=16.849461094 podCreationTimestamp="2024-04-12 20:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 20:20:55.848997632 +0000 UTC m=+30.186287500" watchObservedRunningTime="2024-04-12 20:20:55.849461094 +0000 UTC m=+30.186750961" Apr 12 20:20:55.854540 kubelet[2530]: I0412 20:20:55.854520 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-r6lqr" podStartSLOduration=16.854486453 podCreationTimestamp="2024-04-12 20:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 20:20:55.854119276 +0000 UTC m=+30.191409143" watchObservedRunningTime="2024-04-12 20:20:55.854486453 +0000 UTC m=+30.191776319" Apr 12 20:21:02.385444 kubelet[2530]: I0412 20:21:02.385335 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 12 20:26:00.196657 systemd[1]: Started sshd@5-139.178.89.23:22-147.75.109.163:53384.service. Apr 12 20:26:00.226597 sshd[4145]: Accepted publickey for core from 147.75.109.163 port 53384 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:00.227541 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:00.230679 systemd-logind[1465]: New session 8 of user core. Apr 12 20:26:00.231394 systemd[1]: Started session-8.scope. Apr 12 20:26:00.324213 sshd[4145]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:00.325660 systemd[1]: sshd@5-139.178.89.23:22-147.75.109.163:53384.service: Deactivated successfully. Apr 12 20:26:00.326101 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 20:26:00.326406 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. Apr 12 20:26:00.326801 systemd-logind[1465]: Removed session 8. Apr 12 20:26:05.333245 systemd[1]: Started sshd@6-139.178.89.23:22-147.75.109.163:53386.service. Apr 12 20:26:05.362982 sshd[4172]: Accepted publickey for core from 147.75.109.163 port 53386 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:05.363689 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:05.365878 systemd-logind[1465]: New session 9 of user core. Apr 12 20:26:05.366405 systemd[1]: Started session-9.scope. Apr 12 20:26:05.451560 sshd[4172]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:05.453022 systemd[1]: sshd@6-139.178.89.23:22-147.75.109.163:53386.service: Deactivated successfully. Apr 12 20:26:05.453473 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 20:26:05.453890 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit. Apr 12 20:26:05.454450 systemd-logind[1465]: Removed session 9. 
Apr 12 20:26:10.461497 systemd[1]: Started sshd@7-139.178.89.23:22-147.75.109.163:43744.service. Apr 12 20:26:10.491292 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 43744 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:10.492210 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:10.495087 systemd-logind[1465]: New session 10 of user core. Apr 12 20:26:10.495766 systemd[1]: Started session-10.scope. Apr 12 20:26:10.584228 sshd[4201]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:10.585713 systemd[1]: sshd@7-139.178.89.23:22-147.75.109.163:43744.service: Deactivated successfully. Apr 12 20:26:10.586127 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 20:26:10.586517 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit. Apr 12 20:26:10.587083 systemd-logind[1465]: Removed session 10. Apr 12 20:26:15.590576 systemd[1]: Started sshd@8-139.178.89.23:22-147.75.109.163:43756.service. Apr 12 20:26:15.622500 sshd[4228]: Accepted publickey for core from 147.75.109.163 port 43756 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:15.623252 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:15.625744 systemd-logind[1465]: New session 11 of user core. Apr 12 20:26:15.626329 systemd[1]: Started session-11.scope. Apr 12 20:26:15.715354 sshd[4228]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:15.716868 systemd[1]: sshd@8-139.178.89.23:22-147.75.109.163:43756.service: Deactivated successfully. Apr 12 20:26:15.717284 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 20:26:15.717701 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit. Apr 12 20:26:15.718174 systemd-logind[1465]: Removed session 11. Apr 12 20:26:18.982463 update_engine[1467]: I0412 20:26:18.982412 1467 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 12 20:26:18.982463 update_engine[1467]: I0412 20:26:18.982434 1467 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 12 20:26:18.983529 update_engine[1467]: I0412 20:26:18.983492 1467 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 12 20:26:18.983714 update_engine[1467]: I0412 20:26:18.983678 1467 omaha_request_params.cc:62] Current group set to lts Apr 12 20:26:18.983752 update_engine[1467]: I0412 20:26:18.983745 1467 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 12 20:26:18.983752 update_engine[1467]: I0412 20:26:18.983748 1467 update_attempter.cc:643] Scheduling an action processor start. 
Apr 12 20:26:18.983792 update_engine[1467]: I0412 20:26:18.983757 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 12 20:26:18.983792 update_engine[1467]: I0412 20:26:18.983772 1467 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 12 20:26:18.983830 update_engine[1467]: I0412 20:26:18.983798 1467 omaha_request_action.cc:270] Posting an Omaha request to disabled
Apr 12 20:26:18.983830 update_engine[1467]: I0412 20:26:18.983803 1467 omaha_request_action.cc:271] Request:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]:
Apr 12 20:26:18.983830 update_engine[1467]: I0412 20:26:18.983804 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 12 20:26:18.984048 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 12 20:26:18.984492 update_engine[1467]: I0412 20:26:18.984455 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 12 20:26:18.984536 update_engine[1467]: E0412 20:26:18.984511 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 12 20:26:18.984562 update_engine[1467]: I0412 20:26:18.984546 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 12 20:26:20.723866 systemd[1]: Started sshd@9-139.178.89.23:22-147.75.109.163:40828.service.
Apr 12 20:26:20.753120 sshd[4254]: Accepted publickey for core from 147.75.109.163 port 40828 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg
Apr 12 20:26:20.753954 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 20:26:20.756900 systemd-logind[1465]: New session 12 of user core.
Apr 12 20:26:20.757430 systemd[1]: Started session-12.scope.
Apr 12 20:26:20.882470 sshd[4254]: pam_unix(sshd:session): session closed for user core
Apr 12 20:26:20.884331 systemd[1]: sshd@9-139.178.89.23:22-147.75.109.163:40828.service: Deactivated successfully.
Apr 12 20:26:20.884676 systemd[1]: session-12.scope: Deactivated successfully.
Apr 12 20:26:20.885057 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit.
Apr 12 20:26:20.885645 systemd[1]: Started sshd@10-139.178.89.23:22-147.75.109.163:40838.service.
Apr 12 20:26:20.886089 systemd-logind[1465]: Removed session 12.
Apr 12 20:26:20.914175 sshd[4280]: Accepted publickey for core from 147.75.109.163 port 40838 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg
Apr 12 20:26:20.914919 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 20:26:20.917665 systemd-logind[1465]: New session 13 of user core.
Apr 12 20:26:20.918174 systemd[1]: Started session-13.scope.
Apr 12 20:26:21.303649 sshd[4280]: pam_unix(sshd:session): session closed for user core
Apr 12 20:26:21.305378 systemd[1]: sshd@10-139.178.89.23:22-147.75.109.163:40838.service: Deactivated successfully.
Apr 12 20:26:21.305740 systemd[1]: session-13.scope: Deactivated successfully.
Apr 12 20:26:21.306048 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit.
Apr 12 20:26:21.306705 systemd[1]: Started sshd@11-139.178.89.23:22-147.75.109.163:40852.service.
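The update_engine entries above (and the "retry 2" and "retry 3" entries later in this excerpt, spaced roughly 10 s apart) show an Omaha update check against a server literally configured as "disabled", so hostname resolution fails and the fetch is retried a bounded number of times while locksmithd reports UPDATE_STATUS_CHECKING_FOR_UPDATE. The Go sketch below mirrors only that retry shape; it is not the actual C++ libcurl_http_fetcher implementation, and the URL path, retry count, and interval are illustrative.

// Illustrative retry loop mirroring the "No HTTP response, retry N" entries
// above: an update check against an unresolvable host, retried a bounded
// number of times roughly 10 s apart. Not the real update_engine code.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func checkForUpdate(url string, maxRetries int, wait time.Duration) error {
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // got an HTTP response; a real client would parse the Omaha XML here
		}
		lastErr = err
		// Matches the "No HTTP response, retry N" lines in the journal.
		fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("update check failed after %d attempts: %w", maxRetries, lastErr)
}

func main() {
	// On this host the update server is set to "disabled", so name resolution
	// fails every time, exactly as in the journal; the path here is made up.
	if err := checkForUpdate("https://disabled/", 3, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}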
Apr 12 20:26:21.307131 systemd-logind[1465]: Removed session 13. Apr 12 20:26:21.335719 sshd[4305]: Accepted publickey for core from 147.75.109.163 port 40852 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:21.336468 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:21.338935 systemd-logind[1465]: New session 14 of user core. Apr 12 20:26:21.339426 systemd[1]: Started session-14.scope. Apr 12 20:26:21.448147 sshd[4305]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:21.449683 systemd[1]: sshd@11-139.178.89.23:22-147.75.109.163:40852.service: Deactivated successfully. Apr 12 20:26:21.450119 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 20:26:21.450545 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit. Apr 12 20:26:21.451032 systemd-logind[1465]: Removed session 14. Apr 12 20:26:26.457739 systemd[1]: Started sshd@12-139.178.89.23:22-147.75.109.163:40866.service. Apr 12 20:26:26.487666 sshd[4334]: Accepted publickey for core from 147.75.109.163 port 40866 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:26.488570 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:26.491586 systemd-logind[1465]: New session 15 of user core. Apr 12 20:26:26.492195 systemd[1]: Started session-15.scope. Apr 12 20:26:26.573747 sshd[4334]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:26.575548 systemd[1]: sshd@12-139.178.89.23:22-147.75.109.163:40866.service: Deactivated successfully. Apr 12 20:26:26.575917 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 20:26:26.576306 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit. Apr 12 20:26:26.576883 systemd[1]: Started sshd@13-139.178.89.23:22-147.75.109.163:40882.service. Apr 12 20:26:26.577274 systemd-logind[1465]: Removed session 15. Apr 12 20:26:26.606139 sshd[4359]: Accepted publickey for core from 147.75.109.163 port 40882 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:26.606873 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:26.609430 systemd-logind[1465]: New session 16 of user core. Apr 12 20:26:26.609984 systemd[1]: Started session-16.scope. Apr 12 20:26:26.714407 sshd[4359]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:26.716350 systemd[1]: sshd@13-139.178.89.23:22-147.75.109.163:40882.service: Deactivated successfully. Apr 12 20:26:26.716727 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 20:26:26.717086 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit. Apr 12 20:26:26.717734 systemd[1]: Started sshd@14-139.178.89.23:22-147.75.109.163:40888.service. Apr 12 20:26:26.718153 systemd-logind[1465]: Removed session 16. Apr 12 20:26:26.807819 sshd[4380]: Accepted publickey for core from 147.75.109.163 port 40888 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:26.810343 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:26.818097 systemd-logind[1465]: New session 17 of user core. Apr 12 20:26:26.819954 systemd[1]: Started session-17.scope. Apr 12 20:26:27.721526 sshd[4380]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:27.731478 systemd[1]: sshd@14-139.178.89.23:22-147.75.109.163:40888.service: Deactivated successfully. 
Apr 12 20:26:27.734271 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 20:26:27.736666 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit. Apr 12 20:26:27.740635 systemd[1]: Started sshd@15-139.178.89.23:22-147.75.109.163:47668.service. Apr 12 20:26:27.743132 systemd-logind[1465]: Removed session 17. Apr 12 20:26:27.799415 sshd[4411]: Accepted publickey for core from 147.75.109.163 port 47668 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:27.802915 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:27.813653 systemd-logind[1465]: New session 18 of user core. Apr 12 20:26:27.816602 systemd[1]: Started session-18.scope. Apr 12 20:26:28.052697 sshd[4411]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:28.054444 systemd[1]: sshd@15-139.178.89.23:22-147.75.109.163:47668.service: Deactivated successfully. Apr 12 20:26:28.054773 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 20:26:28.055115 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit. Apr 12 20:26:28.055733 systemd[1]: Started sshd@16-139.178.89.23:22-147.75.109.163:47684.service. Apr 12 20:26:28.056245 systemd-logind[1465]: Removed session 18. Apr 12 20:26:28.084311 sshd[4434]: Accepted publickey for core from 147.75.109.163 port 47684 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:28.085073 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:28.087413 systemd-logind[1465]: New session 19 of user core. Apr 12 20:26:28.087942 systemd[1]: Started session-19.scope. Apr 12 20:26:28.211631 sshd[4434]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:28.213029 systemd[1]: sshd@16-139.178.89.23:22-147.75.109.163:47684.service: Deactivated successfully. Apr 12 20:26:28.213464 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 20:26:28.213843 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit. Apr 12 20:26:28.214328 systemd-logind[1465]: Removed session 19. Apr 12 20:26:28.979286 update_engine[1467]: I0412 20:26:28.979173 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 12 20:26:28.980428 update_engine[1467]: I0412 20:26:28.979780 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 12 20:26:28.980428 update_engine[1467]: E0412 20:26:28.980048 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 12 20:26:28.980428 update_engine[1467]: I0412 20:26:28.980329 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 12 20:26:33.221423 systemd[1]: Started sshd@17-139.178.89.23:22-147.75.109.163:47700.service. Apr 12 20:26:33.249979 sshd[4464]: Accepted publickey for core from 147.75.109.163 port 47700 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:33.250854 sshd[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:33.253782 systemd-logind[1465]: New session 20 of user core. Apr 12 20:26:33.254401 systemd[1]: Started session-20.scope. Apr 12 20:26:33.337786 sshd[4464]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:33.339267 systemd[1]: sshd@17-139.178.89.23:22-147.75.109.163:47700.service: Deactivated successfully. Apr 12 20:26:33.339683 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 20:26:33.340043 systemd-logind[1465]: Session 20 logged out. 
Waiting for processes to exit. Apr 12 20:26:33.340576 systemd-logind[1465]: Removed session 20. Apr 12 20:26:38.346650 systemd[1]: Started sshd@18-139.178.89.23:22-147.75.109.163:47110.service. Apr 12 20:26:38.375223 sshd[4489]: Accepted publickey for core from 147.75.109.163 port 47110 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:38.376011 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:38.378911 systemd-logind[1465]: New session 21 of user core. Apr 12 20:26:38.379491 systemd[1]: Started session-21.scope. Apr 12 20:26:38.466884 sshd[4489]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:38.468364 systemd[1]: sshd@18-139.178.89.23:22-147.75.109.163:47110.service: Deactivated successfully. Apr 12 20:26:38.468776 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 20:26:38.469133 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit. Apr 12 20:26:38.469782 systemd-logind[1465]: Removed session 21. Apr 12 20:26:38.979313 update_engine[1467]: I0412 20:26:38.979214 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 12 20:26:38.980535 update_engine[1467]: I0412 20:26:38.979801 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 12 20:26:38.980535 update_engine[1467]: E0412 20:26:38.980067 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 12 20:26:38.980535 update_engine[1467]: I0412 20:26:38.980348 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 12 20:26:43.476538 systemd[1]: Started sshd@19-139.178.89.23:22-147.75.109.163:47126.service. Apr 12 20:26:43.505640 sshd[4516]: Accepted publickey for core from 147.75.109.163 port 47126 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:43.506484 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:43.509321 systemd-logind[1465]: New session 22 of user core. Apr 12 20:26:43.509994 systemd[1]: Started session-22.scope. Apr 12 20:26:43.597224 sshd[4516]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:43.598911 systemd[1]: sshd@19-139.178.89.23:22-147.75.109.163:47126.service: Deactivated successfully. Apr 12 20:26:43.599275 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 20:26:43.599595 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit. Apr 12 20:26:43.600298 systemd[1]: Started sshd@20-139.178.89.23:22-147.75.109.163:47136.service. Apr 12 20:26:43.600837 systemd-logind[1465]: Removed session 22. Apr 12 20:26:43.628662 sshd[4541]: Accepted publickey for core from 147.75.109.163 port 47136 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:43.629456 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:43.632005 systemd-logind[1465]: New session 23 of user core. Apr 12 20:26:43.632642 systemd[1]: Started session-23.scope. 
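Sessions 8 through 23 above all follow the same shape: systemd starts a per-connection sshd@… service, sshd accepts the publickey and opens a PAM session, systemd-logind allocates session N and its session-N.scope, and on logout the scope and service are deactivated and the session is removed. The sketch below pairs the "New session" and "Removed session" logind lines to measure how long each session lasted; the regular expressions cover only the exact line shapes seen in this excerpt, and the two sample lines are copied from session 10 above.

// Pairs "New session N" / "Removed session N" logind entries like the ones
// above and reports how long each SSH session lasted.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var (
	newRe     = regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.$`)
	removedRe = regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.$`)
)

// parseStamp parses the journal's "Apr 12 20:26:10.495087" prefix. The year is
// absent in the log, so the result is only useful for same-day differences.
func parseStamp(s string) time.Time {
	t, err := time.Parse("Jan 2 15:04:05.000000", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	lines := []string{ // copied from the session-10 entries above
		"Apr 12 20:26:10.495087 systemd-logind[1465]: New session 10 of user core.",
		"Apr 12 20:26:10.587083 systemd-logind[1465]: Removed session 10.",
	}

	opened := map[string]time.Time{} // session id -> open time
	for _, l := range lines {
		if m := newRe.FindStringSubmatch(l); m != nil {
			opened[m[2]] = parseStamp(m[1])
		} else if m := removedRe.FindStringSubmatch(l); m != nil {
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("session %s lasted %v\n", m[2], parseStamp(m[1]).Sub(start))
			}
		}
	}
}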
Apr 12 20:26:44.971918 env[1477]: time="2024-04-12T20:26:44.971763847Z" level=info msg="StopContainer for \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\" with timeout 30 (s)" Apr 12 20:26:44.973037 env[1477]: time="2024-04-12T20:26:44.972594917Z" level=info msg="Stop container \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\" with signal terminated" Apr 12 20:26:44.991462 systemd[1]: cri-containerd-f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260.scope: Deactivated successfully. Apr 12 20:26:45.003299 env[1477]: time="2024-04-12T20:26:45.003255329Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 20:26:45.005802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260-rootfs.mount: Deactivated successfully. Apr 12 20:26:45.007857 env[1477]: time="2024-04-12T20:26:45.007837311Z" level=info msg="StopContainer for \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\" with timeout 2 (s)" Apr 12 20:26:45.007973 env[1477]: time="2024-04-12T20:26:45.007957848Z" level=info msg="Stop container \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\" with signal terminated" Apr 12 20:26:45.011624 systemd-networkd[1307]: lxc_health: Link DOWN Apr 12 20:26:45.011629 systemd-networkd[1307]: lxc_health: Lost carrier Apr 12 20:26:45.021292 env[1477]: time="2024-04-12T20:26:45.021264909Z" level=info msg="shim disconnected" id=f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260 Apr 12 20:26:45.021390 env[1477]: time="2024-04-12T20:26:45.021294651Z" level=warning msg="cleaning up after shim disconnected" id=f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260 namespace=k8s.io Apr 12 20:26:45.021390 env[1477]: time="2024-04-12T20:26:45.021303151Z" level=info msg="cleaning up dead shim" Apr 12 20:26:45.025953 env[1477]: time="2024-04-12T20:26:45.025930688Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4611 runtime=io.containerd.runc.v2\n" Apr 12 20:26:45.026776 env[1477]: time="2024-04-12T20:26:45.026729337Z" level=info msg="StopContainer for \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\" returns successfully" Apr 12 20:26:45.027148 env[1477]: time="2024-04-12T20:26:45.027131046Z" level=info msg="StopPodSandbox for \"ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6\"" Apr 12 20:26:45.027194 env[1477]: time="2024-04-12T20:26:45.027170446Z" level=info msg="Container to stop \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 20:26:45.028610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6-shm.mount: Deactivated successfully. Apr 12 20:26:45.031209 systemd[1]: cri-containerd-ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6.scope: Deactivated successfully. Apr 12 20:26:45.042982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6-rootfs.mount: Deactivated successfully. 
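The teardown above runs the earlier start sequence in reverse: kubelet asks the runtime to StopContainer with a grace timeout (30 s for the cilium-operator container, 2 s for the agent), the cri-containerd-<id>.scope unit is deactivated once the shim exits, and StopPodSandbox tears down the sandbox and its network. A minimal sketch of those two CRI calls follows, under the same assumptions as the earlier snippet (socket path illustrative, IDs copied from the log).

// Sketch of the StopContainer -> StopPodSandbox sequence from the teardown
// above, using the v1 CRI gRPC API. IDs are taken from the log; the socket
// path is an assumption.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	const (
		operatorContainer = "f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260"
		operatorSandbox   = "ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6"
	)

	// "StopContainer ... with timeout 30 (s)": SIGTERM first, SIGKILL after 30 s.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: operatorContainer,
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}

	// "StopPodSandbox ... TearDown network for sandbox ... returns successfully"
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: operatorSandbox,
	}); err != nil {
		log.Fatal(err)
	}
	log.Println("operator container and sandbox stopped")
}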
Apr 12 20:26:45.043603 env[1477]: time="2024-04-12T20:26:45.043562310Z" level=info msg="shim disconnected" id=ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6 Apr 12 20:26:45.043683 env[1477]: time="2024-04-12T20:26:45.043605903Z" level=warning msg="cleaning up after shim disconnected" id=ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6 namespace=k8s.io Apr 12 20:26:45.043683 env[1477]: time="2024-04-12T20:26:45.043616126Z" level=info msg="cleaning up dead shim" Apr 12 20:26:45.048254 env[1477]: time="2024-04-12T20:26:45.048220713Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4641 runtime=io.containerd.runc.v2\n" Apr 12 20:26:45.048477 env[1477]: time="2024-04-12T20:26:45.048434655Z" level=info msg="TearDown network for sandbox \"ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6\" successfully" Apr 12 20:26:45.048477 env[1477]: time="2024-04-12T20:26:45.048452034Z" level=info msg="StopPodSandbox for \"ac2b5465f8d4b44c0f78501afea8fff1ba4dcad021cf43e41ecc707175f6aed6\" returns successfully" Apr 12 20:26:45.079718 systemd[1]: cri-containerd-75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722.scope: Deactivated successfully. Apr 12 20:26:45.080019 systemd[1]: cri-containerd-75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722.scope: Consumed 6.396s CPU time. Apr 12 20:26:45.107618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722-rootfs.mount: Deactivated successfully. Apr 12 20:26:45.119645 env[1477]: time="2024-04-12T20:26:45.119528183Z" level=info msg="shim disconnected" id=75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722 Apr 12 20:26:45.119645 env[1477]: time="2024-04-12T20:26:45.119611256Z" level=warning msg="cleaning up after shim disconnected" id=75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722 namespace=k8s.io Apr 12 20:26:45.119645 env[1477]: time="2024-04-12T20:26:45.119634934Z" level=info msg="cleaning up dead shim" Apr 12 20:26:45.132692 env[1477]: time="2024-04-12T20:26:45.132625444Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4668 runtime=io.containerd.runc.v2\n" Apr 12 20:26:45.134535 env[1477]: time="2024-04-12T20:26:45.134474362Z" level=info msg="StopContainer for \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\" returns successfully" Apr 12 20:26:45.135343 env[1477]: time="2024-04-12T20:26:45.135286898Z" level=info msg="StopPodSandbox for \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\"" Apr 12 20:26:45.135496 env[1477]: time="2024-04-12T20:26:45.135398262Z" level=info msg="Container to stop \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 20:26:45.135496 env[1477]: time="2024-04-12T20:26:45.135432344Z" level=info msg="Container to stop \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 20:26:45.135496 env[1477]: time="2024-04-12T20:26:45.135457921Z" level=info msg="Container to stop \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 20:26:45.135496 env[1477]: time="2024-04-12T20:26:45.135482650Z" 
level=info msg="Container to stop \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 20:26:45.135912 env[1477]: time="2024-04-12T20:26:45.135507611Z" level=info msg="Container to stop \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 20:26:45.146119 systemd[1]: cri-containerd-33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc.scope: Deactivated successfully. Apr 12 20:26:45.151658 kubelet[2530]: I0412 20:26:45.151603 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4hxt\" (UniqueName: \"kubernetes.io/projected/600200a3-df91-44c9-b282-bab02608e538-kube-api-access-l4hxt\") pod \"600200a3-df91-44c9-b282-bab02608e538\" (UID: \"600200a3-df91-44c9-b282-bab02608e538\") " Apr 12 20:26:45.152352 kubelet[2530]: I0412 20:26:45.151729 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/600200a3-df91-44c9-b282-bab02608e538-cilium-config-path\") pod \"600200a3-df91-44c9-b282-bab02608e538\" (UID: \"600200a3-df91-44c9-b282-bab02608e538\") " Apr 12 20:26:45.156045 kubelet[2530]: I0412 20:26:45.155948 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600200a3-df91-44c9-b282-bab02608e538-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "600200a3-df91-44c9-b282-bab02608e538" (UID: "600200a3-df91-44c9-b282-bab02608e538"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 20:26:45.157041 kubelet[2530]: I0412 20:26:45.156930 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600200a3-df91-44c9-b282-bab02608e538-kube-api-access-l4hxt" (OuterVolumeSpecName: "kube-api-access-l4hxt") pod "600200a3-df91-44c9-b282-bab02608e538" (UID: "600200a3-df91-44c9-b282-bab02608e538"). InnerVolumeSpecName "kube-api-access-l4hxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 20:26:45.187933 env[1477]: time="2024-04-12T20:26:45.187856703Z" level=info msg="shim disconnected" id=33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc Apr 12 20:26:45.188159 env[1477]: time="2024-04-12T20:26:45.187940538Z" level=warning msg="cleaning up after shim disconnected" id=33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc namespace=k8s.io Apr 12 20:26:45.188159 env[1477]: time="2024-04-12T20:26:45.187968363Z" level=info msg="cleaning up dead shim" Apr 12 20:26:45.197138 env[1477]: time="2024-04-12T20:26:45.197089055Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4699 runtime=io.containerd.runc.v2\n" Apr 12 20:26:45.197554 env[1477]: time="2024-04-12T20:26:45.197485947Z" level=info msg="TearDown network for sandbox \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" successfully" Apr 12 20:26:45.197554 env[1477]: time="2024-04-12T20:26:45.197519198Z" level=info msg="StopPodSandbox for \"33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc\" returns successfully" Apr 12 20:26:45.252796 kubelet[2530]: I0412 20:26:45.252571 2530 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l4hxt\" (UniqueName: \"kubernetes.io/projected/600200a3-df91-44c9-b282-bab02608e538-kube-api-access-l4hxt\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.252796 kubelet[2530]: I0412 20:26:45.252651 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/600200a3-df91-44c9-b282-bab02608e538-cilium-config-path\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.353369 kubelet[2530]: I0412 20:26:45.353287 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cni-path\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.353686 kubelet[2530]: I0412 20:26:45.353415 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-xtables-lock\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.353686 kubelet[2530]: I0412 20:26:45.353456 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cni-path" (OuterVolumeSpecName: "cni-path") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.353686 kubelet[2530]: I0412 20:26:45.353543 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-clustermesh-secrets\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.353686 kubelet[2530]: I0412 20:26:45.353555 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.353686 kubelet[2530]: I0412 20:26:45.353636 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-cgroup\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.354292 kubelet[2530]: I0412 20:26:45.353712 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hostproc\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.354292 kubelet[2530]: I0412 20:26:45.353814 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-kernel\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.354292 kubelet[2530]: I0412 20:26:45.353802 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.354292 kubelet[2530]: I0412 20:26:45.353865 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hostproc" (OuterVolumeSpecName: "hostproc") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.354292 kubelet[2530]: I0412 20:26:45.353913 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-config-path\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.354861 kubelet[2530]: I0412 20:26:45.353946 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.354861 kubelet[2530]: I0412 20:26:45.353997 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-run\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.354861 kubelet[2530]: I0412 20:26:45.354072 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-net\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.354861 kubelet[2530]: I0412 20:26:45.354146 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.354861 kubelet[2530]: I0412 20:26:45.354205 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-etc-cni-netd\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.355407 kubelet[2530]: I0412 20:26:45.354214 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.355407 kubelet[2530]: I0412 20:26:45.354309 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.355407 kubelet[2530]: I0412 20:26:45.354348 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-lib-modules\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.355407 kubelet[2530]: I0412 20:26:45.354402 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.355407 kubelet[2530]: I0412 20:26:45.354453 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-bpf-maps\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.355935 kubelet[2530]: I0412 20:26:45.354557 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sln65\" (UniqueName: \"kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-kube-api-access-sln65\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.355935 kubelet[2530]: I0412 20:26:45.354559 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:45.355935 kubelet[2530]: I0412 20:26:45.354653 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hubble-tls\") pod \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\" (UID: \"b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8\") " Apr 12 20:26:45.355935 kubelet[2530]: I0412 20:26:45.354796 2530 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-etc-cni-netd\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.355935 kubelet[2530]: I0412 20:26:45.354863 2530 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-bpf-maps\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.355935 kubelet[2530]: I0412 20:26:45.354922 2530 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-lib-modules\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.355935 kubelet[2530]: I0412 20:26:45.354979 2530 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cni-path\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.356637 kubelet[2530]: I0412 20:26:45.355038 2530 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-xtables-lock\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.356637 kubelet[2530]: I0412 20:26:45.355098 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-cgroup\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.356637 kubelet[2530]: I0412 20:26:45.355154 2530 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hostproc\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.356637 kubelet[2530]: I0412 20:26:45.355218 2530 
reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.356637 kubelet[2530]: I0412 20:26:45.355297 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-run\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.356637 kubelet[2530]: I0412 20:26:45.355362 2530 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-host-proc-sys-net\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.359465 kubelet[2530]: I0412 20:26:45.359368 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 20:26:45.360421 kubelet[2530]: I0412 20:26:45.360302 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 20:26:45.361017 kubelet[2530]: I0412 20:26:45.360900 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-kube-api-access-sln65" (OuterVolumeSpecName: "kube-api-access-sln65") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "kube-api-access-sln65". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 20:26:45.361281 kubelet[2530]: I0412 20:26:45.361196 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" (UID: "b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 20:26:45.456001 kubelet[2530]: I0412 20:26:45.455898 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-cilium-config-path\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.456001 kubelet[2530]: I0412 20:26:45.455974 2530 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sln65\" (UniqueName: \"kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-kube-api-access-sln65\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.456001 kubelet[2530]: I0412 20:26:45.456013 2530 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-hubble-tls\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.456551 kubelet[2530]: I0412 20:26:45.456051 2530 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8-clustermesh-secrets\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:45.760547 systemd[1]: Removed slice kubepods-burstable-podb59a9e6f_8fe4_4acc_9eeb_f862fd59b1b8.slice. Apr 12 20:26:45.760611 systemd[1]: kubepods-burstable-podb59a9e6f_8fe4_4acc_9eeb_f862fd59b1b8.slice: Consumed 6.460s CPU time. Apr 12 20:26:45.761308 systemd[1]: Removed slice kubepods-besteffort-pod600200a3_df91_44c9_b282_bab02608e538.slice. Apr 12 20:26:45.894968 kubelet[2530]: E0412 20:26:45.894903 2530 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 20:26:45.900557 kubelet[2530]: I0412 20:26:45.900500 2530 scope.go:117] "RemoveContainer" containerID="75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722" Apr 12 20:26:45.903457 env[1477]: time="2024-04-12T20:26:45.903380210Z" level=info msg="RemoveContainer for \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\"" Apr 12 20:26:45.908184 env[1477]: time="2024-04-12T20:26:45.908106837Z" level=info msg="RemoveContainer for \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\" returns successfully" Apr 12 20:26:45.908726 kubelet[2530]: I0412 20:26:45.908672 2530 scope.go:117] "RemoveContainer" containerID="8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820" Apr 12 20:26:45.911456 env[1477]: time="2024-04-12T20:26:45.911369097Z" level=info msg="RemoveContainer for \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\"" Apr 12 20:26:45.917439 env[1477]: time="2024-04-12T20:26:45.916740342Z" level=info msg="RemoveContainer for \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\" returns successfully" Apr 12 20:26:45.917906 kubelet[2530]: I0412 20:26:45.917859 2530 scope.go:117] "RemoveContainer" containerID="02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538" Apr 12 20:26:45.920620 env[1477]: time="2024-04-12T20:26:45.920489664Z" level=info msg="RemoveContainer for \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\"" Apr 12 20:26:45.924924 env[1477]: time="2024-04-12T20:26:45.924819765Z" level=info msg="RemoveContainer for \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\" returns successfully" Apr 12 20:26:45.925345 kubelet[2530]: I0412 20:26:45.925256 2530 
scope.go:117] "RemoveContainer" containerID="16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b" Apr 12 20:26:45.927992 env[1477]: time="2024-04-12T20:26:45.927908770Z" level=info msg="RemoveContainer for \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\"" Apr 12 20:26:45.932128 env[1477]: time="2024-04-12T20:26:45.932026978Z" level=info msg="RemoveContainer for \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\" returns successfully" Apr 12 20:26:45.932538 kubelet[2530]: I0412 20:26:45.932451 2530 scope.go:117] "RemoveContainer" containerID="2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65" Apr 12 20:26:45.935505 env[1477]: time="2024-04-12T20:26:45.935431628Z" level=info msg="RemoveContainer for \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\"" Apr 12 20:26:45.939900 env[1477]: time="2024-04-12T20:26:45.939834827Z" level=info msg="RemoveContainer for \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\" returns successfully" Apr 12 20:26:45.940283 kubelet[2530]: I0412 20:26:45.940221 2530 scope.go:117] "RemoveContainer" containerID="75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722" Apr 12 20:26:45.940958 env[1477]: time="2024-04-12T20:26:45.940771138Z" level=error msg="ContainerStatus for \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\": not found" Apr 12 20:26:45.941397 kubelet[2530]: E0412 20:26:45.941321 2530 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\": not found" containerID="75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722" Apr 12 20:26:45.941621 kubelet[2530]: I0412 20:26:45.941509 2530 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722"} err="failed to get container status \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\": rpc error: code = NotFound desc = an error occurred when try to find container \"75a9a43b1963ec2e7a1bbcdef289d04d0ef042fd029c88c44d55319c60d5f722\": not found" Apr 12 20:26:45.941621 kubelet[2530]: I0412 20:26:45.941548 2530 scope.go:117] "RemoveContainer" containerID="8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820" Apr 12 20:26:45.942262 env[1477]: time="2024-04-12T20:26:45.942072117Z" level=error msg="ContainerStatus for \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\": not found" Apr 12 20:26:45.942581 kubelet[2530]: E0412 20:26:45.942506 2530 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\": not found" containerID="8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820" Apr 12 20:26:45.942815 kubelet[2530]: I0412 20:26:45.942586 2530 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820"} err="failed to get container status \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d2a43bb3da0efa339ae445ae06c27691201308fa2ddf959d257ca288b1b7820\": not found" Apr 12 20:26:45.942815 kubelet[2530]: I0412 20:26:45.942630 2530 scope.go:117] "RemoveContainer" containerID="02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538" Apr 12 20:26:45.943226 env[1477]: time="2024-04-12T20:26:45.943096484Z" level=error msg="ContainerStatus for \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\": not found" Apr 12 20:26:45.943602 kubelet[2530]: E0412 20:26:45.943557 2530 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\": not found" containerID="02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538" Apr 12 20:26:45.943831 kubelet[2530]: I0412 20:26:45.943634 2530 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538"} err="failed to get container status \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\": rpc error: code = NotFound desc = an error occurred when try to find container \"02a0fcfa1276c541dfa00762264f332c04956c04727e2eea9dd5023eee910538\": not found" Apr 12 20:26:45.943831 kubelet[2530]: I0412 20:26:45.943665 2530 scope.go:117] "RemoveContainer" containerID="16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b" Apr 12 20:26:45.944264 env[1477]: time="2024-04-12T20:26:45.944086753Z" level=error msg="ContainerStatus for \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\": not found" Apr 12 20:26:45.944626 kubelet[2530]: E0412 20:26:45.944534 2530 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\": not found" containerID="16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b" Apr 12 20:26:45.944849 kubelet[2530]: I0412 20:26:45.944636 2530 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b"} err="failed to get container status \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\": rpc error: code = NotFound desc = an error occurred when try to find container \"16736bc9686d854f62a96085b857b5812b4a1989c0ffa3fb5270f447f7ed555b\": not found" Apr 12 20:26:45.944849 kubelet[2530]: I0412 20:26:45.944683 2530 scope.go:117] "RemoveContainer" containerID="2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65" Apr 12 20:26:45.945526 env[1477]: time="2024-04-12T20:26:45.945325001Z" level=error msg="ContainerStatus for \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\": not found" Apr 12 20:26:45.945958 kubelet[2530]: E0412 20:26:45.945906 2530 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\": not found" containerID="2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65" Apr 12 20:26:45.946297 kubelet[2530]: I0412 20:26:45.946026 2530 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65"} err="failed to get container status \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ea4b18302de42d4626e025763d5849fb13514d0c7bdcf0b2482598bcf7eaa65\": not found" Apr 12 20:26:45.946297 kubelet[2530]: I0412 20:26:45.946079 2530 scope.go:117] "RemoveContainer" containerID="f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260" Apr 12 20:26:45.949012 env[1477]: time="2024-04-12T20:26:45.948923561Z" level=info msg="RemoveContainer for \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\"" Apr 12 20:26:45.953338 env[1477]: time="2024-04-12T20:26:45.953208669Z" level=info msg="RemoveContainer for \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\" returns successfully" Apr 12 20:26:45.953683 kubelet[2530]: I0412 20:26:45.953602 2530 scope.go:117] "RemoveContainer" containerID="f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260" Apr 12 20:26:45.954283 env[1477]: time="2024-04-12T20:26:45.954101766Z" level=error msg="ContainerStatus for \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\": not found" Apr 12 20:26:45.954653 kubelet[2530]: E0412 20:26:45.954560 2530 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\": not found" containerID="f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260" Apr 12 20:26:45.954653 kubelet[2530]: I0412 20:26:45.954644 2530 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260"} err="failed to get container status \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\": rpc error: code = NotFound desc = an error occurred when try to find container \"f234c0dd7f6d81c28d170e18f96525a06968affcec93a3f029119f13ae854260\": not found" Apr 12 20:26:45.993273 systemd[1]: var-lib-kubelet-pods-600200a3\x2ddf91\x2d44c9\x2db282\x2dbab02608e538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4hxt.mount: Deactivated successfully. Apr 12 20:26:45.993559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc-rootfs.mount: Deactivated successfully. 
Apr 12 20:26:45.993752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33be1bcd9138439d4284c1a37813a0360d3e76ead8fcf1e793795ac8ff578ddc-shm.mount: Deactivated successfully. Apr 12 20:26:45.993949 systemd[1]: var-lib-kubelet-pods-b59a9e6f\x2d8fe4\x2d4acc\x2d9eeb\x2df862fd59b1b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsln65.mount: Deactivated successfully. Apr 12 20:26:45.994151 systemd[1]: var-lib-kubelet-pods-b59a9e6f\x2d8fe4\x2d4acc\x2d9eeb\x2df862fd59b1b8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 20:26:45.994345 systemd[1]: var-lib-kubelet-pods-b59a9e6f\x2d8fe4\x2d4acc\x2d9eeb\x2df862fd59b1b8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 20:26:46.914517 sshd[4541]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:46.922720 systemd[1]: sshd@20-139.178.89.23:22-147.75.109.163:47136.service: Deactivated successfully. Apr 12 20:26:46.924484 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 20:26:46.926293 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit. Apr 12 20:26:46.929258 systemd[1]: Started sshd@21-139.178.89.23:22-147.75.109.163:47150.service. Apr 12 20:26:46.931918 systemd-logind[1465]: Removed session 23. Apr 12 20:26:46.960714 sshd[4716]: Accepted publickey for core from 147.75.109.163 port 47150 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:46.961362 sshd[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:46.963736 systemd-logind[1465]: New session 24 of user core. Apr 12 20:26:46.964176 systemd[1]: Started session-24.scope. Apr 12 20:26:47.255784 sshd[4716]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:47.257721 systemd[1]: sshd@21-139.178.89.23:22-147.75.109.163:47150.service: Deactivated successfully. Apr 12 20:26:47.258114 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 20:26:47.258461 systemd-logind[1465]: Session 24 logged out. Waiting for processes to exit. Apr 12 20:26:47.259380 systemd[1]: Started sshd@22-139.178.89.23:22-147.75.109.163:41198.service. Apr 12 20:26:47.259870 systemd-logind[1465]: Removed session 24. 
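The var-lib-kubelet-pods-…-volumes-….mount unit names above are systemd path escapes of /var/lib/kubelet/pods/<uid>/volumes/…: "/" becomes "-", while "-" and "~" are escaped as \x2d and \x7e. The sketch below is a simplified approximation of that escaping, just enough to reproduce the unit names seen here; the real systemd-escape handles additional edge cases (leading dots, empty or root paths).

// Simplified approximation of systemd path escaping, enough to reproduce the
// kube-api-access-l4hxt mount unit name that systemd reports above.
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path --suffix=mount <path>`:
// "/" maps to "-", [A-Za-z0-9_.] pass through, everything else becomes \xNN.
func escapePath(path string) string {
	trimmed := strings.Trim(path, "/")
	var out strings.Builder
	for i := 0; i < len(trimmed); i++ {
		b := trimmed[i]
		switch {
		case b == '/':
			out.WriteByte('-')
		case b >= 'a' && b <= 'z', b >= 'A' && b <= 'Z', b >= '0' && b <= '9', b == '_', b == '.':
			out.WriteByte(b)
		default:
			fmt.Fprintf(&out, `\x%02x`, b)
		}
	}
	return out.String() + ".mount"
}

func main() {
	p := "/var/lib/kubelet/pods/600200a3-df91-44c9-b282-bab02608e538/volumes/kubernetes.io~projected/kube-api-access-l4hxt"
	// Prints the same unit name systemd deactivates above:
	// var-lib-kubelet-pods-600200a3\x2ddf91\x2d44c9\x2db282\x2dbab02608e538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4hxt.mount
	fmt.Println(escapePath(p))
}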
Apr 12 20:26:47.262010 kubelet[2530]: I0412 20:26:47.261993 2530 topology_manager.go:215] "Topology Admit Handler" podUID="3ab18db6-8b99-4160-a992-0000d5dd0142" podNamespace="kube-system" podName="cilium-6rtgd" Apr 12 20:26:47.262223 kubelet[2530]: E0412 20:26:47.262028 2530 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" containerName="mount-cgroup" Apr 12 20:26:47.262223 kubelet[2530]: E0412 20:26:47.262035 2530 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" containerName="cilium-agent" Apr 12 20:26:47.262223 kubelet[2530]: E0412 20:26:47.262040 2530 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" containerName="apply-sysctl-overwrites" Apr 12 20:26:47.262223 kubelet[2530]: E0412 20:26:47.262044 2530 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="600200a3-df91-44c9-b282-bab02608e538" containerName="cilium-operator" Apr 12 20:26:47.262223 kubelet[2530]: E0412 20:26:47.262048 2530 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" containerName="mount-bpf-fs" Apr 12 20:26:47.262223 kubelet[2530]: E0412 20:26:47.262052 2530 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" containerName="clean-cilium-state" Apr 12 20:26:47.262223 kubelet[2530]: I0412 20:26:47.262066 2530 memory_manager.go:346] "RemoveStaleState removing state" podUID="600200a3-df91-44c9-b282-bab02608e538" containerName="cilium-operator" Apr 12 20:26:47.262223 kubelet[2530]: I0412 20:26:47.262070 2530 memory_manager.go:346] "RemoveStaleState removing state" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" containerName="cilium-agent" Apr 12 20:26:47.267367 systemd[1]: Created slice kubepods-burstable-pod3ab18db6_8b99_4160_a992_0000d5dd0142.slice. Apr 12 20:26:47.296627 sshd[4740]: Accepted publickey for core from 147.75.109.163 port 41198 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:47.297459 sshd[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:47.299901 systemd-logind[1465]: New session 25 of user core. Apr 12 20:26:47.300450 systemd[1]: Started session-25.scope. 
Apr 12 20:26:47.367764 kubelet[2530]: I0412 20:26:47.367706 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-etc-cni-netd\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.367764 kubelet[2530]: I0412 20:26:47.367769 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-cgroup\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.367938 kubelet[2530]: I0412 20:26:47.367796 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-config-path\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.367938 kubelet[2530]: I0412 20:26:47.367822 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-hubble-tls\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.367938 kubelet[2530]: I0412 20:26:47.367899 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-xtables-lock\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.367938 kubelet[2530]: I0412 20:26:47.367938 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-net\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368076 kubelet[2530]: I0412 20:26:47.367960 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5677\" (UniqueName: \"kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-kube-api-access-m5677\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368076 kubelet[2530]: I0412 20:26:47.368012 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-bpf-maps\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368076 kubelet[2530]: I0412 20:26:47.368049 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-clustermesh-secrets\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368076 kubelet[2530]: I0412 20:26:47.368073 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-run\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368204 kubelet[2530]: I0412 20:26:47.368093 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-lib-modules\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368204 kubelet[2530]: I0412 20:26:47.368142 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-kernel\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368290 kubelet[2530]: I0412 20:26:47.368220 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-ipsec-secrets\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368290 kubelet[2530]: I0412 20:26:47.368269 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cni-path\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.368360 kubelet[2530]: I0412 20:26:47.368292 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-hostproc\") pod \"cilium-6rtgd\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " pod="kube-system/cilium-6rtgd" Apr 12 20:26:47.406291 sshd[4740]: pam_unix(sshd:session): session closed for user core Apr 12 20:26:47.408673 systemd[1]: sshd@22-139.178.89.23:22-147.75.109.163:41198.service: Deactivated successfully. Apr 12 20:26:47.409140 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 20:26:47.409602 systemd-logind[1465]: Session 25 logged out. Waiting for processes to exit. Apr 12 20:26:47.410579 systemd[1]: Started sshd@23-139.178.89.23:22-147.75.109.163:41210.service. Apr 12 20:26:47.411125 systemd-logind[1465]: Removed session 25. Apr 12 20:26:47.415004 kubelet[2530]: E0412 20:26:47.414985 2530 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-m5677 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-6rtgd" podUID="3ab18db6-8b99-4160-a992-0000d5dd0142" Apr 12 20:26:47.447700 sshd[4765]: Accepted publickey for core from 147.75.109.163 port 41210 ssh2: RSA SHA256:4q+sCjRc5WxlfFnFyvQvQr+/DeMoTFdRHOI9xHx9URg Apr 12 20:26:47.448683 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 20:26:47.451801 systemd-logind[1465]: New session 26 of user core. Apr 12 20:26:47.452621 systemd[1]: Started session-26.scope. 
Apr 12 20:26:47.762905 kubelet[2530]: I0412 20:26:47.762855 2530 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="600200a3-df91-44c9-b282-bab02608e538" path="/var/lib/kubelet/pods/600200a3-df91-44c9-b282-bab02608e538/volumes" Apr 12 20:26:47.764181 kubelet[2530]: I0412 20:26:47.764103 2530 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8" path="/var/lib/kubelet/pods/b59a9e6f-8fe4-4acc-9eeb-f862fd59b1b8/volumes" Apr 12 20:26:48.074161 kubelet[2530]: I0412 20:26:48.074041 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-run\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.074161 kubelet[2530]: I0412 20:26:48.074163 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-clustermesh-secrets\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.074647 kubelet[2530]: I0412 20:26:48.074187 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.074647 kubelet[2530]: I0412 20:26:48.074229 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-etc-cni-netd\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.074647 kubelet[2530]: I0412 20:26:48.074322 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.074647 kubelet[2530]: I0412 20:26:48.074407 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cni-path\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.074647 kubelet[2530]: I0412 20:26:48.074473 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-lib-modules\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.075176 kubelet[2530]: I0412 20:26:48.074502 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cni-path" (OuterVolumeSpecName: "cni-path") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.075176 kubelet[2530]: I0412 20:26:48.074547 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-ipsec-secrets\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.075176 kubelet[2530]: I0412 20:26:48.074561 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.075176 kubelet[2530]: I0412 20:26:48.074615 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-bpf-maps\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.075176 kubelet[2530]: I0412 20:26:48.074673 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-kernel\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.075801 kubelet[2530]: I0412 20:26:48.074682 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.075801 kubelet[2530]: I0412 20:26:48.074729 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-xtables-lock\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.075801 kubelet[2530]: I0412 20:26:48.074756 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.075801 kubelet[2530]: I0412 20:26:48.074793 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-hubble-tls\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.075801 kubelet[2530]: I0412 20:26:48.074824 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.076321 kubelet[2530]: I0412 20:26:48.074858 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5677\" (UniqueName: \"kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-kube-api-access-m5677\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.076321 kubelet[2530]: I0412 20:26:48.074917 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-cgroup\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.076321 kubelet[2530]: I0412 20:26:48.075009 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-config-path\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.076321 kubelet[2530]: I0412 20:26:48.075003 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.076321 kubelet[2530]: I0412 20:26:48.075106 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-net\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.076321 kubelet[2530]: I0412 20:26:48.075204 2530 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-hostproc\") pod \"3ab18db6-8b99-4160-a992-0000d5dd0142\" (UID: \"3ab18db6-8b99-4160-a992-0000d5dd0142\") " Apr 12 20:26:48.076963 kubelet[2530]: I0412 20:26:48.075192 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.076963 kubelet[2530]: I0412 20:26:48.075308 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-hostproc" (OuterVolumeSpecName: "hostproc") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 20:26:48.076963 kubelet[2530]: I0412 20:26:48.075418 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-run\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.076963 kubelet[2530]: I0412 20:26:48.075492 2530 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-etc-cni-netd\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.076963 kubelet[2530]: I0412 20:26:48.075554 2530 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-lib-modules\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.076963 kubelet[2530]: I0412 20:26:48.075616 2530 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cni-path\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.076963 kubelet[2530]: I0412 20:26:48.075674 2530 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-bpf-maps\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.077778 kubelet[2530]: I0412 20:26:48.075737 2530 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.077778 kubelet[2530]: I0412 20:26:48.075793 2530 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-xtables-lock\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.077778 kubelet[2530]: I0412 20:26:48.075848 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-cgroup\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.077778 kubelet[2530]: I0412 20:26:48.075911 2530 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-host-proc-sys-net\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.079740 kubelet[2530]: I0412 20:26:48.079682 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 20:26:48.080030 kubelet[2530]: I0412 20:26:48.080002 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 20:26:48.080074 kubelet[2530]: I0412 20:26:48.080053 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 20:26:48.080103 kubelet[2530]: I0412 20:26:48.080076 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 20:26:48.080220 kubelet[2530]: I0412 20:26:48.080185 2530 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-kube-api-access-m5677" (OuterVolumeSpecName: "kube-api-access-m5677") pod "3ab18db6-8b99-4160-a992-0000d5dd0142" (UID: "3ab18db6-8b99-4160-a992-0000d5dd0142"). InnerVolumeSpecName "kube-api-access-m5677". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 20:26:48.080838 systemd[1]: var-lib-kubelet-pods-3ab18db6\x2d8b99\x2d4160\x2da992\x2d0000d5dd0142-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm5677.mount: Deactivated successfully. Apr 12 20:26:48.080893 systemd[1]: var-lib-kubelet-pods-3ab18db6\x2d8b99\x2d4160\x2da992\x2d0000d5dd0142-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 20:26:48.080929 systemd[1]: var-lib-kubelet-pods-3ab18db6\x2d8b99\x2d4160\x2da992\x2d0000d5dd0142-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 20:26:48.080961 systemd[1]: var-lib-kubelet-pods-3ab18db6\x2d8b99\x2d4160\x2da992\x2d0000d5dd0142-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Apr 12 20:26:48.177039 kubelet[2530]: I0412 20:26:48.176933 2530 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-clustermesh-secrets\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.177039 kubelet[2530]: I0412 20:26:48.177007 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-ipsec-secrets\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.177039 kubelet[2530]: I0412 20:26:48.177045 2530 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-hubble-tls\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.177579 kubelet[2530]: I0412 20:26:48.177083 2530 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ab18db6-8b99-4160-a992-0000d5dd0142-cilium-config-path\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.177579 kubelet[2530]: I0412 20:26:48.177119 2530 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m5677\" (UniqueName: \"kubernetes.io/projected/3ab18db6-8b99-4160-a992-0000d5dd0142-kube-api-access-m5677\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.177579 kubelet[2530]: I0412 20:26:48.177152 2530 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ab18db6-8b99-4160-a992-0000d5dd0142-hostproc\") on node \"ci-3510.3.3-a-3fbc403199\" DevicePath \"\"" Apr 12 20:26:48.924758 systemd[1]: Removed slice kubepods-burstable-pod3ab18db6_8b99_4160_a992_0000d5dd0142.slice. Apr 12 20:26:48.946371 kubelet[2530]: I0412 20:26:48.946352 2530 topology_manager.go:215] "Topology Admit Handler" podUID="e545ab75-7fcd-4129-b4e5-ed8141fd7eb9" podNamespace="kube-system" podName="cilium-8j9lj" Apr 12 20:26:48.949113 systemd[1]: Created slice kubepods-burstable-pode545ab75_7fcd_4129_b4e5_ed8141fd7eb9.slice. Apr 12 20:26:48.980511 update_engine[1467]: I0412 20:26:48.980458 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 12 20:26:48.980712 update_engine[1467]: I0412 20:26:48.980585 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 12 20:26:48.980712 update_engine[1467]: E0412 20:26:48.980638 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 12 20:26:48.980712 update_engine[1467]: I0412 20:26:48.980675 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 12 20:26:48.980712 update_engine[1467]: I0412 20:26:48.980679 1467 omaha_request_action.cc:621] Omaha request response: Apr 12 20:26:48.980786 update_engine[1467]: E0412 20:26:48.980718 1467 omaha_request_action.cc:640] Omaha request network transfer failed. Apr 12 20:26:48.980786 update_engine[1467]: I0412 20:26:48.980724 1467 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 12 20:26:48.980786 update_engine[1467]: I0412 20:26:48.980726 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 12 20:26:48.980786 update_engine[1467]: I0412 20:26:48.980728 1467 update_attempter.cc:306] Processing Done. 
Apr 12 20:26:48.980786 update_engine[1467]: E0412 20:26:48.980736 1467 update_attempter.cc:619] Update failed. Apr 12 20:26:48.980786 update_engine[1467]: I0412 20:26:48.980737 1467 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 12 20:26:48.980786 update_engine[1467]: I0412 20:26:48.980739 1467 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 12 20:26:48.980786 update_engine[1467]: I0412 20:26:48.980741 1467 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 12 20:26:48.980786 update_engine[1467]: I0412 20:26:48.980782 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980793 1467 omaha_request_action.cc:270] Posting an Omaha request to disabled Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980795 1467 omaha_request_action.cc:271] Request: Apr 12 20:26:48.980935 update_engine[1467]: Apr 12 20:26:48.980935 update_engine[1467]: Apr 12 20:26:48.980935 update_engine[1467]: Apr 12 20:26:48.980935 update_engine[1467]: Apr 12 20:26:48.980935 update_engine[1467]: Apr 12 20:26:48.980935 update_engine[1467]: Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980798 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980859 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 12 20:26:48.980935 update_engine[1467]: E0412 20:26:48.980890 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980916 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980918 1467 omaha_request_action.cc:621] Omaha request response: Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980921 1467 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980922 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980924 1467 update_attempter.cc:306] Processing Done. Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980926 1467 update_attempter.cc:310] Error event sent. 
Apr 12 20:26:48.980935 update_engine[1467]: I0412 20:26:48.980931 1467 update_check_scheduler.cc:74] Next update check in 45m17s Apr 12 20:26:48.981225 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 12 20:26:48.981225 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 12 20:26:49.084281 kubelet[2530]: I0412 20:26:49.084156 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-cilium-run\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.084548 kubelet[2530]: I0412 20:26:49.084372 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-hostproc\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.084548 kubelet[2530]: I0412 20:26:49.084476 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-host-proc-sys-net\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.084824 kubelet[2530]: I0412 20:26:49.084589 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-host-proc-sys-kernel\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.084824 kubelet[2530]: I0412 20:26:49.084731 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dnk\" (UniqueName: \"kubernetes.io/projected/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-kube-api-access-z8dnk\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085037 kubelet[2530]: I0412 20:26:49.084830 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-etc-cni-netd\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085037 kubelet[2530]: I0412 20:26:49.084893 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-lib-modules\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085280 kubelet[2530]: I0412 20:26:49.085076 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-cilium-cgroup\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085280 kubelet[2530]: I0412 20:26:49.085167 2530 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-clustermesh-secrets\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085280 kubelet[2530]: I0412 20:26:49.085255 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-cilium-ipsec-secrets\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085590 kubelet[2530]: I0412 20:26:49.085389 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-cni-path\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085590 kubelet[2530]: I0412 20:26:49.085464 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-xtables-lock\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085590 kubelet[2530]: I0412 20:26:49.085529 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-bpf-maps\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085895 kubelet[2530]: I0412 20:26:49.085597 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-cilium-config-path\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.085895 kubelet[2530]: I0412 20:26:49.085693 2530 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e545ab75-7fcd-4129-b4e5-ed8141fd7eb9-hubble-tls\") pod \"cilium-8j9lj\" (UID: \"e545ab75-7fcd-4129-b4e5-ed8141fd7eb9\") " pod="kube-system/cilium-8j9lj" Apr 12 20:26:49.251534 env[1477]: time="2024-04-12T20:26:49.251403395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8j9lj,Uid:e545ab75-7fcd-4129-b4e5-ed8141fd7eb9,Namespace:kube-system,Attempt:0,}" Apr 12 20:26:49.264147 env[1477]: time="2024-04-12T20:26:49.264047678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 20:26:49.264147 env[1477]: time="2024-04-12T20:26:49.264119726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 20:26:49.264419 env[1477]: time="2024-04-12T20:26:49.264145150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 20:26:49.264580 env[1477]: time="2024-04-12T20:26:49.264487942Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d pid=4807 runtime=io.containerd.runc.v2 Apr 12 20:26:49.295485 systemd[1]: Started cri-containerd-2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d.scope. Apr 12 20:26:49.342997 env[1477]: time="2024-04-12T20:26:49.342891557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8j9lj,Uid:e545ab75-7fcd-4129-b4e5-ed8141fd7eb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\"" Apr 12 20:26:49.349400 env[1477]: time="2024-04-12T20:26:49.349319102Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 20:26:49.364467 env[1477]: time="2024-04-12T20:26:49.364378534Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f\"" Apr 12 20:26:49.365244 env[1477]: time="2024-04-12T20:26:49.365151958Z" level=info msg="StartContainer for \"0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f\"" Apr 12 20:26:49.400221 systemd[1]: Started cri-containerd-0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f.scope. Apr 12 20:26:49.442345 env[1477]: time="2024-04-12T20:26:49.442304865Z" level=info msg="StartContainer for \"0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f\" returns successfully" Apr 12 20:26:49.451935 systemd[1]: cri-containerd-0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f.scope: Deactivated successfully. 
Apr 12 20:26:49.476096 env[1477]: time="2024-04-12T20:26:49.476048379Z" level=info msg="shim disconnected" id=0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f Apr 12 20:26:49.476289 env[1477]: time="2024-04-12T20:26:49.476097407Z" level=warning msg="cleaning up after shim disconnected" id=0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f namespace=k8s.io Apr 12 20:26:49.476289 env[1477]: time="2024-04-12T20:26:49.476111984Z" level=info msg="cleaning up dead shim" Apr 12 20:26:49.483210 env[1477]: time="2024-04-12T20:26:49.483152929Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4889 runtime=io.containerd.runc.v2\n" Apr 12 20:26:49.762464 kubelet[2530]: I0412 20:26:49.762362 2530 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3ab18db6-8b99-4160-a992-0000d5dd0142" path="/var/lib/kubelet/pods/3ab18db6-8b99-4160-a992-0000d5dd0142/volumes" Apr 12 20:26:49.932280 env[1477]: time="2024-04-12T20:26:49.932253774Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 20:26:49.936417 env[1477]: time="2024-04-12T20:26:49.936393098Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c152d703963928bb1246aaebf8209438ce0943ef978587408f19e57cbdb82745\"" Apr 12 20:26:49.936678 env[1477]: time="2024-04-12T20:26:49.936664362Z" level=info msg="StartContainer for \"c152d703963928bb1246aaebf8209438ce0943ef978587408f19e57cbdb82745\"" Apr 12 20:26:49.943739 systemd[1]: Started cri-containerd-c152d703963928bb1246aaebf8209438ce0943ef978587408f19e57cbdb82745.scope. Apr 12 20:26:49.955875 env[1477]: time="2024-04-12T20:26:49.955851624Z" level=info msg="StartContainer for \"c152d703963928bb1246aaebf8209438ce0943ef978587408f19e57cbdb82745\" returns successfully" Apr 12 20:26:49.959368 systemd[1]: cri-containerd-c152d703963928bb1246aaebf8209438ce0943ef978587408f19e57cbdb82745.scope: Deactivated successfully. 
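The pattern above repeats for each Cilium init container: StartContainer returns, the short-lived process exits, systemd reports the cri-containerd scope as deactivated, and containerd logs "shim disconnected" while it reaps the per-container shim. The same exit can be observed with the containerd Go client; the sketch below assumes the standard socket and the k8s.io CRI namespace, with the mount-cgroup container ID taken from the journal (by the time anyone could run this, that container would already be gone, so it is illustrative only):

package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Assumed socket path; the CRI plugin stores its containers in the k8s.io namespace.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        panic(err)
    }
    defer client.Close()

    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Container ID of the mount-cgroup init container from the journal above.
    cont, err := client.LoadContainer(ctx, "0179bf78f261967f96e66d43519233a363dd07258c9bda3091c3e65e25b1ca6f")
    if err != nil {
        panic(err)
    }
    task, err := cont.Task(ctx, nil)
    if err != nil {
        panic(err)
    }

    // Wait mirrors what the CRI plugin does: once the init process exits,
    // the shim has nothing left to serve and is torn down ("shim disconnected").
    exitCh, err := task.Wait(ctx)
    if err != nil {
        panic(err)
    }
    st := <-exitCh
    code, _, err := st.Result()
    if err != nil {
        panic(err)
    }
    fmt.Println("task exited with code", code)
}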
Apr 12 20:26:49.992786 env[1477]: time="2024-04-12T20:26:49.992752048Z" level=info msg="shim disconnected" id=c152d703963928bb1246aaebf8209438ce0943ef978587408f19e57cbdb82745 Apr 12 20:26:49.992786 env[1477]: time="2024-04-12T20:26:49.992783913Z" level=warning msg="cleaning up after shim disconnected" id=c152d703963928bb1246aaebf8209438ce0943ef978587408f19e57cbdb82745 namespace=k8s.io Apr 12 20:26:49.992929 env[1477]: time="2024-04-12T20:26:49.992791873Z" level=info msg="cleaning up dead shim" Apr 12 20:26:49.997529 env[1477]: time="2024-04-12T20:26:49.997501449Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4950 runtime=io.containerd.runc.v2\n" Apr 12 20:26:50.896200 kubelet[2530]: E0412 20:26:50.896103 2530 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 20:26:50.938105 env[1477]: time="2024-04-12T20:26:50.937971212Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 20:26:50.949845 env[1477]: time="2024-04-12T20:26:50.949798965Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2\"" Apr 12 20:26:50.950138 env[1477]: time="2024-04-12T20:26:50.950123416Z" level=info msg="StartContainer for \"6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2\"" Apr 12 20:26:50.959033 systemd[1]: Started cri-containerd-6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2.scope. Apr 12 20:26:50.971411 env[1477]: time="2024-04-12T20:26:50.971383389Z" level=info msg="StartContainer for \"6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2\" returns successfully" Apr 12 20:26:50.972926 systemd[1]: cri-containerd-6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2.scope: Deactivated successfully. Apr 12 20:26:50.997682 env[1477]: time="2024-04-12T20:26:50.997576132Z" level=info msg="shim disconnected" id=6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2 Apr 12 20:26:50.998084 env[1477]: time="2024-04-12T20:26:50.997685580Z" level=warning msg="cleaning up after shim disconnected" id=6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2 namespace=k8s.io Apr 12 20:26:50.998084 env[1477]: time="2024-04-12T20:26:50.997720572Z" level=info msg="cleaning up dead shim" Apr 12 20:26:51.014650 env[1477]: time="2024-04-12T20:26:51.014571644Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5005 runtime=io.containerd.runc.v2\n" Apr 12 20:26:51.197731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fa4b0077b04547667b15cb9d2afac87780ef5ebe4bded04db04d080fbbf1ef2-rootfs.mount: Deactivated successfully. 
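The mount-bpf-fs step that just ran and exited is the Cilium init container that makes sure the BPF filesystem is available at /sys/fs/bpf. Functionally it amounts to roughly the following sketch (an illustration, not Cilium's actual code; the real init step also checks whether bpffs is already mounted, and the mount requires CAP_SYS_ADMIN, which is why Cilium runs it privileged):

package main

import (
    "fmt"
    "os"

    "golang.org/x/sys/unix"
)

func main() {
    const target = "/sys/fs/bpf"

    // Ensure the mount point exists, then mount the bpf filesystem on it.
    if err := os.MkdirAll(target, 0o755); err != nil {
        panic(err)
    }
    if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
        // Typically fails without CAP_SYS_ADMIN or on kernels without bpffs.
        fmt.Fprintln(os.Stderr, "mount bpffs:", err)
        os.Exit(1)
    }
    fmt.Println("bpffs mounted at", target)
}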
Apr 12 20:26:51.941391 env[1477]: time="2024-04-12T20:26:51.941330267Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 20:26:51.945943 env[1477]: time="2024-04-12T20:26:51.945919945Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f\"" Apr 12 20:26:51.946252 env[1477]: time="2024-04-12T20:26:51.946226752Z" level=info msg="StartContainer for \"3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f\"" Apr 12 20:26:51.946704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332234568.mount: Deactivated successfully. Apr 12 20:26:51.954396 systemd[1]: Started cri-containerd-3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f.scope. Apr 12 20:26:51.965941 env[1477]: time="2024-04-12T20:26:51.965915698Z" level=info msg="StartContainer for \"3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f\" returns successfully" Apr 12 20:26:51.966095 systemd[1]: cri-containerd-3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f.scope: Deactivated successfully. Apr 12 20:26:51.975361 env[1477]: time="2024-04-12T20:26:51.975335496Z" level=info msg="shim disconnected" id=3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f Apr 12 20:26:51.975457 env[1477]: time="2024-04-12T20:26:51.975362542Z" level=warning msg="cleaning up after shim disconnected" id=3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f namespace=k8s.io Apr 12 20:26:51.975457 env[1477]: time="2024-04-12T20:26:51.975368828Z" level=info msg="cleaning up dead shim" Apr 12 20:26:51.978845 env[1477]: time="2024-04-12T20:26:51.978826916Z" level=warning msg="cleanup warnings time=\"2024-04-12T20:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5059 runtime=io.containerd.runc.v2\n" Apr 12 20:26:52.198296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ba70ed60fc0d22085c34418b1b34f3d9d8b64869df2f88d6aba8541bff6ec0f-rootfs.mount: Deactivated successfully. Apr 12 20:26:52.950507 env[1477]: time="2024-04-12T20:26:52.950483945Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 20:26:52.956506 env[1477]: time="2024-04-12T20:26:52.956400256Z" level=info msg="CreateContainer within sandbox \"2d0a805cdada972f2a06e551053ec3311a314fbc3820ec88d02990e4e333f38d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d117e8d5d5500eb888e4cc149ea8aff54da0b3ebe8f1fb6420ddf981f10a6a2\"" Apr 12 20:26:52.956861 env[1477]: time="2024-04-12T20:26:52.956814234Z" level=info msg="StartContainer for \"9d117e8d5d5500eb888e4cc149ea8aff54da0b3ebe8f1fb6420ddf981f10a6a2\"" Apr 12 20:26:52.967093 systemd[1]: Started cri-containerd-9d117e8d5d5500eb888e4cc149ea8aff54da0b3ebe8f1fb6420ddf981f10a6a2.scope. 
Apr 12 20:26:52.980300 env[1477]: time="2024-04-12T20:26:52.980275798Z" level=info msg="StartContainer for \"9d117e8d5d5500eb888e4cc149ea8aff54da0b3ebe8f1fb6420ddf981f10a6a2\" returns successfully" Apr 12 20:26:53.049696 kubelet[2530]: I0412 20:26:53.049682 2530 setters.go:552] "Node became not ready" node="ci-3510.3.3-a-3fbc403199" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-12T20:26:53Z","lastTransitionTime":"2024-04-12T20:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 12 20:26:53.136311 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 12 20:26:53.976328 kubelet[2530]: I0412 20:26:53.976307 2530 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8j9lj" podStartSLOduration=5.976284729 podCreationTimestamp="2024-04-12 20:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 20:26:53.975931581 +0000 UTC m=+388.313221446" watchObservedRunningTime="2024-04-12 20:26:53.976284729 +0000 UTC m=+388.313574592" Apr 12 20:26:55.909083 systemd-networkd[1307]: lxc_health: Link UP Apr 12 20:26:55.932915 systemd-networkd[1307]: lxc_health: Gained carrier Apr 12 20:26:55.933237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 20:26:57.510368 systemd-networkd[1307]: lxc_health: Gained IPv6LL Apr 12 20:27:02.099909 sshd[4765]: pam_unix(sshd:session): session closed for user core Apr 12 20:27:02.101445 systemd[1]: sshd@23-139.178.89.23:22-147.75.109.163:41210.service: Deactivated successfully. Apr 12 20:27:02.101928 systemd[1]: session-26.scope: Deactivated successfully. Apr 12 20:27:02.102286 systemd-logind[1465]: Session 26 logged out. Waiting for processes to exit. Apr 12 20:27:02.102744 systemd-logind[1465]: Removed session 26.
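Reading the pod_startup_latency_tracker entry above: podStartSLOduration=5.976284729 is the gap between podCreationTimestamp (2024-04-12 20:26:48 exactly) and the observed running time (2024-04-12 20:26:53.976284729), i.e. 53.976284729 − 48 = 5.976284729 s; the zeroed firstStartedPulling/lastFinishedPulling timestamps suggest the images were already present on the node, so no pull time is excluded from that window.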