Feb 9 19:14:39.943780 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 19:14:39.945919 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 19:14:39.945947 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:14:39.945963 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 19:14:39.945976 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:14:39.945990 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 19:14:39.946006 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 19:14:39.946020 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 19:14:39.946034 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 19:14:39.946047 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 19:14:39.946066 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 19:14:39.946080 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 19:14:39.946093 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 19:14:39.946107 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 19:14:39.946123 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 19:14:39.946143 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 19:14:39.946157 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 19:14:39.946172 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 19:14:39.946186 kernel: printk: bootconsole [uart0] enabled
Feb 9 19:14:39.946201 kernel: NUMA: Failed to initialise from firmware
Feb 9 19:14:39.946215 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:14:39.946230 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 19:14:39.946244 kernel: Zone ranges:
Feb 9 19:14:39.946259 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 19:14:39.946273 kernel: DMA32 empty
Feb 9 19:14:39.946287 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 19:14:39.946305 kernel: Movable zone start for each node
Feb 9 19:14:39.946320 kernel: Early memory node ranges
Feb 9 19:14:39.946334 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 19:14:39.946349 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 19:14:39.946363 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 19:14:39.946378 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 19:14:39.946393 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 19:14:39.946407 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 19:14:39.946422 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:14:39.946436 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 19:14:39.946451 kernel: psci: probing for conduit method from ACPI.
Feb 9 19:14:39.946465 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 19:14:39.946484 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 19:14:39.946498 kernel: psci: Trusted OS migration not required
Feb 9 19:14:39.946520 kernel: psci: SMC Calling Convention v1.1
Feb 9 19:14:39.946536 kernel: ACPI: SRAT not present
Feb 9 19:14:39.946551 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 19:14:39.946571 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 19:14:39.946587 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 19:14:39.946602 kernel: Detected PIPT I-cache on CPU0
Feb 9 19:14:39.946617 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 19:14:39.946632 kernel: CPU features: detected: Spectre-v2
Feb 9 19:14:39.946647 kernel: CPU features: detected: Spectre-v3a
Feb 9 19:14:39.946662 kernel: CPU features: detected: Spectre-BHB
Feb 9 19:14:39.946677 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 19:14:39.946692 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 19:14:39.946707 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 19:14:39.946722 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 19:14:39.946742 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 19:14:39.946757 kernel: Policy zone: Normal
Feb 9 19:14:39.946776 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:14:39.946831 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:14:39.946849 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:14:39.946865 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:14:39.946881 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:14:39.946896 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 19:14:39.946912 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 19:14:39.946928 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:14:39.946949 kernel: trace event string verifier disabled
Feb 9 19:14:39.946964 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 19:14:39.946980 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:14:39.946996 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:14:39.947012 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 19:14:39.947027 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:14:39.947043 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:14:39.947058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:14:39.947073 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 19:14:39.947089 kernel: GICv3: 96 SPIs implemented
Feb 9 19:14:39.947103 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 19:14:39.947119 kernel: GICv3: Distributor has no Range Selector support
Feb 9 19:14:39.947138 kernel: Root IRQ handler: gic_handle_irq
Feb 9 19:14:39.947153 kernel: GICv3: 16 PPIs implemented
Feb 9 19:14:39.947168 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 19:14:39.947183 kernel: ACPI: SRAT not present
Feb 9 19:14:39.947197 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 19:14:39.947213 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 19:14:39.947228 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 19:14:39.947244 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 19:14:39.947259 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 19:14:39.947274 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 19:14:39.947289 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 19:14:39.947308 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 19:14:39.947324 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 19:14:39.947339 kernel: Console: colour dummy device 80x25
Feb 9 19:14:39.947355 kernel: printk: console [tty1] enabled
Feb 9 19:14:39.947372 kernel: ACPI: Core revision 20210730
Feb 9 19:14:39.947388 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 19:14:39.947403 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:14:39.947419 kernel: LSM: Security Framework initializing
Feb 9 19:14:39.947434 kernel: SELinux: Initializing.
Feb 9 19:14:39.947450 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:14:39.947489 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:14:39.947508 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:14:39.947524 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 19:14:39.947540 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 19:14:39.947555 kernel: Remapping and enabling EFI services.
Feb 9 19:14:39.947570 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:14:39.947586 kernel: Detected PIPT I-cache on CPU1
Feb 9 19:14:39.947601 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 19:14:39.947617 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 19:14:39.947638 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 19:14:39.947653 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:14:39.947669 kernel: SMP: Total of 2 processors activated.
Feb 9 19:14:39.947684 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 19:14:39.947700 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 19:14:39.947715 kernel: CPU features: detected: CRC32 instructions
Feb 9 19:14:39.947730 kernel: CPU: All CPU(s) started at EL1
Feb 9 19:14:39.947746 kernel: alternatives: patching kernel code
Feb 9 19:14:39.947761 kernel: devtmpfs: initialized
Feb 9 19:14:39.947780 kernel: KASLR disabled due to lack of seed
Feb 9 19:14:39.947818 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:14:39.947836 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:14:39.947864 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:14:39.947884 kernel: SMBIOS 3.0.0 present.
Feb 9 19:14:39.947900 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 19:14:39.947917 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:14:39.947933 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 19:14:39.947949 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 19:14:39.947966 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 19:14:39.947982 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:14:39.947998 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1
Feb 9 19:14:39.948019 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:14:39.948035 kernel: cpuidle: using governor menu
Feb 9 19:14:39.948051 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 19:14:39.948067 kernel: ASID allocator initialised with 32768 entries
Feb 9 19:14:39.948083 kernel: ACPI: bus type PCI registered
Feb 9 19:14:39.948104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:14:39.948120 kernel: Serial: AMBA PL011 UART driver
Feb 9 19:14:39.948136 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:14:39.948152 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 19:14:39.948169 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:14:39.948185 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 19:14:39.948201 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:14:39.948217 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 19:14:39.948233 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:14:39.948253 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:14:39.948269 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:14:39.948285 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:14:39.948301 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:14:39.948317 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:14:39.948333 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:14:39.948349 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:14:39.948365 kernel: ACPI: Interpreter enabled
Feb 9 19:14:39.948382 kernel: ACPI: Using GIC for interrupt routing
Feb 9 19:14:39.948402 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 19:14:39.948418 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 19:14:39.948712 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:14:39.948941 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 19:14:39.949140 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 19:14:39.949335 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 19:14:39.949529 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 19:14:39.954867 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 19:14:39.954895 kernel: acpiphp: Slot [1] registered
Feb 9 19:14:39.954913 kernel: acpiphp: Slot [2] registered
Feb 9 19:14:39.954930 kernel: acpiphp: Slot [3] registered
Feb 9 19:14:39.954947 kernel: acpiphp: Slot [4] registered
Feb 9 19:14:39.954963 kernel: acpiphp: Slot [5] registered
Feb 9 19:14:39.954980 kernel: acpiphp: Slot [6] registered
Feb 9 19:14:39.954996 kernel: acpiphp: Slot [7] registered
Feb 9 19:14:39.955013 kernel: acpiphp: Slot [8] registered
Feb 9 19:14:39.955041 kernel: acpiphp: Slot [9] registered
Feb 9 19:14:39.955058 kernel: acpiphp: Slot [10] registered
Feb 9 19:14:39.955075 kernel: acpiphp: Slot [11] registered
Feb 9 19:14:39.955092 kernel: acpiphp: Slot [12] registered
Feb 9 19:14:39.955108 kernel: acpiphp: Slot [13] registered
Feb 9 19:14:39.955126 kernel: acpiphp: Slot [14] registered
Feb 9 19:14:39.955143 kernel: acpiphp: Slot [15] registered
Feb 9 19:14:39.955160 kernel: acpiphp: Slot [16] registered
Feb 9 19:14:39.955176 kernel: acpiphp: Slot [17] registered
Feb 9 19:14:39.955193 kernel: acpiphp: Slot [18] registered
Feb 9 19:14:39.955215 kernel: acpiphp: Slot [19] registered
Feb 9 19:14:39.955231 kernel: acpiphp: Slot [20] registered
Feb 9 19:14:39.955247 kernel: acpiphp: Slot [21] registered
Feb 9 19:14:39.955264 kernel: acpiphp: Slot [22] registered
Feb 9 19:14:39.955281 kernel: acpiphp: Slot [23] registered
Feb 9 19:14:39.955298 kernel: acpiphp: Slot [24] registered
Feb 9 19:14:39.955315 kernel: acpiphp: Slot [25] registered
Feb 9 19:14:39.955332 kernel: acpiphp: Slot [26] registered
Feb 9 19:14:39.955348 kernel: acpiphp: Slot [27] registered
Feb 9 19:14:39.955369 kernel: acpiphp: Slot [28] registered
Feb 9 19:14:39.955385 kernel: acpiphp: Slot [29] registered
Feb 9 19:14:39.955401 kernel: acpiphp: Slot [30] registered
Feb 9 19:14:39.955416 kernel: acpiphp: Slot [31] registered
Feb 9 19:14:39.955432 kernel: PCI host bridge to bus 0000:00
Feb 9 19:14:39.955742 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 19:14:39.955968 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 19:14:39.956154 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:14:39.956343 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 19:14:39.956575 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 19:14:39.956826 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 19:14:39.957047 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 19:14:39.957270 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 19:14:39.957480 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 19:14:39.957694 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:14:39.957948 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 19:14:39.958158 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 19:14:39.958371 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 19:14:39.958578 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 19:14:39.958783 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:14:39.970100 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 19:14:39.970322 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 19:14:39.970531 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 19:14:39.970735 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 19:14:39.971005 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 19:14:39.971199 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 19:14:39.971384 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 19:14:39.971583 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:14:39.971613 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 19:14:39.971631 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 19:14:39.971649 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 19:14:39.971665 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 19:14:39.971682 kernel: iommu: Default domain type: Translated
Feb 9 19:14:39.971698 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 19:14:39.971714 kernel: vgaarb: loaded
Feb 9 19:14:39.971731 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:14:39.971747 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 19:14:39.971768 kernel: PTP clock support registered
Feb 9 19:14:39.971799 kernel: Registered efivars operations
Feb 9 19:14:39.971821 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 19:14:39.971838 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:14:39.971855 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:14:39.971871 kernel: pnp: PnP ACPI init
Feb 9 19:14:39.972095 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 19:14:39.972120 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 19:14:39.972137 kernel: NET: Registered PF_INET protocol family
Feb 9 19:14:39.972159 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:14:39.972175 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:14:39.972192 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:14:39.972208 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:14:39.972225 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:14:39.972241 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:14:39.972258 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:14:39.972274 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:14:39.972291 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:14:39.972311 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:14:39.972328 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 19:14:39.972344 kernel: kvm [1]: HYP mode not available
Feb 9 19:14:39.972360 kernel: Initialise system trusted keyrings
Feb 9 19:14:39.972377 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:14:39.972393 kernel: Key type asymmetric registered
Feb 9 19:14:39.972410 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:14:39.972426 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:14:39.972442 kernel: io scheduler mq-deadline registered
Feb 9 19:14:39.972463 kernel: io scheduler kyber registered
Feb 9 19:14:39.972479 kernel: io scheduler bfq registered
Feb 9 19:14:39.972697 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 19:14:39.972722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 19:14:39.972739 kernel: ACPI: button: Power Button [PWRB]
Feb 9 19:14:39.972756 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:14:39.972773 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 19:14:39.974085 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 19:14:39.974123 kernel: printk: console [ttyS0] disabled
Feb 9 19:14:39.974141 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 19:14:39.974158 kernel: printk: console [ttyS0] enabled
Feb 9 19:14:39.974175 kernel: printk: bootconsole [uart0] disabled
Feb 9 19:14:39.974191 kernel: thunder_xcv, ver 1.0
Feb 9 19:14:39.974207 kernel: thunder_bgx, ver 1.0
Feb 9 19:14:39.974223 kernel: nicpf, ver 1.0
Feb 9 19:14:39.974239 kernel: nicvf, ver 1.0
Feb 9 19:14:39.974445 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 19:14:39.974638 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T19:14:39 UTC (1707506079)
Feb 9 19:14:39.974662 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:14:39.974678 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:14:39.974695 kernel: Segment Routing with IPv6
Feb 9 19:14:39.974711 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:14:39.974727 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:14:39.974743 kernel: Key type dns_resolver registered
Feb 9 19:14:39.974759 kernel: registered taskstats version 1
Feb 9 19:14:39.974781 kernel: Loading compiled-in X.509 certificates
Feb 9 19:14:39.974818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 19:14:39.974836 kernel: Key type .fscrypt registered
Feb 9 19:14:39.974852 kernel: Key type fscrypt-provisioning registered
Feb 9 19:14:39.974868 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:14:39.974885 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:14:39.974901 kernel: ima: No architecture policies found
Feb 9 19:14:39.974917 kernel: Freeing unused kernel memory: 34688K
Feb 9 19:14:39.974933 kernel: Run /init as init process
Feb 9 19:14:39.974954 kernel: with arguments:
Feb 9 19:14:39.974971 kernel: /init
Feb 9 19:14:39.974986 kernel: with environment:
Feb 9 19:14:39.975002 kernel: HOME=/
Feb 9 19:14:39.975019 kernel: TERM=linux
Feb 9 19:14:39.975034 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:14:39.975056 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:14:39.975077 systemd[1]: Detected virtualization amazon.
Feb 9 19:14:39.975099 systemd[1]: Detected architecture arm64.
Feb 9 19:14:39.975116 systemd[1]: Running in initrd.
Feb 9 19:14:39.975134 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:14:39.975151 systemd[1]: Hostname set to <localhost>.
Feb 9 19:14:39.975169 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:14:39.975186 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:14:39.975204 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:14:39.975221 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:14:39.975242 systemd[1]: Reached target paths.target.
Feb 9 19:14:39.975260 systemd[1]: Reached target slices.target.
Feb 9 19:14:39.975277 systemd[1]: Reached target swap.target.
Feb 9 19:14:39.975294 systemd[1]: Reached target timers.target.
Feb 9 19:14:39.975312 systemd[1]: Listening on iscsid.socket.
Feb 9 19:14:39.975330 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:14:39.975348 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:14:39.975365 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:14:39.975387 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:14:39.975404 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:14:39.975422 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:14:39.975439 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:14:39.975457 systemd[1]: Reached target sockets.target.
Feb 9 19:14:39.975493 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:14:39.975512 systemd[1]: Finished network-cleanup.service.
Feb 9 19:14:39.975529 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:14:39.975547 systemd[1]: Starting systemd-journald.service...
Feb 9 19:14:39.975570 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:14:39.975587 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:14:39.975605 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:14:39.975622 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:14:39.975640 kernel: audit: type=1130 audit(1707506079.961:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:39.975658 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:14:39.975679 systemd-journald[308]: Journal started
Feb 9 19:14:39.975767 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2b6c7f868383a576c2aac397401933) is 8.0M, max 75.4M, 67.4M free.
Feb 9 19:14:39.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:39.944878 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 19:14:39.991951 kernel: audit: type=1130 audit(1707506079.975:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:39.991987 systemd[1]: Started systemd-journald.service.
Feb 9 19:14:39.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:39.999821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:14:40.005065 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 19:14:40.008929 kernel: Bridge firewalling registered
Feb 9 19:14:40.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.016954 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:14:40.024723 kernel: audit: type=1130 audit(1707506080.008:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.038823 kernel: audit: type=1130 audit(1707506080.022:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.039214 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:14:40.052193 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:14:40.060813 kernel: SCSI subsystem initialized
Feb 9 19:14:40.084106 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 19:14:40.100716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:14:40.100752 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:14:40.100775 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:14:40.084132 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:14:40.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.084194 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:14:40.085236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:14:40.114238 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:14:40.144526 kernel: audit: type=1130 audit(1707506080.098:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.148134 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:14:40.157225 kernel: audit: type=1130 audit(1707506080.144:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.156269 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 19:14:40.159592 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:14:40.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.171859 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:14:40.182834 kernel: audit: type=1130 audit(1707506080.159:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.189867 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:14:40.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.201826 kernel: audit: type=1130 audit(1707506080.191:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.205102 dracut-cmdline[327]: dracut-dracut-053
Feb 9 19:14:40.210577 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:14:40.327820 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:14:40.340826 kernel: iscsi: registered transport (tcp)
Feb 9 19:14:40.365239 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:14:40.365321 kernel: QLogic iSCSI HBA Driver
Feb 9 19:14:40.553646 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 19:14:40.555559 kernel: random: crng init done
Feb 9 19:14:40.557061 systemd[1]: Started systemd-resolved.service.
Feb 9 19:14:40.560170 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:14:40.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.570836 kernel: audit: type=1130 audit(1707506080.558:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.585331 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:14:40.589904 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:14:40.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:40.654842 kernel: raid6: neonx8 gen() 6383 MB/s
Feb 9 19:14:40.672819 kernel: raid6: neonx8 xor() 4743 MB/s
Feb 9 19:14:40.690818 kernel: raid6: neonx4 gen() 6446 MB/s
Feb 9 19:14:40.708819 kernel: raid6: neonx4 xor() 4907 MB/s
Feb 9 19:14:40.726818 kernel: raid6: neonx2 gen() 5762 MB/s
Feb 9 19:14:40.744818 kernel: raid6: neonx2 xor() 4510 MB/s
Feb 9 19:14:40.762818 kernel: raid6: neonx1 gen() 4469 MB/s
Feb 9 19:14:40.780817 kernel: raid6: neonx1 xor() 3685 MB/s
Feb 9 19:14:40.798818 kernel: raid6: int64x8 gen() 3416 MB/s
Feb 9 19:14:40.816817 kernel: raid6: int64x8 xor() 2099 MB/s
Feb 9 19:14:40.834818 kernel: raid6: int64x4 gen() 3793 MB/s
Feb 9 19:14:40.852818 kernel: raid6: int64x4 xor() 2202 MB/s
Feb 9 19:14:40.870817 kernel: raid6: int64x2 gen() 3593 MB/s
Feb 9 19:14:40.888818 kernel: raid6: int64x2 xor() 1917 MB/s
Feb 9 19:14:40.906817 kernel: raid6: int64x1 gen() 2768 MB/s
Feb 9 19:14:40.926286 kernel: raid6: int64x1 xor() 1455 MB/s
Feb 9 19:14:40.926315 kernel: raid6: using algorithm neonx4 gen() 6446 MB/s
Feb 9 19:14:40.926339 kernel: raid6: .... xor() 4907 MB/s, rmw enabled
Feb 9 19:14:40.928113 kernel: raid6: using neon recovery algorithm
Feb 9 19:14:40.946823 kernel: xor: measuring software checksum speed
Feb 9 19:14:40.948818 kernel: 8regs : 9332 MB/sec
Feb 9 19:14:40.951817 kernel: 32regs : 11112 MB/sec
Feb 9 19:14:40.955666 kernel: arm64_neon : 9616 MB/sec
Feb 9 19:14:40.955698 kernel: xor: using function: 32regs (11112 MB/sec)
Feb 9 19:14:41.045843 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 19:14:41.062718 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:14:41.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.064000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:14:41.064000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:14:41.067284 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:14:41.093959 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 9 19:14:41.105275 systemd[1]: Started systemd-udevd.service.
Feb 9 19:14:41.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.108942 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:14:41.139609 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Feb 9 19:14:41.198438 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:14:41.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.202670 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:14:41.306071 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:14:41.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.421833 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 19:14:41.421898 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 19:14:41.434042 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 19:14:41.434341 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 19:14:41.444827 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a6:ca:c3:c7:79
Feb 9 19:14:41.450168 (udev-worker)[554]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:14:41.455269 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 19:14:41.457899 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 19:14:41.468821 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 19:14:41.476216 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 19:14:41.476270 kernel: GPT:9289727 != 16777215
Feb 9 19:14:41.476295 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 19:14:41.478344 kernel: GPT:9289727 != 16777215
Feb 9 19:14:41.479613 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 19:14:41.482940 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:41.555815 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (559)
Feb 9 19:14:41.581850 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:14:41.634369 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:14:41.662021 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:14:41.665874 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:14:41.692103 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:14:41.714879 systemd[1]: Starting disk-uuid.service...
Feb 9 19:14:41.726080 disk-uuid[666]: Primary Header is updated.
Feb 9 19:14:41.726080 disk-uuid[666]: Secondary Entries is updated.
Feb 9 19:14:41.726080 disk-uuid[666]: Secondary Header is updated.
Feb 9 19:14:41.735878 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:41.745819 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:42.750840 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:42.750918 disk-uuid[667]: The operation has completed successfully.
Feb 9 19:14:42.924153 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:14:42.926429 systemd[1]: Finished disk-uuid.service.
Feb 9 19:14:42.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.949213 systemd[1]: Starting verity-setup.service...
Feb 9 19:14:42.974829 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 19:14:43.057027 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:14:43.062077 systemd[1]: Finished verity-setup.service.
Feb 9 19:14:43.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.066394 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:14:43.149833 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:14:43.151273 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:14:43.152497 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:14:43.155827 systemd[1]: Starting ignition-setup.service...
Feb 9 19:14:43.163937 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:14:43.186925 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:14:43.186990 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:14:43.187014 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:14:43.197853 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:14:43.215626 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:14:43.249042 systemd[1]: Finished ignition-setup.service.
Feb 9 19:14:43.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.253378 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:14:43.309907 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:14:43.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.312000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:14:43.315720 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:14:43.361131 systemd-networkd[1095]: lo: Link UP
Feb 9 19:14:43.361154 systemd-networkd[1095]: lo: Gained carrier
Feb 9 19:14:43.365380 systemd-networkd[1095]: Enumeration completed
Feb 9 19:14:43.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.365874 systemd-networkd[1095]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:14:43.366097 systemd[1]: Started systemd-networkd.service.
Feb 9 19:14:43.368008 systemd[1]: Reached target network.target.
Feb 9 19:14:43.378410 systemd[1]: Starting iscsiuio.service...
Feb 9 19:14:43.383643 systemd-networkd[1095]: eth0: Link UP
Feb 9 19:14:43.386039 systemd-networkd[1095]: eth0: Gained carrier
Feb 9 19:14:43.391783 systemd[1]: Started iscsiuio.service.
Feb 9 19:14:43.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.394746 systemd[1]: Starting iscsid.service...
Feb 9 19:14:43.403403 iscsid[1100]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:14:43.403403 iscsid[1100]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:14:43.403403 iscsid[1100]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:14:43.403403 iscsid[1100]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:14:43.403403 iscsid[1100]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:14:43.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.424537 iscsid[1100]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:14:43.416574 systemd[1]: Started iscsid.service.
Feb 9 19:14:43.429418 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:14:43.429687 systemd-networkd[1095]: eth0: DHCPv4 address 172.31.23.244/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 19:14:43.456994 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:14:43.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.457582 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:14:43.458510 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:14:43.459163 systemd[1]: Reached target remote-fs.target.
Feb 9 19:14:43.462404 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:14:43.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.484844 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:14:43.799179 ignition[1048]: Ignition 2.14.0
Feb 9 19:14:43.799205 ignition[1048]: Stage: fetch-offline
Feb 9 19:14:43.799558 ignition[1048]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:43.799620 ignition[1048]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:43.816821 ignition[1048]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:43.819627 ignition[1048]: Ignition finished successfully
Feb 9 19:14:43.822755 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:14:43.835749 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:14:43.835813 kernel: audit: type=1130 audit(1707506083.823:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.827233 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:14:43.845957 ignition[1119]: Ignition 2.14.0
Feb 9 19:14:43.845985 ignition[1119]: Stage: fetch
Feb 9 19:14:43.846290 ignition[1119]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:43.846348 ignition[1119]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:43.861466 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:43.863766 ignition[1119]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:43.872654 ignition[1119]: INFO : PUT result: OK
Feb 9 19:14:43.875999 ignition[1119]: DEBUG : parsed url from cmdline: ""
Feb 9 19:14:43.875999 ignition[1119]: INFO : no config URL provided
Feb 9 19:14:43.875999 ignition[1119]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:14:43.875999 ignition[1119]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 19:14:43.875999 ignition[1119]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:43.887123 ignition[1119]: INFO : PUT result: OK
Feb 9 19:14:43.887123 ignition[1119]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 19:14:43.887123 ignition[1119]: INFO : GET result: OK
Feb 9 19:14:43.887123 ignition[1119]: DEBUG : parsing config with SHA512: 10ae826eb6801d4c324d87332b5f5fde10a19b6634635ee213fb9f21a485a6aff7d2c8325d7ca4db2adca7c1fa7921cbd2fb882e25604256a5cbb7b81492bba3
Feb 9 19:14:43.957387 unknown[1119]: fetched base config from "system"
Feb 9 19:14:43.957415 unknown[1119]: fetched base config from "system"
Feb 9 19:14:43.957431 unknown[1119]: fetched user config from "aws"
Feb 9 19:14:43.960989 ignition[1119]: fetch: fetch complete
Feb 9 19:14:43.961013 ignition[1119]: fetch: fetch passed
Feb 9 19:14:43.961121 ignition[1119]: Ignition finished successfully
Feb 9 19:14:43.966280 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:14:43.982922 kernel: audit: type=1130 audit(1707506083.968:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.971755 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:14:43.996187 ignition[1125]: Ignition 2.14.0
Feb 9 19:14:43.996215 ignition[1125]: Stage: kargs
Feb 9 19:14:43.996500 ignition[1125]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:43.996554 ignition[1125]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:44.012030 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:44.014565 ignition[1125]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:44.017707 ignition[1125]: INFO : PUT result: OK
Feb 9 19:14:44.035285 ignition[1125]: kargs: kargs passed
Feb 9 19:14:44.035392 ignition[1125]: Ignition finished successfully
Feb 9 19:14:44.039099 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:14:44.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.046834 systemd[1]: Starting ignition-disks.service...
Feb 9 19:14:44.055776 kernel: audit: type=1130 audit(1707506084.044:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.062832 ignition[1131]: Ignition 2.14.0
Feb 9 19:14:44.062862 ignition[1131]: Stage: disks
Feb 9 19:14:44.063162 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:44.063220 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:44.077895 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:44.080224 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:44.083530 ignition[1131]: INFO : PUT result: OK
Feb 9 19:14:44.089511 ignition[1131]: disks: disks passed
Feb 9 19:14:44.089628 ignition[1131]: Ignition finished successfully
Feb 9 19:14:44.094260 systemd[1]: Finished ignition-disks.service.
Feb 9 19:14:44.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.097530 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:14:44.110826 kernel: audit: type=1130 audit(1707506084.096:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.106086 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:14:44.107733 systemd[1]: Reached target local-fs.target.
Feb 9 19:14:44.113733 systemd[1]: Reached target sysinit.target.
Feb 9 19:14:44.115360 systemd[1]: Reached target basic.target.
Feb 9 19:14:44.120977 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:14:44.171920 systemd-fsck[1139]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 19:14:44.179609 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:14:44.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.190824 kernel: audit: type=1130 audit(1707506084.180:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.191198 systemd[1]: Mounting sysroot.mount...
Feb 9 19:14:44.207839 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:14:44.208431 systemd[1]: Mounted sysroot.mount.
Feb 9 19:14:44.211141 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:14:44.226250 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:14:44.228436 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:14:44.228513 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:14:44.228566 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:14:44.236604 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:14:44.253598 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:14:44.261997 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:14:44.277822 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1156)
Feb 9 19:14:44.283721 initrd-setup-root[1161]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:14:44.288504 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:14:44.288546 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:14:44.290975 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:14:44.298814 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:14:44.303812 initrd-setup-root[1187]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:14:44.307649 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:14:44.316152 initrd-setup-root[1195]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:14:44.324906 initrd-setup-root[1203]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:14:44.527808 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:14:44.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.532698 systemd[1]: Starting ignition-mount.service...
Feb 9 19:14:44.546486 kernel: audit: type=1130 audit(1707506084.530:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.542415 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:14:44.554864 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:14:44.555034 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:14:44.577934 ignition[1222]: INFO : Ignition 2.14.0
Feb 9 19:14:44.579906 ignition[1222]: INFO : Stage: mount
Feb 9 19:14:44.581948 ignition[1222]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:44.584530 ignition[1222]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:44.606114 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:14:44.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.616819 kernel: audit: type=1130 audit(1707506084.608:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.624231 ignition[1222]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:44.626759 ignition[1222]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:44.629962 ignition[1222]: INFO : PUT result: OK
Feb 9 19:14:44.635487 ignition[1222]: INFO : mount: mount passed
Feb 9 19:14:44.637103 ignition[1222]: INFO : Ignition finished successfully
Feb 9 19:14:44.639427 systemd[1]: Finished ignition-mount.service.
Feb 9 19:14:44.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.643769 systemd[1]: Starting ignition-files.service...
Feb 9 19:14:44.652502 kernel: audit: type=1130 audit(1707506084.641:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.659845 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:14:44.678842 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1231)
Feb 9 19:14:44.684049 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:14:44.684082 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:14:44.684106 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:14:44.692820 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:14:44.697349 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:14:44.715200 ignition[1250]: INFO : Ignition 2.14.0
Feb 9 19:14:44.715200 ignition[1250]: INFO : Stage: files
Feb 9 19:14:44.718942 ignition[1250]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:44.718942 ignition[1250]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:44.732327 ignition[1250]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:44.735123 ignition[1250]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:44.738347 ignition[1250]: INFO : PUT result: OK
Feb 9 19:14:44.744223 ignition[1250]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:14:44.748336 ignition[1250]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:14:44.751036 ignition[1250]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:14:44.777314 ignition[1250]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:14:44.780109 ignition[1250]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:14:44.783849 unknown[1250]: wrote ssh authorized keys file for user: core
Feb 9 19:14:44.786108 ignition[1250]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:14:44.789584 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 19:14:44.793333 ignition[1250]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 19:14:44.895968 systemd-networkd[1095]: eth0: Gained IPv6LL
Feb 9 19:14:45.299019 ignition[1250]: INFO : GET result: OK
Feb 9 19:14:45.865343 ignition[1250]: DEBUG : file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 19:14:45.870681 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 19:14:45.870681 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 19:14:45.870681 ignition[1250]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 19:14:45.912209 ignition[1250]: INFO : GET result: OK
Feb 9 19:14:46.015013 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 19:14:46.018816 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 19:14:46.018816 ignition[1250]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 19:14:46.388964 ignition[1250]: INFO : GET result: OK
Feb 9 19:14:46.668838 ignition[1250]: DEBUG : file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 19:14:46.673353 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 19:14:46.673353 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:14:46.673353 ignition[1250]: INFO : GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1
Feb 9 19:14:46.786814 ignition[1250]: INFO : GET result: OK
Feb 9 19:14:47.403027 ignition[1250]: DEBUG : file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432
Feb 9 19:14:47.408328 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:14:47.408328 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:14:47.408328 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:14:47.408328 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 19:14:47.408328 ignition[1250]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:14:47.438093 ignition[1250]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem497593142"
Feb 9 19:14:47.438093 ignition[1250]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem497593142": device or resource busy
Feb 9 19:14:47.438093 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem497593142", trying btrfs: device or resource busy
Feb 9 19:14:47.438093 ignition[1250]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem497593142"
Feb 9 19:14:47.450421 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1255)
Feb 9 19:14:47.450462 ignition[1250]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem497593142"
Feb 9 19:14:47.460494 ignition[1250]: INFO : op(3): [started] unmounting "/mnt/oem497593142"
Feb 9 19:14:47.462781 ignition[1250]: INFO : op(3): [finished] unmounting "/mnt/oem497593142"
Feb 9 19:14:47.464918 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 19:14:47.464918 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:14:47.471641 ignition[1250]: INFO : GET
https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Feb 9 19:14:47.477040 systemd[1]: mnt-oem497593142.mount: Deactivated successfully. Feb 9 19:14:47.533265 ignition[1250]: INFO : GET result: OK Feb 9 19:14:48.145423 ignition[1250]: DEBUG : file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Feb 9 19:14:48.149915 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:14:48.149915 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:48.149915 ignition[1250]: INFO : GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Feb 9 19:14:48.222923 ignition[1250]: INFO : GET result: OK Feb 9 19:14:50.732746 ignition[1250]: DEBUG : file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Feb 9 19:14:50.737687 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:50.737687 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:50.744542 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:50.744542 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:50.751298 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:50.754662 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:50.758148 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:50.761519 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:14:50.764994 ignition[1250]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 19:14:51.172938 ignition[1250]: INFO : GET result: OK Feb 9 19:14:51.315857 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:14:51.322126 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:51.322126 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:51.322126 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:14:51.322126 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:14:51.322126 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:14:51.322126 ignition[1250]: INFO : oem config not found in "/usr/share/oem", looking on 
oem partition Feb 9 19:14:51.371595 ignition[1250]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1140773350" Feb 9 19:14:51.371595 ignition[1250]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1140773350": device or resource busy Feb 9 19:14:51.371595 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1140773350", trying btrfs: device or resource busy Feb 9 19:14:51.371595 ignition[1250]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1140773350" Feb 9 19:14:51.371595 ignition[1250]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1140773350" Feb 9 19:14:51.371595 ignition[1250]: INFO : op(6): [started] unmounting "/mnt/oem1140773350" Feb 9 19:14:51.371595 ignition[1250]: INFO : op(6): [finished] unmounting "/mnt/oem1140773350" Feb 9 19:14:51.371595 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:14:51.371595 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:51.371595 ignition[1250]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:51.357470 systemd[1]: mnt-oem1140773350.mount: Deactivated successfully. Feb 9 19:14:51.424358 ignition[1250]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem173915095" Feb 9 19:14:51.424358 ignition[1250]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem173915095": device or resource busy Feb 9 19:14:51.424358 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem173915095", trying btrfs: device or resource busy Feb 9 19:14:51.424358 ignition[1250]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem173915095" Feb 9 19:14:51.424358 ignition[1250]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem173915095" Feb 9 19:14:51.424358 ignition[1250]: INFO : op(9): [started] unmounting "/mnt/oem173915095" Feb 9 19:14:51.424358 ignition[1250]: INFO : op(9): [finished] unmounting "/mnt/oem173915095" Feb 9 19:14:51.395553 systemd[1]: mnt-oem173915095.mount: Deactivated successfully. 
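Each "oem config not found ... looking on oem partition" block above follows the same arc: Ignition first tries to mount /dev/disk/by-label/OEM as ext4, that attempt fails with "device or resource busy" (the partition is in fact btrfs, as the kernel's BTRFS lines confirm), and its own logged fallback "trying btrfs" then succeeds; the file is written and the temporary mount is removed. A rough Python equivalent of that try-then-fall-back mount, assuming root privileges (the payload file name in the usage lines is hypothetical; the helper is my sketch, not Ignition code):

    # Mount with a filesystem-type fallback, mirroring the
    # CRITICAL-then-retry pattern in the Ignition entries above.
    import subprocess, tempfile

    def mount_with_fallback(device, fstypes=("ext4", "btrfs")):
        mountpoint = tempfile.mkdtemp(prefix="oem")
        last_err = None
        for fstype in fstypes:
            result = subprocess.run(
                ["mount", "-t", fstype, device, mountpoint],
                capture_output=True, text=True)
            if result.returncode == 0:
                return mountpoint            # first type that mounts wins
            last_err = result.stderr.strip() # e.g. "device or resource busy"
        raise RuntimeError("could not mount %s: %s" % (device, last_err))

    mnt = mount_with_fallback("/dev/disk/by-label/OEM")
    try:
        print(open(mnt + "/oem-release").read())  # hypothetical file name
    finally:
        subprocess.run(["umount", mnt], check=True)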
Feb 9 19:14:51.447837 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:51.447837 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:51.447837 ignition[1250]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:51.467451 ignition[1250]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2958402986" Feb 9 19:14:51.477629 ignition[1250]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2958402986": device or resource busy Feb 9 19:14:51.477629 ignition[1250]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2958402986", trying btrfs: device or resource busy Feb 9 19:14:51.477629 ignition[1250]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2958402986" Feb 9 19:14:51.477629 ignition[1250]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2958402986" Feb 9 19:14:51.477629 ignition[1250]: INFO : op(c): [started] unmounting "/mnt/oem2958402986" Feb 9 19:14:51.477629 ignition[1250]: INFO : op(c): [finished] unmounting "/mnt/oem2958402986" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(14): [started] processing unit "nvidia.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(14): [finished] processing unit "nvidia.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(16): [started] processing unit "amazon-ssm-agent.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(16): op(17): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(16): op(17): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(16): [finished] processing unit "amazon-ssm-agent.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(18): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(18): op(19): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(18): op(19): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(18): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at 
"/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 9 19:14:51.512882 ignition[1250]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:51.579368 kernel: audit: type=1130 audit(1707506091.512:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.512069 systemd[1]: Finished ignition-files.service. Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(1e): [started] setting preset to enabled for "nvidia.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(1e): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(20): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(20): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:51.581000 ignition[1250]: INFO : files: files passed Feb 9 19:14:51.581000 ignition[1250]: INFO : Ignition finished successfully Feb 9 19:14:51.633575 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:14:51.642874 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:14:51.648148 systemd[1]: Starting ignition-quench.service... 
Feb 9 19:14:51.656317 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:14:51.658604 systemd[1]: Finished ignition-quench.service. Feb 9 19:14:51.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.676136 kernel: audit: type=1130 audit(1707506091.657:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.676199 kernel: audit: type=1131 audit(1707506091.657:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.676238 initrd-setup-root-after-ignition[1275]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:14:51.690532 kernel: audit: type=1130 audit(1707506091.677:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.671537 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:14:51.678304 systemd[1]: Reached target ignition-complete.target. Feb 9 19:14:51.692529 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:14:51.724173 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:14:51.726349 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:14:51.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.729735 systemd[1]: Reached target initrd-fs.target. Feb 9 19:14:51.750423 kernel: audit: type=1130 audit(1707506091.727:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.750462 kernel: audit: type=1131 audit(1707506091.727:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.745534 systemd[1]: Reached target initrd.target. Feb 9 19:14:51.745708 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:14:51.749035 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:14:51.778370 systemd[1]: Finished dracut-pre-pivot.service. 
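The "setting preset to enabled for ..." entries just above are systemd preset policy at work: rather than enabling units directly, Ignition records which of the units it wrote should be enabled, and the presets are applied on first boot by creating the matching .wants/ symlinks. A sketch of the equivalent preset file written from Python (the unit list is copied from the log; the preset file name is illustrative, not necessarily the exact one Ignition uses):

    # One "enable" line per unit, matching the preset entries in the log.
    from pathlib import Path

    units = [
        "nvidia.service",
        "coreos-metadata-sshkeys@.service",
        "amazon-ssm-agent.service",
        "prepare-cni-plugins.service",
        "prepare-critools.service",
        "prepare-helm.service",
    ]
    preset = "".join("enable %s\n" % unit for unit in units)

    path = Path("/sysroot/etc/systemd/system-preset/20-ignition.preset")  # illustrative name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(preset)
    # "systemctl preset-all" later turns each line into a .wants/ symlink.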
Feb 9 19:14:51.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.782923 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:14:51.791746 kernel: audit: type=1130 audit(1707506091.780:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.804147 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:14:51.807703 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:14:51.811350 systemd[1]: Stopped target timers.target. Feb 9 19:14:51.814358 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:14:51.815245 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:14:51.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.826825 systemd[1]: Stopped target initrd.target. Feb 9 19:14:51.828605 kernel: audit: type=1131 audit(1707506091.817:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.830355 systemd[1]: Stopped target basic.target. Feb 9 19:14:51.833452 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:14:51.836968 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:14:51.840447 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:14:51.844077 systemd[1]: Stopped target remote-fs.target. Feb 9 19:14:51.847114 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:14:51.850631 systemd[1]: Stopped target sysinit.target. Feb 9 19:14:51.853570 systemd[1]: Stopped target local-fs.target. Feb 9 19:14:51.869164 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:14:51.872560 systemd[1]: Stopped target swap.target. Feb 9 19:14:51.875454 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:14:51.877544 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:14:51.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.880925 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:14:51.889169 kernel: audit: type=1131 audit(1707506091.879:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.890926 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:14:51.893059 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:14:51.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.896529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:14:51.905542 kernel: audit: type=1131 audit(1707506091.895:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:51.896750 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:14:51.909345 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:14:51.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.910652 systemd[1]: Stopped ignition-files.service. Feb 9 19:14:51.916225 systemd[1]: Stopping ignition-mount.service... Feb 9 19:14:51.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.919812 systemd[1]: Stopping iscsiuio.service... Feb 9 19:14:51.922712 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:14:51.923585 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:14:51.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.931913 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:14:51.933414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:14:51.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.933704 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:14:51.937345 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:14:51.937599 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:14:51.948668 ignition[1288]: INFO : Ignition 2.14.0 Feb 9 19:14:51.950462 ignition[1288]: INFO : Stage: umount Feb 9 19:14:51.950462 ignition[1288]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:51.950462 ignition[1288]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:51.966844 ignition[1288]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:51.969338 ignition[1288]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:51.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.972028 ignition[1288]: INFO : PUT result: OK Feb 9 19:14:51.977048 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:14:51.978892 systemd[1]: Stopped iscsiuio.service. Feb 9 19:14:51.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.985568 ignition[1288]: INFO : umount: umount passed Feb 9 19:14:51.985568 ignition[1288]: INFO : Ignition finished successfully Feb 9 19:14:51.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.983913 systemd[1]: ignition-mount.service: Deactivated successfully. 
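Every stage logs "parsing config with SHA512: 6629d8..." for the base config, and each fetched artifact earlier got a "file matches expected sum of: ..." line; both are plain SHA-512 comparisons. Recomputing the config digest for comparison (path and digest copied from the log; the helper name is mine):

    # Recompute the digest Ignition logs for the base config; the same
    # check covers the "file matches expected sum of:" artifact lines.
    import hashlib

    def sha512_matches(path, expected_hex):
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest() == expected_hex

    print(sha512_matches(
        "/usr/lib/ignition/base.d/base.ign",
        "6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8"
        "b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b"))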
Feb 9 19:14:51.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.984550 systemd[1]: Stopped ignition-mount.service. Feb 9 19:14:51.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.990410 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:14:52.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.990617 systemd[1]: Stopped ignition-disks.service. Feb 9 19:14:51.994394 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:14:51.994600 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:14:52.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:51.998188 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:14:51.998509 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:14:52.001397 systemd[1]: Stopped target network.target. Feb 9 19:14:52.002972 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:14:52.003078 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:14:52.004954 systemd[1]: Stopped target paths.target. Feb 9 19:14:52.008069 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:14:52.014529 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:14:52.017454 systemd[1]: Stopped target slices.target. Feb 9 19:14:52.018908 systemd[1]: Stopped target sockets.target. Feb 9 19:14:52.020564 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:14:52.020626 systemd[1]: Closed iscsid.socket. Feb 9 19:14:52.023975 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:14:52.024053 systemd[1]: Closed iscsiuio.socket. Feb 9 19:14:52.025489 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:14:52.025578 systemd[1]: Stopped ignition-setup.service. Feb 9 19:14:52.028048 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:14:52.033778 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:14:52.035919 systemd-networkd[1095]: eth0: DHCPv6 lease lost Feb 9 19:14:52.038292 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:14:52.038491 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:14:52.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.068442 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 9 19:14:52.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.068648 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:14:52.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.072729 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:14:52.085000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:14:52.085000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:14:52.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.072939 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:14:52.076449 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:14:52.076534 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:14:52.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.079332 systemd[1]: Stopping network-cleanup.service... Feb 9 19:14:52.081827 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:14:52.081948 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:14:52.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.083935 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:14:52.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:52.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.084020 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:14:52.087089 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:14:52.087177 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:14:52.089133 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:14:52.099070 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:14:52.101139 systemd[1]: Stopped network-cleanup.service. Feb 9 19:14:52.104276 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:14:52.104542 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:14:52.107082 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:14:52.107167 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:14:52.110039 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:14:52.110114 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:14:52.113276 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:14:52.113362 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:14:52.115276 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:14:52.115359 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:14:52.118171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:14:52.118250 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:14:52.123368 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:14:52.131660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:14:52.131840 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:14:52.139943 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:14:52.140123 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:14:52.340481 systemd[1]: mnt-oem2958402986.mount: Deactivated successfully. Feb 9 19:14:52.340661 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:14:52.340927 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:14:52.392655 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:14:52.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.393341 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:14:52.400072 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:14:52.400163 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:14:52.400249 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:14:52.401728 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:14:52.421731 systemd[1]: Switching root. Feb 9 19:14:52.453583 iscsid[1100]: iscsid shutting down. Feb 9 19:14:52.455231 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Feb 9 19:14:52.455306 systemd-journald[308]: Journal stopped Feb 9 19:14:57.440466 kernel: SELinux: Class mctp_socket not defined in policy. 
Feb 9 19:14:57.440589 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:14:57.440629 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:14:57.440670 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:14:57.440703 kernel: SELinux: policy capability open_perms=1 Feb 9 19:14:57.440734 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:14:57.440764 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:14:57.440832 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:14:57.440866 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:14:57.440896 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:14:57.440935 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:14:57.440970 systemd[1]: Successfully loaded SELinux policy in 93.661ms. Feb 9 19:14:57.441026 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.538ms. Feb 9 19:14:57.441061 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:14:57.441092 systemd[1]: Detected virtualization amazon. Feb 9 19:14:57.441121 systemd[1]: Detected architecture arm64. Feb 9 19:14:57.441153 systemd[1]: Detected first boot. Feb 9 19:14:57.441182 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:14:57.441213 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:14:57.441247 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:14:57.441279 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:14:57.445934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:14:57.445970 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:14:57.453076 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 9 19:14:57.453131 kernel: audit: type=1334 audit(1707506097.061:87): prog-id=12 op=LOAD Feb 9 19:14:57.453163 kernel: audit: type=1334 audit(1707506097.064:88): prog-id=3 op=UNLOAD Feb 9 19:14:57.453195 kernel: audit: type=1334 audit(1707506097.066:89): prog-id=13 op=LOAD Feb 9 19:14:57.453231 kernel: audit: type=1334 audit(1707506097.069:90): prog-id=14 op=LOAD Feb 9 19:14:57.453261 kernel: audit: type=1334 audit(1707506097.069:91): prog-id=4 op=UNLOAD Feb 9 19:14:57.453291 kernel: audit: type=1334 audit(1707506097.069:92): prog-id=5 op=UNLOAD Feb 9 19:14:57.453321 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:14:57.453354 kernel: audit: type=1334 audit(1707506097.071:93): prog-id=15 op=LOAD Feb 9 19:14:57.453383 systemd[1]: Stopped iscsid.service. 
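The "SELinux: policy capability ...=0/1" lines list the feature toggles compiled into the policy that was just loaded; after boot the same values can be read back from selinuxfs. A short sketch, assuming selinuxfs is mounted at its usual /sys/fs/selinux:

    # Read back the policy capabilities the kernel printed above; each
    # file under policy_capabilities holds "0" or "1".
    from pathlib import Path

    for cap in sorted(Path("/sys/fs/selinux/policy_capabilities").iterdir()):
        print("%s=%s" % (cap.name, cap.read_text().strip()))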
Feb 9 19:14:57.453412 kernel: audit: type=1334 audit(1707506097.071:94): prog-id=12 op=UNLOAD Feb 9 19:14:57.453442 kernel: audit: type=1334 audit(1707506097.074:95): prog-id=16 op=LOAD Feb 9 19:14:57.453474 kernel: audit: type=1334 audit(1707506097.076:96): prog-id=17 op=LOAD Feb 9 19:14:57.453506 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:14:57.453539 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:14:57.453571 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:14:57.453604 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:14:57.453634 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:14:57.453664 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:14:57.453693 systemd[1]: Created slice system-getty.slice. Feb 9 19:14:57.453726 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:14:57.453761 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:14:57.453815 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:14:57.453851 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:14:57.453883 systemd[1]: Created slice user.slice. Feb 9 19:14:57.453915 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:14:57.453944 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:14:57.453977 systemd[1]: Set up automount boot.automount. Feb 9 19:14:57.454009 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:14:57.454046 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:14:57.454077 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:14:57.454108 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:14:57.454139 systemd[1]: Reached target integritysetup.target. Feb 9 19:14:57.454172 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:14:57.454203 systemd[1]: Reached target remote-fs.target. Feb 9 19:14:57.454233 systemd[1]: Reached target slices.target. Feb 9 19:14:57.454262 systemd[1]: Reached target swap.target. Feb 9 19:14:57.454292 systemd[1]: Reached target torcx.target. Feb 9 19:14:57.454324 systemd[1]: Reached target veritysetup.target. Feb 9 19:14:57.454353 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:14:57.454393 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:14:57.454431 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:14:57.454463 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:14:57.454494 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:14:57.454525 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:14:57.454554 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:14:57.454586 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:14:57.454617 systemd[1]: Mounting media.mount... Feb 9 19:14:57.454651 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:14:57.454680 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:14:57.454709 systemd[1]: Mounting tmp.mount... Feb 9 19:14:57.454740 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:14:57.454770 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:14:57.454818 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:14:57.454853 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:14:57.454884 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:14:57.454914 systemd[1]: Starting modprobe@drm.service... 
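Slice names such as system-coreos\x2dmetadata\x2dsshkeys.slice above are not corruption: "-" separates levels of the slice hierarchy, so literal dashes inside a component are escaped as \x2d. A simplified Python version of the rule (the full algorithm, which also covers "/" and leading dots, is what systemd-escape(1) implements; the function name is mine):

    # Simplified systemd unit-name escaping: a literal "-" inside a
    # component becomes \x2d, since "-" is the hierarchy separator.
    def escape_component(name):
        return "".join(
            ch if ch.isalnum() or ch in ":_." else "\\x%02x" % ord(ch)
            for ch in name)

    assert escape_component("coreos-metadata-sshkeys") == \
        "coreos\\x2dmetadata\\x2dsshkeys"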
Feb 9 19:14:57.454948 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:14:57.454977 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:14:57.455006 systemd[1]: Starting modprobe@loop.service... Feb 9 19:14:57.455037 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:14:57.455066 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:14:57.455097 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:14:57.455129 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:14:57.455158 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:14:57.455188 systemd[1]: Stopped systemd-journald.service. Feb 9 19:14:57.455220 systemd[1]: Starting systemd-journald.service... Feb 9 19:14:57.455249 kernel: fuse: init (API version 7.34) Feb 9 19:14:57.455277 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:14:57.455307 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:14:57.455338 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:14:57.455383 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:14:57.455418 kernel: loop: module loaded Feb 9 19:14:57.455448 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:14:57.455477 systemd[1]: Stopped verity-setup.service. Feb 9 19:14:57.455513 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:14:57.455543 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:14:57.455577 systemd[1]: Mounted media.mount. Feb 9 19:14:57.455618 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:14:57.455651 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:14:57.455684 systemd[1]: Mounted tmp.mount. Feb 9 19:14:57.455713 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:14:57.455743 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:14:57.455774 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:14:57.455821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:14:57.455853 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:14:57.455882 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:14:57.455914 systemd-journald[1406]: Journal started Feb 9 19:14:57.462009 systemd-journald[1406]: Runtime Journal (/run/log/journal/ec2b6c7f868383a576c2aac397401933) is 8.0M, max 75.4M, 67.4M free. Feb 9 19:14:57.462111 systemd[1]: Finished modprobe@drm.service. 
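With systemd-journald[1406] running, the entries interleaved here land in the runtime journal under /run/log/journal/<machine-id>/ and can be filtered after the fact. For instance, pulling out just the Ignition messages (this assumes a host with the python-systemd bindings; Flatcar itself ships no Python, so this is for offline analysis, and journal.Reader(path=...) can point at a copied journal directory instead):

    # Filter the journal for Ignition's entries.
    from systemd import journal

    reader = journal.Reader()
    reader.add_match(SYSLOG_IDENTIFIER="ignition")
    for entry in reader:
        print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])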
Feb 9 19:14:53.012000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:14:53.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:14:53.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:14:53.193000 audit: BPF prog-id=10 op=LOAD Feb 9 19:14:53.193000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:14:53.193000 audit: BPF prog-id=11 op=LOAD Feb 9 19:14:53.193000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:14:53.371000 audit[1324]: AVC avc: denied { associate } for pid=1324 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:14:53.371000 audit[1324]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458d4 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1307 pid=1324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:53.371000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:14:53.375000 audit[1324]: AVC avc: denied { associate } for pid=1324 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:14:53.375000 audit[1324]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001459b9 a2=1ed a3=0 items=2 ppid=1307 pid=1324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:53.375000 audit: CWD cwd="/" Feb 9 19:14:53.375000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:14:53.375000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:14:53.375000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:14:57.061000 audit: BPF prog-id=12 op=LOAD Feb 9 19:14:57.064000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:14:57.066000 audit: BPF prog-id=13 op=LOAD Feb 9 19:14:57.069000 audit: BPF prog-id=14 op=LOAD Feb 9 19:14:57.069000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:14:57.069000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:14:57.071000 audit: BPF prog-id=15 op=LOAD Feb 9 19:14:57.071000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:14:57.074000 audit: BPF 
prog-id=16 op=LOAD Feb 9 19:14:57.076000 audit: BPF prog-id=17 op=LOAD Feb 9 19:14:57.076000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:14:57.076000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:14:57.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.090000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:14:57.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.346000 audit: BPF prog-id=18 op=LOAD Feb 9 19:14:57.346000 audit: BPF prog-id=19 op=LOAD Feb 9 19:14:57.346000 audit: BPF prog-id=20 op=LOAD Feb 9 19:14:57.346000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:14:57.346000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:14:57.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:57.432000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:14:57.432000 audit[1406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffca247590 a2=4000 a3=1 items=0 ppid=1 pid=1406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:57.432000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:14:57.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.059174 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:14:57.472319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:14:57.472400 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:14:57.472438 systemd[1]: Started systemd-journald.service. Feb 9 19:14:57.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.352176 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:14:57.078835 systemd[1]: systemd-journald.service: Deactivated successfully. 
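The long hex blobs in the earlier "PROCTITLE proctitle=2F7573..." audit records are the audited process's command line, hex-encoded by the kernel with NUL bytes between arguments (and truncated at a fixed length). Decoding the torcx-generator record copied from above recovers its argv:

    # Decode an audit PROCTITLE record: hex -> bytes, NUL-separated argv.
    hexstr = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61")
    print(bytes.fromhex(hexstr).decode().split("\x00"))
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator', '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']   <- last argument cut by truncation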
Feb 9 19:14:53.360657 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:14:53.360709 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:14:53.360779 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:14:53.360842 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:14:57.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.360912 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:14:57.474933 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:14:53.360944 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:14:57.475218 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:14:53.361350 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:14:57.477615 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 9 19:14:53.361428 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:14:53.361465 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:14:53.362302 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:14:53.362384 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:14:53.362428 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:14:53.362468 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:14:53.362512 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:14:53.362551 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:14:57.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:56.239498 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:56Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:57.480950 systemd[1]: Finished modprobe@loop.service. Feb 9 19:14:56.240051 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:56Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:57.484186 systemd[1]: Finished systemd-modules-load.service. 
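The "store skipped" messages above show torcx-generator probing its store paths in a fixed order and ignoring the ones that do not exist. A sketch of that probe, using the search order printed in the generator's "common configuration parsed" message (a stand-in for the generator's own logic, not its actual code):

    import os

    # Store search order as printed by torcx-generator above.
    store_paths = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.2",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.2",
        "/var/lib/torcx/store",
    ]
    for path in store_paths:
        state = "scanning" if os.path.isdir(path) else "store skipped"
        print(f"{state}: {path}")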
Feb 9 19:14:56.240304 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:56Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:56.240747 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:56Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:56.240874 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:56Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:14:57.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:56.241017 /usr/lib/systemd/system-generators/torcx-generator[1324]: time="2024-02-09T19:14:56Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:14:57.489492 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:14:57.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.494190 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:14:57.497067 systemd[1]: Reached target network-pre.target. Feb 9 19:14:57.501508 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:14:57.505899 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:14:57.512675 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:14:57.516660 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:14:57.520561 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:14:57.523040 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:14:57.525304 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:14:57.527318 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:14:57.532152 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:14:57.536751 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:14:57.539284 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:14:57.569002 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:14:57.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.571641 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 19:14:57.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.573613 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:14:57.576501 systemd-journald[1406]: Time spent on flushing to /var/log/journal/ec2b6c7f868383a576c2aac397401933 is 77.513ms for 1180 entries. Feb 9 19:14:57.576501 systemd-journald[1406]: System Journal (/var/log/journal/ec2b6c7f868383a576c2aac397401933) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:14:57.683220 systemd-journald[1406]: Received client request to flush runtime journal. Feb 9 19:14:57.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.579539 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:14:57.615432 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:14:57.685000 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:14:57.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.704213 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:14:57.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.729458 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:14:57.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.733440 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:14:57.749300 udevadm[1442]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:14:58.413603 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:14:58.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.415000 audit: BPF prog-id=21 op=LOAD Feb 9 19:14:58.415000 audit: BPF prog-id=22 op=LOAD Feb 9 19:14:58.415000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:14:58.415000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:14:58.418247 systemd[1]: Starting systemd-udevd.service... Feb 9 19:14:58.454660 systemd-udevd[1443]: Using default interface naming scheme 'v252'. Feb 9 19:14:58.495624 systemd[1]: Started systemd-udevd.service. Feb 9 19:14:58.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.497000 audit: BPF prog-id=23 op=LOAD Feb 9 19:14:58.500397 systemd[1]: Starting systemd-networkd.service... 
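The journald flush statistics above (77.513 ms for 1180 entries moved from the runtime journal to /var/log/journal) work out to roughly 66 µs per entry; a quick check:

    # Average flush cost per journal entry, from the figures logged above.
    total_ms, entries = 77.513, 1180
    print(f"{total_ms / entries * 1000:.1f} us/entry")  # ~65.7 us/entry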
Feb 9 19:14:58.507000 audit: BPF prog-id=24 op=LOAD Feb 9 19:14:58.507000 audit: BPF prog-id=25 op=LOAD Feb 9 19:14:58.507000 audit: BPF prog-id=26 op=LOAD Feb 9 19:14:58.510842 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:14:58.584374 systemd[1]: Started systemd-userdbd.service. Feb 9 19:14:58.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.590513 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:14:58.593975 (udev-worker)[1450]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:14:58.733064 systemd-networkd[1449]: lo: Link UP Feb 9 19:14:58.733615 systemd-networkd[1449]: lo: Gained carrier Feb 9 19:14:58.734705 systemd-networkd[1449]: Enumeration completed Feb 9 19:14:58.737604 systemd[1]: Started systemd-networkd.service. Feb 9 19:14:58.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.741655 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:14:58.745023 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:14:58.750557 systemd-networkd[1449]: eth0: Link UP Feb 9 19:14:58.750812 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:14:58.751179 systemd-networkd[1449]: eth0: Gained carrier Feb 9 19:14:58.762052 systemd-networkd[1449]: eth0: DHCPv4 address 172.31.23.244/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:14:58.809852 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1475) Feb 9 19:14:58.951306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:14:58.954075 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:14:58.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.958486 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:14:59.002510 lvm[1562]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:59.043351 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:14:59.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.045463 systemd[1]: Reached target cryptsetup.target. Feb 9 19:14:59.049418 systemd[1]: Starting lvm2-activation.service... Feb 9 19:14:59.057809 lvm[1563]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:59.095433 systemd[1]: Finished lvm2-activation.service. Feb 9 19:14:59.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.097392 systemd[1]: Reached target local-fs-pre.target. 
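The DHCPv4 lease logged above (172.31.23.244/20, gateway 172.31.16.1) places eth0 in the 172.31.16.0/20 block, a subnet size typical of an EC2 default VPC. The containment is easy to verify:

    import ipaddress

    # The lease logged above: address 172.31.23.244/20, gateway 172.31.16.1.
    net = ipaddress.ip_interface("172.31.23.244/20").network
    print(net)                                          # 172.31.16.0/20
    print(net.num_addresses)                            # 4096
    print(ipaddress.ip_address("172.31.16.1") in net)   # True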
Feb 9 19:14:59.099151 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:14:59.099199 systemd[1]: Reached target local-fs.target. Feb 9 19:14:59.100854 systemd[1]: Reached target machines.target. Feb 9 19:14:59.104585 systemd[1]: Starting ldconfig.service... Feb 9 19:14:59.107482 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:14:59.107593 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:59.109860 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:14:59.113616 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:14:59.118477 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:14:59.120578 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:59.120711 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:59.124123 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:14:59.136337 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1565 (bootctl) Feb 9 19:14:59.138611 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:14:59.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.168982 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:14:59.182840 systemd-tmpfiles[1568]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:14:59.199972 systemd-tmpfiles[1568]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:14:59.215424 systemd-tmpfiles[1568]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:14:59.232261 systemd-fsck[1574]: fsck.fat 4.2 (2021-01-31) Feb 9 19:14:59.232261 systemd-fsck[1574]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 19:14:59.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.234526 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:14:59.239753 systemd[1]: Mounting boot.mount... Feb 9 19:14:59.264728 systemd[1]: Mounted boot.mount. Feb 9 19:14:59.293507 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:14:59.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.581263 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:14:59.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.585542 systemd[1]: Starting audit-rules.service... 
Feb 9 19:14:59.597000 audit: BPF prog-id=27 op=LOAD Feb 9 19:14:59.604000 audit: BPF prog-id=28 op=LOAD Feb 9 19:14:59.590197 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:14:59.594629 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:14:59.603010 systemd[1]: Starting systemd-resolved.service... Feb 9 19:14:59.608716 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:14:59.613943 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:14:59.632464 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:14:59.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.634586 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:14:59.661000 audit[1596]: SYSTEM_BOOT pid=1596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.669540 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:14:59.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.686530 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:14:59.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.822322 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:14:59.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.824223 systemd[1]: Reached target time-set.target. Feb 9 19:14:59.889105 systemd-resolved[1594]: Positive Trust Anchors: Feb 9 19:14:59.889127 systemd-resolved[1594]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:14:59.889178 systemd-resolved[1594]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:14:59.900000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:14:59.900000 audit[1611]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff0132530 a2=420 a3=0 items=0 ppid=1590 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:59.900000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:14:59.902502 augenrules[1611]: No rules Feb 9 19:14:59.903348 systemd[1]: Finished audit-rules.service. Feb 9 19:14:59.953503 systemd-timesyncd[1595]: Contacted time server 149.248.11.87:123 (0.flatcar.pool.ntp.org). Feb 9 19:14:59.953631 systemd-timesyncd[1595]: Initial clock synchronization to Fri 2024-02-09 19:15:00.233732 UTC. Feb 9 19:14:59.999977 systemd-networkd[1449]: eth0: Gained IPv6LL Feb 9 19:15:00.003874 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:15:00.073205 systemd-resolved[1594]: Defaulting to hostname 'linux'. Feb 9 19:15:00.077855 systemd[1]: Started systemd-resolved.service. Feb 9 19:15:00.079985 systemd[1]: Reached target network.target. Feb 9 19:15:00.081791 systemd[1]: Reached target network-online.target. Feb 9 19:15:00.083748 systemd[1]: Reached target nss-lookup.target. Feb 9 19:15:00.523105 ldconfig[1564]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:15:00.532444 systemd[1]: Finished ldconfig.service. Feb 9 19:15:00.536360 systemd[1]: Starting systemd-update-done.service... Feb 9 19:15:00.551495 systemd[1]: Finished systemd-update-done.service. Feb 9 19:15:00.553755 systemd[1]: Reached target sysinit.target. Feb 9 19:15:00.555977 systemd[1]: Started motdgen.path. Feb 9 19:15:00.557924 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:15:00.560456 systemd[1]: Started logrotate.timer. Feb 9 19:15:00.562204 systemd[1]: Started mdadm.timer. Feb 9 19:15:00.563673 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:15:00.565554 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:15:00.565622 systemd[1]: Reached target paths.target. Feb 9 19:15:00.567288 systemd[1]: Reached target timers.target. Feb 9 19:15:00.569469 systemd[1]: Listening on dbus.socket. Feb 9 19:15:00.574355 systemd[1]: Starting docker.socket... Feb 9 19:15:00.582105 systemd[1]: Listening on sshd.socket. Feb 9 19:15:00.584211 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:15:00.585281 systemd[1]: Listening on docker.socket. 
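The PROCTITLE record from the audit-rules.service run above hex-encodes the triggering command line, with NUL bytes separating argv elements; decoded, it is the auditctl invocation that produced the augenrules "No rules" message:

    # Decode the PROCTITLE hex from the audit record above.
    hexstr = ("2F7362696E2F617564697463746C002D5200"
              "2F6574632F61756469742F61756469742E72756C6573")
    print(bytes.fromhex(hexstr).split(b"\x00"))
    # [b'/sbin/auditctl', b'-R', b'/etc/audit/audit.rules']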
Feb 9 19:15:00.587212 systemd[1]: Reached target sockets.target. Feb 9 19:15:00.588935 systemd[1]: Reached target basic.target. Feb 9 19:15:00.590571 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:15:00.590638 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:15:00.593592 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:15:00.598411 systemd[1]: Starting containerd.service... Feb 9 19:15:00.602227 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:15:00.620147 systemd[1]: Starting dbus.service... Feb 9 19:15:00.634379 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:15:00.659918 systemd[1]: Starting extend-filesystems.service... Feb 9 19:15:00.662144 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:15:00.673691 systemd[1]: Starting motdgen.service... Feb 9 19:15:00.680294 jq[1626]: false Feb 9 19:15:00.684379 systemd[1]: Started nvidia.service. Feb 9 19:15:00.688691 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:15:00.694331 systemd[1]: Starting prepare-critools.service... Feb 9 19:15:00.701055 systemd[1]: Starting prepare-helm.service... Feb 9 19:15:00.711264 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:15:00.715323 systemd[1]: Starting sshd-keygen.service... Feb 9 19:15:00.723869 systemd[1]: Starting systemd-logind.service... Feb 9 19:15:00.725757 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:15:00.725967 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:15:00.727776 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:15:00.817532 jq[1641]: true Feb 9 19:15:00.730431 systemd[1]: Starting update-engine.service... Feb 9 19:15:00.736080 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:15:00.742034 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:15:00.836228 tar[1643]: ./ Feb 9 19:15:00.836228 tar[1643]: ./loopback Feb 9 19:15:00.838731 tar[1644]: crictl Feb 9 19:15:00.746283 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:15:00.753133 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:15:00.865251 tar[1645]: linux-arm64/helm Feb 9 19:15:00.753511 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:15:00.833585 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:15:00.877680 jq[1658]: true Feb 9 19:15:00.833948 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:15:00.971914 dbus-daemon[1625]: [system] SELinux support is enabled Feb 9 19:15:00.978598 systemd[1]: Started dbus.service. Feb 9 19:15:00.983761 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:15:00.983810 systemd[1]: Reached target system-config.target. 
Feb 9 19:15:00.985787 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:15:00.985840 systemd[1]: Reached target user-config.target. Feb 9 19:15:00.990420 extend-filesystems[1627]: Found nvme0n1 Feb 9 19:15:00.994004 extend-filesystems[1627]: Found nvme0n1p1 Feb 9 19:15:00.997800 extend-filesystems[1627]: Found nvme0n1p2 Feb 9 19:15:00.999672 extend-filesystems[1627]: Found nvme0n1p3 Feb 9 19:15:01.005290 extend-filesystems[1627]: Found usr Feb 9 19:15:01.013007 amazon-ssm-agent[1622]: 2024/02/09 19:15:01 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:15:01.014004 extend-filesystems[1627]: Found nvme0n1p4 Feb 9 19:15:01.016680 extend-filesystems[1627]: Found nvme0n1p6 Feb 9 19:15:01.018889 amazon-ssm-agent[1622]: Initializing new seelog logger Feb 9 19:15:01.020493 extend-filesystems[1627]: Found nvme0n1p7 Feb 9 19:15:01.026807 extend-filesystems[1627]: Found nvme0n1p9 Feb 9 19:15:01.031571 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:15:01.033920 amazon-ssm-agent[1622]: New Seelog Logger Creation Complete Feb 9 19:15:01.031955 systemd[1]: Finished motdgen.service. Feb 9 19:15:01.036773 amazon-ssm-agent[1622]: 2024/02/09 19:15:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:15:01.036773 amazon-ssm-agent[1622]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:15:01.037555 amazon-ssm-agent[1622]: 2024/02/09 19:15:01 processing appconfig overrides Feb 9 19:15:01.037918 extend-filesystems[1627]: Checking size of /dev/nvme0n1p9 Feb 9 19:15:01.049102 bash[1688]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:15:01.040500 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:15:01.063703 dbus-daemon[1625]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1449 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:15:01.070154 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:15:01.142118 extend-filesystems[1627]: Resized partition /dev/nvme0n1p9 Feb 9 19:15:01.159032 update_engine[1640]: I0209 19:15:01.158448 1640 main.cc:92] Flatcar Update Engine starting Feb 9 19:15:01.163377 extend-filesystems[1697]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:15:01.163225 systemd[1]: Started update-engine.service. Feb 9 19:15:01.168036 systemd[1]: Started locksmithd.service. Feb 9 19:15:01.173684 update_engine[1640]: I0209 19:15:01.173418 1640 update_check_scheduler.cc:74] Next update check in 9m19s Feb 9 19:15:01.204855 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:15:01.227332 env[1649]: time="2024-02-09T19:15:01.227253530Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:15:01.261867 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:15:01.292942 extend-filesystems[1697]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:15:01.292942 extend-filesystems[1697]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:15:01.292942 extend-filesystems[1697]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:15:01.341601 extend-filesystems[1627]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:15:01.300189 systemd[1]: extend-filesystems.service: Deactivated successfully. 
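The resize2fs output above grows the root filesystem in place from 553472 to 1489915 blocks of 4 KiB, i.e. from about 2.1 GiB to about 5.7 GiB, done online because /dev/nvme0n1p9 is already mounted on /:

    # Sizes implied by the resize2fs figures logged above (4 KiB blocks).
    BLOCK = 4096
    for blocks in (553472, 1489915):
        print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # 553472 blocks = 2.11 GiB
    # 1489915 blocks = 5.68 GiB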
Feb 9 19:15:01.300584 systemd[1]: Finished extend-filesystems.service. Feb 9 19:15:01.306853 systemd-logind[1638]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 19:15:01.312159 systemd-logind[1638]: New seat seat0. Feb 9 19:15:01.348612 systemd[1]: Started systemd-logind.service. Feb 9 19:15:01.355493 tar[1643]: ./bandwidth Feb 9 19:15:01.383635 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:15:01.512216 env[1649]: time="2024-02-09T19:15:01.512150904Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:15:01.521092 env[1649]: time="2024-02-09T19:15:01.521036242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:01.527638 env[1649]: time="2024-02-09T19:15:01.527555131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:15:01.527916 env[1649]: time="2024-02-09T19:15:01.527880026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:01.528491 env[1649]: time="2024-02-09T19:15:01.528439271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:15:01.536218 env[1649]: time="2024-02-09T19:15:01.536158514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:01.537265 env[1649]: time="2024-02-09T19:15:01.537194336Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:15:01.538283 env[1649]: time="2024-02-09T19:15:01.538222193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:01.539133 env[1649]: time="2024-02-09T19:15:01.539071926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:01.546921 env[1649]: time="2024-02-09T19:15:01.546815126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:01.549861 env[1649]: time="2024-02-09T19:15:01.549758209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:15:01.553280 env[1649]: time="2024-02-09T19:15:01.553012100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:15:01.554144 env[1649]: time="2024-02-09T19:15:01.554091148Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:15:01.555392 env[1649]: time="2024-02-09T19:15:01.555334110Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:15:01.560732 dbus-daemon[1625]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:15:01.560979 systemd[1]: Started systemd-hostnamed.service. 
Feb 9 19:15:01.572906 dbus-daemon[1625]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1693 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:15:01.577605 systemd[1]: Starting polkit.service... Feb 9 19:15:01.591960 env[1649]: time="2024-02-09T19:15:01.591780267Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:15:01.592188 env[1649]: time="2024-02-09T19:15:01.592154462Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:15:01.592493 env[1649]: time="2024-02-09T19:15:01.592440175Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:15:01.592956 env[1649]: time="2024-02-09T19:15:01.592920104Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.593252 env[1649]: time="2024-02-09T19:15:01.593206721Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.593506 env[1649]: time="2024-02-09T19:15:01.593475280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.593776 env[1649]: time="2024-02-09T19:15:01.593735466Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.594652 env[1649]: time="2024-02-09T19:15:01.594607041Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.595213 env[1649]: time="2024-02-09T19:15:01.595176762Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.595358 env[1649]: time="2024-02-09T19:15:01.595326427Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.595494 env[1649]: time="2024-02-09T19:15:01.595464454Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.595622 env[1649]: time="2024-02-09T19:15:01.595591635Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:15:01.596014 env[1649]: time="2024-02-09T19:15:01.595980213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:15:01.596300 env[1649]: time="2024-02-09T19:15:01.596269971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:15:01.597072 env[1649]: time="2024-02-09T19:15:01.597036974Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:15:01.597250 env[1649]: time="2024-02-09T19:15:01.597220034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.597376 env[1649]: time="2024-02-09T19:15:01.597346101Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:15:01.597588 env[1649]: time="2024-02-09T19:15:01.597558039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 19:15:01.597894 env[1649]: time="2024-02-09T19:15:01.597859386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.600756 env[1649]: time="2024-02-09T19:15:01.600702944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.601140 env[1649]: time="2024-02-09T19:15:01.601106760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.601296 env[1649]: time="2024-02-09T19:15:01.601264538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.601559 env[1649]: time="2024-02-09T19:15:01.601523339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.602474 tar[1643]: ./ptp Feb 9 19:15:01.602851 env[1649]: time="2024-02-09T19:15:01.601702416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.603024 env[1649]: time="2024-02-09T19:15:01.602986959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.603792 env[1649]: time="2024-02-09T19:15:01.603733641Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:15:01.609197 env[1649]: time="2024-02-09T19:15:01.604218109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.619015 env[1649]: time="2024-02-09T19:15:01.618683974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.619239 env[1649]: time="2024-02-09T19:15:01.619205224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:15:01.619520 env[1649]: time="2024-02-09T19:15:01.619488378Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:15:01.619694 env[1649]: time="2024-02-09T19:15:01.619659860Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:15:01.619962 env[1649]: time="2024-02-09T19:15:01.619931833Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:15:01.620123 env[1649]: time="2024-02-09T19:15:01.620091418Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:15:01.620369 env[1649]: time="2024-02-09T19:15:01.620314821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:15:01.621337 env[1649]: time="2024-02-09T19:15:01.621057447Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:15:01.622327 env[1649]: time="2024-02-09T19:15:01.621759691Z" level=info msg="Connect containerd service" Feb 9 19:15:01.622327 env[1649]: time="2024-02-09T19:15:01.621873749Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:15:01.624809 env[1649]: time="2024-02-09T19:15:01.624739618Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:15:01.630093 env[1649]: time="2024-02-09T19:15:01.630007766Z" level=info msg="Start subscribing containerd event" Feb 9 19:15:01.631302 env[1649]: time="2024-02-09T19:15:01.631244284Z" level=info msg="Start recovering state" Feb 9 19:15:01.632217 env[1649]: time="2024-02-09T19:15:01.632169412Z" level=info msg="Start event monitor" Feb 9 19:15:01.632381 env[1649]: time="2024-02-09T19:15:01.632352546Z" level=info msg="Start snapshots syncer" Feb 9 19:15:01.632658 env[1649]: time="2024-02-09T19:15:01.632627771Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:15:01.632993 polkitd[1722]: Started polkitd version 121 Feb 9 19:15:01.633565 env[1649]: time="2024-02-09T19:15:01.633523315Z" level=info msg="Start streaming server" Feb 9 19:15:01.636254 env[1649]: 
time="2024-02-09T19:15:01.636177802Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:15:01.639258 env[1649]: time="2024-02-09T19:15:01.639209020Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:15:01.660682 polkitd[1722]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:15:01.660810 polkitd[1722]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:15:01.664529 polkitd[1722]: Finished loading, compiling and executing 2 rules Feb 9 19:15:01.665392 dbus-daemon[1625]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:15:01.665648 systemd[1]: Started polkit.service. Feb 9 19:15:01.667613 polkitd[1722]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:15:01.691784 systemd[1]: Started containerd.service. Feb 9 19:15:01.695014 env[1649]: time="2024-02-09T19:15:01.694943037Z" level=info msg="containerd successfully booted in 0.472500s" Feb 9 19:15:01.718561 systemd-hostnamed[1693]: Hostname set to (transient) Feb 9 19:15:01.718745 systemd-resolved[1594]: System hostname changed to 'ip-172-31-23-244'. Feb 9 19:15:01.859692 tar[1643]: ./vlan Feb 9 19:15:01.948085 coreos-metadata[1624]: Feb 09 19:15:01.944 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:15:01.953323 coreos-metadata[1624]: Feb 09 19:15:01.953 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:15:01.960212 coreos-metadata[1624]: Feb 09 19:15:01.959 INFO Fetch successful Feb 9 19:15:01.960549 coreos-metadata[1624]: Feb 09 19:15:01.960 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:15:01.961987 coreos-metadata[1624]: Feb 09 19:15:01.961 INFO Fetch successful Feb 9 19:15:01.964736 unknown[1624]: wrote ssh authorized keys file for user: core Feb 9 19:15:02.005227 update-ssh-keys[1773]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:15:02.006944 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
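The transient hostname ip-172-31-23-244 set above follows the usual EC2 private-DNS convention: "ip-" plus the instance's private IPv4 address (the one acquired via DHCP earlier) with dots replaced by dashes:

    # EC2-style private hostname from the DHCP address logged earlier.
    addr = "172.31.23.244"
    print("ip-" + addr.replace(".", "-"))  # ip-172-31-23-244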
Feb 9 19:15:02.072089 tar[1643]: ./host-device Feb 9 19:15:02.158398 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Create new startup processor Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing bookkeeping folders Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO removing the completed state files Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing healthcheck folders for long running plugins Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing locations for inventory plugin Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing default location for custom inventory Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing default location for file inventory Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Initializing default location for role inventory Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Init the cloudwatchlogs publisher Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:15:02.160915 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:15:02.161980 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:15:02.161980 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:15:02.161980 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO OS: linux, Arch: arm64 Feb 9 19:15:02.163789 amazon-ssm-agent[1622]: datastore file /var/lib/amazon/ssm/i-02d2f194d46317951/longrunningplugins/datastore/store doesn't 
exist - no long running plugins to execute Feb 9 19:15:02.170354 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 19:15:02.282442 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:15:02.297393 tar[1643]: ./tuning Feb 9 19:15:02.377587 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:15:02.472203 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 19:15:02.494088 tar[1643]: ./vrf Feb 9 19:15:02.566979 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 19:15:02.637042 tar[1643]: ./sbr Feb 9 19:15:02.661987 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 19:15:02.757130 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-02d2f194d46317951, requestId: 1a198f59-239b-4706-bba1-7424494644ca Feb 9 19:15:02.771249 tar[1643]: ./tap Feb 9 19:15:02.852434 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [OfflineService] Starting document processing engine... Feb 9 19:15:02.904841 tar[1643]: ./dhcp Feb 9 19:15:02.948035 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [OfflineService] [EngineProcessor] Starting Feb 9 19:15:03.043827 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 19:15:03.128242 tar[1645]: linux-arm64/LICENSE Feb 9 19:15:03.128916 tar[1645]: linux-arm64/README.md Feb 9 19:15:03.140205 systemd[1]: Finished prepare-helm.service. Feb 9 19:15:03.143412 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [OfflineService] Starting message polling Feb 9 19:15:03.233381 systemd[1]: Finished prepare-critools.service. Feb 9 19:15:03.240355 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [OfflineService] Starting send replies to MDS Feb 9 19:15:03.240921 tar[1643]: ./static Feb 9 19:15:03.288068 tar[1643]: ./firewall Feb 9 19:15:03.336801 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 19:15:03.359257 tar[1643]: ./macvlan Feb 9 19:15:03.424631 tar[1643]: ./dummy Feb 9 19:15:03.433209 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 19:15:03.489352 tar[1643]: ./bridge Feb 9 19:15:03.529991 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] listening reply. Feb 9 19:15:03.559241 tar[1643]: ./ipvlan Feb 9 19:15:03.622324 tar[1643]: ./portmap Feb 9 19:15:03.626890 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:15:03.683894 tar[1643]: ./host-local Feb 9 19:15:03.724037 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:15:03.757655 systemd[1]: Finished prepare-cni-plugins.service. 
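With prepare-cni-plugins.service finished above, the CNI directories named in containerd's earlier CRI config dump (/etc/cni/net.d and /opt/cni/bin) can be populated; until then, containerd's "no network config found in /etc/cni/net.d" error is expected. A sketch of the check the CRI plugin is effectively making, assuming the conf directory from the logged config:

    import glob, os

    # Config directory from the CRI config dump logged earlier.
    conf_dir = "/etc/cni/net.d"
    confs = (glob.glob(os.path.join(conf_dir, "*.conf"))
             + glob.glob(os.path.join(conf_dir, "*.conflist")))
    if not confs:
        print(f"no network config found in {conf_dir}")  # matches the log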
Feb 9 19:15:03.821503 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [instanceID=i-02d2f194d46317951] Starting association polling Feb 9 19:15:03.841097 locksmithd[1702]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:15:03.920910 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:15:04.020656 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 19:15:04.120383 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 19:15:04.220409 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 19:15:04.320527 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 19:15:04.420804 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 19:15:04.521421 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [StartupProcessor] Executing startup processor tasks Feb 9 19:15:04.609974 sshd_keygen[1665]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:15:04.620256 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:15:04.647072 systemd[1]: Finished sshd-keygen.service. Feb 9 19:15:04.651579 systemd[1]: Starting issuegen.service... Feb 9 19:15:04.661794 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:15:04.662178 systemd[1]: Finished issuegen.service. Feb 9 19:15:04.666700 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:15:04.680518 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:15:04.685455 systemd[1]: Started getty@tty1.service. Feb 9 19:15:04.689802 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:15:04.692142 systemd[1]: Reached target getty.target. Feb 9 19:15:04.693968 systemd[1]: Reached target multi-user.target. Feb 9 19:15:04.698470 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:15:04.713716 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:15:04.714144 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:15:04.716323 systemd[1]: Startup finished in 1.126s (kernel) + 13.296s (initrd) + 11.828s (userspace) = 26.251s. 
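The "Startup finished" line above sums the three boot phases; the displayed figures add to 26.250 s against the printed 26.251 s total, the 1 ms gap presumably coming from per-phase rounding of microsecond-precision values:

    # Phases from the "Startup finished" line above.
    kernel, initrd, userspace = 1.126, 13.296, 11.828
    print(f"{kernel + initrd + userspace:.3f}s")  # 26.250s vs. logged 26.251s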
Feb 9 19:15:04.719388 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 9 19:15:04.818896 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 9 19:15:04.920292 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 9 19:15:05.021793 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-02d2f194d46317951?role=subscribe&stream=input
Feb 9 19:15:05.123453 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-02d2f194d46317951?role=subscribe&stream=input
Feb 9 19:15:05.224101 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 9 19:15:05.326102 amazon-ssm-agent[1622]: 2024-02-09 19:15:02 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 9 19:15:10.554598 systemd[1]: Created slice system-sshd.slice.
Feb 9 19:15:10.556925 systemd[1]: Started sshd@0-172.31.23.244:22-147.75.109.163:42194.service.
Feb 9 19:15:10.739477 sshd[1841]: Accepted publickey for core from 147.75.109.163 port 42194 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:10.743377 sshd[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:10.760051 systemd[1]: Created slice user-500.slice.
Feb 9 19:15:10.762480 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:15:10.770925 systemd-logind[1638]: New session 1 of user core.
Feb 9 19:15:10.781915 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:15:10.786458 systemd[1]: Starting user@500.service...
Feb 9 19:15:10.792376 (systemd)[1844]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:10.965512 systemd[1844]: Queued start job for default target default.target.
Feb 9 19:15:10.966572 systemd[1844]: Reached target paths.target.
Feb 9 19:15:10.966625 systemd[1844]: Reached target sockets.target.
Feb 9 19:15:10.966657 systemd[1844]: Reached target timers.target.
Feb 9 19:15:10.966686 systemd[1844]: Reached target basic.target.
Feb 9 19:15:10.966779 systemd[1844]: Reached target default.target.
Feb 9 19:15:10.966875 systemd[1844]: Startup finished in 163ms.
Feb 9 19:15:10.967922 systemd[1]: Started user@500.service.
Feb 9 19:15:10.971436 systemd[1]: Started session-1.scope.
Feb 9 19:15:11.121455 systemd[1]: Started sshd@1-172.31.23.244:22-147.75.109.163:42198.service.
Feb 9 19:15:11.303459 sshd[1853]: Accepted publickey for core from 147.75.109.163 port 42198 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:11.305839 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:11.314756 systemd[1]: Started session-2.scope.
Feb 9 19:15:11.315508 systemd-logind[1638]: New session 2 of user core.
Feb 9 19:15:11.447349 sshd[1853]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:11.452043 systemd[1]: sshd@1-172.31.23.244:22-147.75.109.163:42198.service: Deactivated successfully.
Feb 9 19:15:11.453318 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 19:15:11.454418 systemd-logind[1638]: Session 2 logged out. Waiting for processes to exit.
Feb 9 19:15:11.455757 systemd-logind[1638]: Removed session 2.
Feb 9 19:15:11.474464 systemd[1]: Started sshd@2-172.31.23.244:22-147.75.109.163:42206.service.
Feb 9 19:15:11.644942 sshd[1859]: Accepted publickey for core from 147.75.109.163 port 42206 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:11.647901 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:11.655907 systemd-logind[1638]: New session 3 of user core.
Feb 9 19:15:11.656732 systemd[1]: Started session-3.scope.
Feb 9 19:15:11.778936 sshd[1859]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:11.785381 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 19:15:11.786521 systemd[1]: sshd@2-172.31.23.244:22-147.75.109.163:42206.service: Deactivated successfully.
Feb 9 19:15:11.789404 systemd-logind[1638]: Session 3 logged out. Waiting for processes to exit.
Feb 9 19:15:11.791239 systemd-logind[1638]: Removed session 3.
Feb 9 19:15:11.806250 systemd[1]: Started sshd@3-172.31.23.244:22-147.75.109.163:42222.service.
Feb 9 19:15:11.980289 sshd[1865]: Accepted publickey for core from 147.75.109.163 port 42222 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:11.982141 sshd[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:11.990768 systemd[1]: Started session-4.scope.
Feb 9 19:15:11.991738 systemd-logind[1638]: New session 4 of user core.
Feb 9 19:15:12.119638 sshd[1865]: pam_unix(sshd:session): session closed for user core
Feb 9 19:15:12.124756 systemd-logind[1638]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:15:12.125366 systemd[1]: sshd@3-172.31.23.244:22-147.75.109.163:42222.service: Deactivated successfully.
Feb 9 19:15:12.126566 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 19:15:12.128098 systemd-logind[1638]: Removed session 4.
Feb 9 19:15:12.147989 systemd[1]: Started sshd@4-172.31.23.244:22-147.75.109.163:42228.service.
Feb 9 19:15:12.322764 sshd[1871]: Accepted publickey for core from 147.75.109.163 port 42228 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78
Feb 9 19:15:12.325695 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:15:12.333469 systemd-logind[1638]: New session 5 of user core.
Feb 9 19:15:12.334334 systemd[1]: Started session-5.scope.
Feb 9 19:15:12.451937 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 19:15:12.452938 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:15:13.133130 systemd[1]: Starting docker.service...
Feb 9 19:15:13.209557 env[1889]: time="2024-02-09T19:15:13.209459820Z" level=info msg="Starting up"
Feb 9 19:15:13.213820 env[1889]: time="2024-02-09T19:15:13.213746497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:15:13.214069 env[1889]: time="2024-02-09T19:15:13.214038293Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:15:13.214213 env[1889]: time="2024-02-09T19:15:13.214178703Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:15:13.214322 env[1889]: time="2024-02-09T19:15:13.214294047Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:15:13.221666 env[1889]: time="2024-02-09T19:15:13.221620301Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:15:13.221887 env[1889]: time="2024-02-09T19:15:13.221843322Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:15:13.221957 env[1889]: time="2024-02-09T19:15:13.221897016Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:15:13.221957 env[1889]: time="2024-02-09T19:15:13.221923639Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:15:13.231293 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1064424823-merged.mount: Deactivated successfully.
Feb 9 19:15:13.276104 env[1889]: time="2024-02-09T19:15:13.276054665Z" level=info msg="Loading containers: start."
Feb 9 19:15:13.429998 kernel: Initializing XFRM netlink socket
Feb 9 19:15:13.470173 env[1889]: time="2024-02-09T19:15:13.470126847Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 19:15:13.473051 (udev-worker)[1899]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:15:13.560331 systemd-networkd[1449]: docker0: Link UP
Feb 9 19:15:13.580373 env[1889]: time="2024-02-09T19:15:13.580305992Z" level=info msg="Loading containers: done."
Feb 9 19:15:13.600231 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2098365280-merged.mount: Deactivated successfully.
Feb 9 19:15:13.613562 env[1889]: time="2024-02-09T19:15:13.613486851Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 19:15:13.614350 env[1889]: time="2024-02-09T19:15:13.614314665Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 19:15:13.614712 env[1889]: time="2024-02-09T19:15:13.614685546Z" level=info msg="Daemon has completed initialization"
Feb 9 19:15:13.638742 systemd[1]: Started docker.service.
Feb 9 19:15:13.648289 env[1889]: time="2024-02-09T19:15:13.648223087Z" level=info msg="API listen on /run/docker.sock"
Feb 9 19:15:13.679756 systemd[1]: Reloading.
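[Note: the daemon's own hint above ("Daemon option --bip can be used to set a preferred IP address") is normally applied through /etc/docker/daemon.json rather than on the command line. A minimal sketch with a hypothetical address range:]

    {
      "bip": "10.200.0.1/24"
    }

[With that setting, docker0 itself takes 10.200.0.1 and container addresses come from 10.200.0.0/24 instead of the default 172.17.0.0/16 logged above.]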
Feb 9 19:15:13.785784 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2024-02-09T19:15:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:15:13.785875 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2024-02-09T19:15:13Z" level=info msg="torcx already run"
Feb 9 19:15:13.977196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:15:13.977459 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:15:14.015431 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:15:14.216367 systemd[1]: Started kubelet.service.
Feb 9 19:15:14.348709 kubelet[2079]: E0209 19:15:14.348602 2079 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 19:15:14.353088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:15:14.353399 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:15:14.785258 env[1649]: time="2024-02-09T19:15:14.785178385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb 9 19:15:15.415751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079479368.mount: Deactivated successfully.
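[Note: kubelet[2079] exits because /var/lib/kubelet/config.yaml does not exist yet; on a node bootstrapped kubeadm-style that file is written during init/join, so these failures are expected until bootstrapping completes, with systemd restarting the unit in the meantime. A minimal sketch of the kind of KubeletConfiguration that satisfies this check follows; the values are illustrative, though the cgroup driver and static-pod path match what kubelet[2286] later reports in this log.]

    # Hypothetical minimal /var/lib/kubelet/config.yaml; a real
    # kubeadm-generated file carries many more fields.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches "CgroupDriver":"systemd" below
    staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
    authentication:
      anonymous:
        enabled: false
    authorization:
      mode: Webhook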
Feb 9 19:15:17.756463 env[1649]: time="2024-02-09T19:15:17.756379454Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:17.760249 env[1649]: time="2024-02-09T19:15:17.760182210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:17.764178 env[1649]: time="2024-02-09T19:15:17.764110845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:17.768362 env[1649]: time="2024-02-09T19:15:17.768300677Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:17.770438 env[1649]: time="2024-02-09T19:15:17.770376246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb\""
Feb 9 19:15:17.787263 env[1649]: time="2024-02-09T19:15:17.787200815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb 9 19:15:18.193610 amazon-ssm-agent[1622]: 2024-02-09 19:15:18 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 9 19:15:20.567718 env[1649]: time="2024-02-09T19:15:20.567657510Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:20.570601 env[1649]: time="2024-02-09T19:15:20.570552423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:20.573891 env[1649]: time="2024-02-09T19:15:20.573828022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:20.576854 env[1649]: time="2024-02-09T19:15:20.576774973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:20.578645 env[1649]: time="2024-02-09T19:15:20.578593077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f\""
Feb 9 19:15:20.595879 env[1649]: time="2024-02-09T19:15:20.595828517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 9 19:15:22.268173 env[1649]: time="2024-02-09T19:15:22.268094441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:22.270973 env[1649]: time="2024-02-09T19:15:22.270891173Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:22.274665 env[1649]: time="2024-02-09T19:15:22.274617950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:22.278901 env[1649]: time="2024-02-09T19:15:22.278838157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:22.280907 env[1649]: time="2024-02-09T19:15:22.280840265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663\""
Feb 9 19:15:22.297298 env[1649]: time="2024-02-09T19:15:22.297249580Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 9 19:15:23.593959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032600763.mount: Deactivated successfully.
Feb 9 19:15:24.377066 env[1649]: time="2024-02-09T19:15:24.376983896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.381393 env[1649]: time="2024-02-09T19:15:24.380704763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.383430 env[1649]: time="2024-02-09T19:15:24.383356941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.386250 env[1649]: time="2024-02-09T19:15:24.385698418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.386658 env[1649]: time="2024-02-09T19:15:24.386612688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\""
Feb 9 19:15:24.404508 env[1649]: time="2024-02-09T19:15:24.404452860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 19:15:24.457756 amazon-ssm-agent[1622]: 2024-02-09 19:15:24 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 9 19:15:24.528204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:15:24.528548 systemd[1]: Stopped kubelet.service.
Feb 9 19:15:24.531215 systemd[1]: Started kubelet.service.
Feb 9 19:15:24.622882 kubelet[2118]: E0209 19:15:24.622757 2118 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 9 19:15:24.631168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:15:24.631485 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:15:24.889732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906119179.mount: Deactivated successfully.
Feb 9 19:15:24.900413 env[1649]: time="2024-02-09T19:15:24.900358392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.903778 env[1649]: time="2024-02-09T19:15:24.903732387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.906430 env[1649]: time="2024-02-09T19:15:24.906385394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.909974 env[1649]: time="2024-02-09T19:15:24.909899748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:24.912063 env[1649]: time="2024-02-09T19:15:24.911934862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 9 19:15:24.930768 env[1649]: time="2024-02-09T19:15:24.930666279Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb 9 19:15:25.477464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065615410.mount: Deactivated successfully.
Feb 9 19:15:30.152595 env[1649]: time="2024-02-09T19:15:30.152534323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:30.155711 env[1649]: time="2024-02-09T19:15:30.155662895Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:30.159298 env[1649]: time="2024-02-09T19:15:30.159234260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:30.162971 env[1649]: time="2024-02-09T19:15:30.162923421Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:30.164681 env[1649]: time="2024-02-09T19:15:30.164620103Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\""
Feb 9 19:15:30.180831 env[1649]: time="2024-02-09T19:15:30.180741261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 9 19:15:30.647070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488096537.mount: Deactivated successfully.
Feb 9 19:15:31.571119 env[1649]: time="2024-02-09T19:15:31.571062544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:31.574409 env[1649]: time="2024-02-09T19:15:31.574362790Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:31.577105 env[1649]: time="2024-02-09T19:15:31.577044042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:31.579924 env[1649]: time="2024-02-09T19:15:31.579864104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:31.581127 env[1649]: time="2024-02-09T19:15:31.581084441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Feb 9 19:15:31.721954 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 9 19:15:34.778193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 9 19:15:34.778550 systemd[1]: Stopped kubelet.service.
Feb 9 19:15:34.781279 systemd[1]: Started kubelet.service.
Feb 9 19:15:34.874738 kubelet[2200]: E0209 19:15:34.874675 2200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:15:34.878458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:34.878816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:39.088218 systemd[1]: Stopped kubelet.service. Feb 9 19:15:39.116013 systemd[1]: Reloading. Feb 9 19:15:39.237723 /usr/lib/systemd/system-generators/torcx-generator[2229]: time="2024-02-09T19:15:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:39.238350 /usr/lib/systemd/system-generators/torcx-generator[2229]: time="2024-02-09T19:15:39Z" level=info msg="torcx already run" Feb 9 19:15:39.392751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:39.392832 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:39.431004 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:39.632269 systemd[1]: Started kubelet.service. Feb 9 19:15:39.717949 kubelet[2286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:39.717949 kubelet[2286]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:39.717949 kubelet[2286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
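[Note: all three deprecation warnings point at the same migration: move the flag values into the file passed via --config. Assuming the v1beta1 KubeletConfiguration schema of this kubelet generation, the equivalents would be roughly the fragment below; the containerd socket path is a common default, not read from this log, while the volume plugin directory matches the Flexvolume path kubelet reports a few lines later. Per the second warning, --pod-infra-container-image has no config replacement: the image garbage collector learns the sandbox image from the CRI instead.]

    # Hypothetical KubeletConfiguration fragment replacing the deprecated flags
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/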
Feb 9 19:15:39.717949 kubelet[2286]: I0209 19:15:39.717740 2286 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:15:40.997749 kubelet[2286]: I0209 19:15:40.997710 2286 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 19:15:40.998389 kubelet[2286]: I0209 19:15:40.998352 2286 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:15:40.998861 kubelet[2286]: I0209 19:15:40.998837 2286 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 19:15:41.006491 kubelet[2286]: E0209 19:15:41.006441 2286 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.244:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.006665 kubelet[2286]: I0209 19:15:41.006511 2286 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:15:41.013959 kubelet[2286]: W0209 19:15:41.013916 2286 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 19:15:41.015073 kubelet[2286]: I0209 19:15:41.015042 2286 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:15:41.015549 kubelet[2286]: I0209 19:15:41.015521 2286 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:15:41.015844 kubelet[2286]: I0209 19:15:41.015775 2286 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 19:15:41.016007 kubelet[2286]: I0209 19:15:41.015859 2286 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 19:15:41.016007 kubelet[2286]: I0209 19:15:41.015883 2286 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 19:15:41.016169 kubelet[2286]: I0209 19:15:41.016048 2286 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:15:41.016255 kubelet[2286]: I0209 19:15:41.016227 2286 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 19:15:41.016255 kubelet[2286]: I0209 19:15:41.016254 2286 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:15:41.016375 kubelet[2286]: I0209 19:15:41.016289 2286 kubelet.go:309] "Adding apiserver pod source"
Feb 9 19:15:41.016375 kubelet[2286]: I0209 19:15:41.016318 2286 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:15:41.018174 kubelet[2286]: I0209 19:15:41.018121 2286 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:15:41.018651 kubelet[2286]: W0209 19:15:41.018609 2286 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:15:41.019619 kubelet[2286]: I0209 19:15:41.019573 2286 server.go:1232] "Started kubelet"
Feb 9 19:15:41.019876 kubelet[2286]: W0209 19:15:41.019772 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.23.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-244&limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.019954 kubelet[2286]: E0209 19:15:41.019900 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-244&limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.020062 kubelet[2286]: W0209 19:15:41.020010 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.23.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.020138 kubelet[2286]: E0209 19:15:41.020071 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.026631 kubelet[2286]: I0209 19:15:41.026584 2286 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:15:41.029708 kubelet[2286]: I0209 19:15:41.029662 2286 server.go:462] "Adding debug handlers to kubelet server"
Feb 9 19:15:41.033008 kubelet[2286]: I0209 19:15:41.032965 2286 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 19:15:41.033599 kubelet[2286]: I0209 19:15:41.033570 2286 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 9 19:15:41.034968 kubelet[2286]: E0209 19:15:41.034813 2286 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-23-244.17b247c9db448d5d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-23-244", UID:"ip-172-31-23-244", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-23-244"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 19540829, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 19540829, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-23-244"}': 'Post "https://172.31.23.244:6443/api/v1/namespaces/default/events": dial tcp 172.31.23.244:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:15:41.035985 kubelet[2286]: E0209 19:15:41.035959 2286 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:15:41.036140 kubelet[2286]: E0209 19:15:41.036118 2286 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:15:41.037223 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:15:41.037926 kubelet[2286]: I0209 19:15:41.037880 2286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:15:41.041893 kubelet[2286]: I0209 19:15:41.041850 2286 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 9 19:15:41.042314 kubelet[2286]: I0209 19:15:41.042288 2286 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:15:41.042562 kubelet[2286]: I0209 19:15:41.042541 2286 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 9 19:15:41.043255 kubelet[2286]: W0209 19:15:41.043200 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.23.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.043452 kubelet[2286]: E0209 19:15:41.043431 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.044652 kubelet[2286]: E0209 19:15:41.044620 2286 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-244?timeout=10s\": dial tcp 172.31.23.244:6443: connect: connection refused" interval="200ms"
Feb 9 19:15:41.097588 kubelet[2286]: I0209 19:15:41.097553 2286 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:15:41.097841 kubelet[2286]: I0209 19:15:41.097783 2286 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:15:41.097984 kubelet[2286]: I0209 19:15:41.097963 2286 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:15:41.100574 kubelet[2286]: I0209 19:15:41.100538 2286 policy_none.go:49] "None policy: Start"
Feb 9 19:15:41.102404 kubelet[2286]: I0209 19:15:41.102370 2286 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:15:41.102576 kubelet[2286]: I0209 19:15:41.102555 2286 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:15:41.115077 systemd[1]: Created slice kubepods.slice.
Feb 9 19:15:41.123835 kubelet[2286]: I0209 19:15:41.123784 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 9 19:15:41.125616 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 19:15:41.129141 kubelet[2286]: I0209 19:15:41.129109 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 9 19:15:41.129616 kubelet[2286]: I0209 19:15:41.129593 2286 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 9 19:15:41.129758 kubelet[2286]: I0209 19:15:41.129737 2286 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 9 19:15:41.131933 kubelet[2286]: E0209 19:15:41.130865 2286 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 19:15:41.133686 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 19:15:41.140132 kubelet[2286]: W0209 19:15:41.140088 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.23.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.141053 kubelet[2286]: E0209 19:15:41.141016 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:41.145049 kubelet[2286]: I0209 19:15:41.145000 2286 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:15:41.149281 kubelet[2286]: I0209 19:15:41.146940 2286 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-244"
Feb 9 19:15:41.151373 kubelet[2286]: I0209 19:15:41.150691 2286 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:15:41.151957 kubelet[2286]: E0209 19:15:41.151929 2286 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-244\" not found"
Feb 9 19:15:41.152556 kubelet[2286]: E0209 19:15:41.152209 2286 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.23.244:6443/api/v1/nodes\": dial tcp 172.31.23.244:6443: connect: connection refused" node="ip-172-31-23-244"
Feb 9 19:15:41.235049 kubelet[2286]: I0209 19:15:41.235013 2286 topology_manager.go:215] "Topology Admit Handler" podUID="f720e598f84c12b7b9906d62e6647651" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-244"
Feb 9 19:15:41.237190 kubelet[2286]: I0209 19:15:41.237157 2286 topology_manager.go:215] "Topology Admit Handler" podUID="d709d948f3e69a1d06debfd45e53bdad" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-244"
Feb 9 19:15:41.240649 kubelet[2286]: I0209 19:15:41.240618 2286 topology_manager.go:215] "Topology Admit Handler" podUID="f77ffd354c80c1a060eb8232005178f7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-244"
Feb 9 19:15:41.247065 kubelet[2286]: E0209 19:15:41.246617 2286 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-244?timeout=10s\": dial tcp 172.31.23.244:6443: connect: connection refused" interval="400ms"
Feb 9 19:15:41.249399 systemd[1]: Created slice kubepods-burstable-podf720e598f84c12b7b9906d62e6647651.slice.
Feb 9 19:15:41.268395 systemd[1]: Created slice kubepods-burstable-podd709d948f3e69a1d06debfd45e53bdad.slice.
Feb 9 19:15:41.282071 systemd[1]: Created slice kubepods-burstable-podf77ffd354c80c1a060eb8232005178f7.slice.
Feb 9 19:15:41.343146 kubelet[2286]: I0209 19:15:41.343093 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244"
Feb 9 19:15:41.343298 kubelet[2286]: I0209 19:15:41.343186 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244"
Feb 9 19:15:41.343298 kubelet[2286]: I0209 19:15:41.343262 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244"
Feb 9 19:15:41.343460 kubelet[2286]: I0209 19:15:41.343339 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244"
Feb 9 19:15:41.343460 kubelet[2286]: I0209 19:15:41.343410 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d709d948f3e69a1d06debfd45e53bdad-ca-certs\") pod \"kube-apiserver-ip-172-31-23-244\" (UID: \"d709d948f3e69a1d06debfd45e53bdad\") " pod="kube-system/kube-apiserver-ip-172-31-23-244"
Feb 9 19:15:41.343607 kubelet[2286]: I0209 19:15:41.343513 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d709d948f3e69a1d06debfd45e53bdad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-244\" (UID: \"d709d948f3e69a1d06debfd45e53bdad\") " pod="kube-system/kube-apiserver-ip-172-31-23-244"
Feb 9 19:15:41.343607 kubelet[2286]: I0209 19:15:41.343592 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244"
Feb 9 19:15:41.343738 kubelet[2286]: I0209 19:15:41.343639 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f720e598f84c12b7b9906d62e6647651-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-244\" (UID: \"f720e598f84c12b7b9906d62e6647651\") " pod="kube-system/kube-scheduler-ip-172-31-23-244"
Feb 9 19:15:41.343738 kubelet[2286]: I0209 19:15:41.343712 2286 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d709d948f3e69a1d06debfd45e53bdad-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-244\" (UID: \"d709d948f3e69a1d06debfd45e53bdad\") " pod="kube-system/kube-apiserver-ip-172-31-23-244"
Feb 9 19:15:41.355250 kubelet[2286]: I0209 19:15:41.355211 2286 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-244"
Feb 9 19:15:41.355883 kubelet[2286]: E0209 19:15:41.355855 2286 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.23.244:6443/api/v1/nodes\": dial tcp 172.31.23.244:6443: connect: connection refused" node="ip-172-31-23-244"
Feb 9 19:15:41.563726 env[1649]: time="2024-02-09T19:15:41.563615259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-244,Uid:f720e598f84c12b7b9906d62e6647651,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:41.578647 env[1649]: time="2024-02-09T19:15:41.578583134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-244,Uid:d709d948f3e69a1d06debfd45e53bdad,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:41.590167 env[1649]: time="2024-02-09T19:15:41.590114809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-244,Uid:f77ffd354c80c1a060eb8232005178f7,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:41.648143 kubelet[2286]: E0209 19:15:41.648092 2286 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-244?timeout=10s\": dial tcp 172.31.23.244:6443: connect: connection refused" interval="800ms"
Feb 9 19:15:41.758834 kubelet[2286]: I0209 19:15:41.758605 2286 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-244"
Feb 9 19:15:41.759108 kubelet[2286]: E0209 19:15:41.759081 2286 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.23.244:6443/api/v1/nodes\": dial tcp 172.31.23.244:6443: connect: connection refused" node="ip-172-31-23-244"
Feb 9 19:15:42.055407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251146336.mount: Deactivated successfully.
Feb 9 19:15:42.064599 env[1649]: time="2024-02-09T19:15:42.064526587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.072956 env[1649]: time="2024-02-09T19:15:42.072902513Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.075113 env[1649]: time="2024-02-09T19:15:42.075070442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.077076 env[1649]: time="2024-02-09T19:15:42.077031893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.078608 env[1649]: time="2024-02-09T19:15:42.078565729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.081044 env[1649]: time="2024-02-09T19:15:42.081001757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.084360 env[1649]: time="2024-02-09T19:15:42.084314334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.085917 env[1649]: time="2024-02-09T19:15:42.085873656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.091715 env[1649]: time="2024-02-09T19:15:42.091644230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.094034 env[1649]: time="2024-02-09T19:15:42.093968087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.097961 env[1649]: time="2024-02-09T19:15:42.097905995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.104399 env[1649]: time="2024-02-09T19:15:42.104334875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:42.130254 env[1649]: time="2024-02-09T19:15:42.130111175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:42.130517 env[1649]: time="2024-02-09T19:15:42.130213430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:42.130517 env[1649]: time="2024-02-09T19:15:42.130248363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:42.130879 env[1649]: time="2024-02-09T19:15:42.130585776Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c984148ee2c585f5ebb8d6a5c4750ee1fa3220f6e710f197e14271cc695fbe7 pid=2325 runtime=io.containerd.runc.v2
Feb 9 19:15:42.160536 kubelet[2286]: W0209 19:15:42.160452 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.23.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:42.161150 kubelet[2286]: E0209 19:15:42.160540 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:42.175464 systemd[1]: Started cri-containerd-3c984148ee2c585f5ebb8d6a5c4750ee1fa3220f6e710f197e14271cc695fbe7.scope.
Feb 9 19:15:42.190723 env[1649]: time="2024-02-09T19:15:42.188123484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:42.190723 env[1649]: time="2024-02-09T19:15:42.188224298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:42.190723 env[1649]: time="2024-02-09T19:15:42.188250720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:42.190723 env[1649]: time="2024-02-09T19:15:42.188588397Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e5524970e219bd3b7e1a71584913bc8315624e0d129d48dfd563e9bff04a787 pid=2364 runtime=io.containerd.runc.v2
Feb 9 19:15:42.193541 env[1649]: time="2024-02-09T19:15:42.193203314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:42.193541 env[1649]: time="2024-02-09T19:15:42.193277610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:42.193541 env[1649]: time="2024-02-09T19:15:42.193303732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:42.193945 env[1649]: time="2024-02-09T19:15:42.193780398Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29b37ee9c503f013d4e8a1e1329ac3d9c2b00be3e335ad18f662bc010b13fa46 pid=2352 runtime=io.containerd.runc.v2
Feb 9 19:15:42.219924 systemd[1]: Started cri-containerd-1e5524970e219bd3b7e1a71584913bc8315624e0d129d48dfd563e9bff04a787.scope.
Feb 9 19:15:42.275124 systemd[1]: Started cri-containerd-29b37ee9c503f013d4e8a1e1329ac3d9c2b00be3e335ad18f662bc010b13fa46.scope.
Feb 9 19:15:42.324151 env[1649]: time="2024-02-09T19:15:42.323992522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-244,Uid:f720e598f84c12b7b9906d62e6647651,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c984148ee2c585f5ebb8d6a5c4750ee1fa3220f6e710f197e14271cc695fbe7\""
Feb 9 19:15:42.332308 env[1649]: time="2024-02-09T19:15:42.332237898Z" level=info msg="CreateContainer within sandbox \"3c984148ee2c585f5ebb8d6a5c4750ee1fa3220f6e710f197e14271cc695fbe7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 19:15:42.374233 env[1649]: time="2024-02-09T19:15:42.374141764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-244,Uid:d709d948f3e69a1d06debfd45e53bdad,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e5524970e219bd3b7e1a71584913bc8315624e0d129d48dfd563e9bff04a787\""
Feb 9 19:15:42.379317 env[1649]: time="2024-02-09T19:15:42.379232098Z" level=info msg="CreateContainer within sandbox \"3c984148ee2c585f5ebb8d6a5c4750ee1fa3220f6e710f197e14271cc695fbe7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873\""
Feb 9 19:15:42.391540 env[1649]: time="2024-02-09T19:15:42.391458731Z" level=info msg="StartContainer for \"4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873\""
Feb 9 19:15:42.402672 env[1649]: time="2024-02-09T19:15:42.401604394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-244,Uid:f77ffd354c80c1a060eb8232005178f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"29b37ee9c503f013d4e8a1e1329ac3d9c2b00be3e335ad18f662bc010b13fa46\""
Feb 9 19:15:42.404768 env[1649]: time="2024-02-09T19:15:42.404659090Z" level=info msg="CreateContainer within sandbox \"1e5524970e219bd3b7e1a71584913bc8315624e0d129d48dfd563e9bff04a787\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 19:15:42.429403 env[1649]: time="2024-02-09T19:15:42.429323923Z" level=info msg="CreateContainer within sandbox \"29b37ee9c503f013d4e8a1e1329ac3d9c2b00be3e335ad18f662bc010b13fa46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 19:15:42.438478 env[1649]: time="2024-02-09T19:15:42.437204288Z" level=info msg="CreateContainer within sandbox \"1e5524970e219bd3b7e1a71584913bc8315624e0d129d48dfd563e9bff04a787\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe8f320adab5d1af3f0ad8620f2f9080a1945f503c36085475b34e0a8601d222\""
Feb 9 19:15:42.440939 systemd[1]: Started cri-containerd-4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873.scope.
Feb 9 19:15:42.449278 kubelet[2286]: E0209 19:15:42.449205 2286 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-244?timeout=10s\": dial tcp 172.31.23.244:6443: connect: connection refused" interval="1.6s"
Feb 9 19:15:42.450394 env[1649]: time="2024-02-09T19:15:42.450331647Z" level=info msg="StartContainer for \"fe8f320adab5d1af3f0ad8620f2f9080a1945f503c36085475b34e0a8601d222\""
Feb 9 19:15:42.472126 env[1649]: time="2024-02-09T19:15:42.472061578Z" level=info msg="CreateContainer within sandbox \"29b37ee9c503f013d4e8a1e1329ac3d9c2b00be3e335ad18f662bc010b13fa46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368\""
Feb 9 19:15:42.474501 env[1649]: time="2024-02-09T19:15:42.474432109Z" level=info msg="StartContainer for \"0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368\""
Feb 9 19:15:42.483113 kubelet[2286]: W0209 19:15:42.482672 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.23.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:42.483113 kubelet[2286]: E0209 19:15:42.482825 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:42.483113 kubelet[2286]: W0209 19:15:42.482990 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.23.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:42.483113 kubelet[2286]: E0209 19:15:42.483050 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused
Feb 9 19:15:42.524150 systemd[1]: Started cri-containerd-fe8f320adab5d1af3f0ad8620f2f9080a1945f503c36085475b34e0a8601d222.scope.
Feb 9 19:15:42.548387 systemd[1]: Started cri-containerd-0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368.scope.
Feb 9 19:15:42.564806 kubelet[2286]: I0209 19:15:42.564746 2286 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-244" Feb 9 19:15:42.565420 kubelet[2286]: E0209 19:15:42.565376 2286 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.23.244:6443/api/v1/nodes\": dial tcp 172.31.23.244:6443: connect: connection refused" node="ip-172-31-23-244" Feb 9 19:15:42.571151 kubelet[2286]: W0209 19:15:42.570975 2286 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.23.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-244&limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused Feb 9 19:15:42.571151 kubelet[2286]: E0209 19:15:42.571072 2286 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-244&limit=500&resourceVersion=0": dial tcp 172.31.23.244:6443: connect: connection refused Feb 9 19:15:42.592202 env[1649]: time="2024-02-09T19:15:42.592053070Z" level=info msg="StartContainer for \"4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873\" returns successfully" Feb 9 19:15:42.668219 env[1649]: time="2024-02-09T19:15:42.668141070Z" level=info msg="StartContainer for \"fe8f320adab5d1af3f0ad8620f2f9080a1945f503c36085475b34e0a8601d222\" returns successfully" Feb 9 19:15:42.683204 env[1649]: time="2024-02-09T19:15:42.683139306Z" level=info msg="StartContainer for \"0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368\" returns successfully" Feb 9 19:15:44.167829 kubelet[2286]: I0209 19:15:44.167757 2286 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-244" Feb 9 19:15:46.461021 kubelet[2286]: E0209 19:15:46.460981 2286 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-244\" not found" node="ip-172-31-23-244" Feb 9 19:15:46.468932 update_engine[1640]: I0209 19:15:46.468865 1640 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:15:46.531617 kubelet[2286]: I0209 19:15:46.531401 2286 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-23-244" Feb 9 19:15:46.578384 kubelet[2286]: E0209 19:15:46.578040 2286 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-23-244.17b247c9db448d5d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-23-244", UID:"ip-172-31-23-244", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-23-244"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 19540829, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 19540829, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-23-244"}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:46.674067 kubelet[2286]: E0209 19:15:46.673571 2286 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-23-244.17b247c9dc413c97", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-23-244", UID:"ip-172-31-23-244", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-23-244"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 36100759, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 36100759, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-23-244"}': 'namespaces "default" not found' (will not retry!) 
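The two "Server rejected event" dumps above (and the one that follows) are the kubelet flushing its earliest startup Events (Starting, InvalidDiskCapacity, NodeHasSufficientMemory) into the default namespace before that namespace exists, so the apiserver answers 'namespaces "default" not found' and, as the log notes, the recorder will not retry. A hedged sketch of the equivalent client-go call; the kubeconfig path and field values are illustrative.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{GenerateName: "ip-172-31-23-244.", Namespace: "default"},
		InvolvedObject: corev1.ObjectReference{Kind: "Node", Name: "ip-172-31-23-244", UID: "ip-172-31-23-244"},
		Reason:         "Starting",
		Message:        "Starting kubelet.",
		Type:           corev1.EventTypeNormal,
		Source:         corev1.EventSource{Component: "kubelet", Host: "ip-172-31-23-244"},
	}
	// Fails with NotFound until the apiserver has finished creating its
	// bootstrap namespaces (default, kube-system, kube-node-lease).
	_, err = client.CoreV1().Events("default").Create(context.TODO(), ev, metav1.CreateOptions{})
	fmt.Println(err)
}
```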
Feb 9 19:15:46.800867 kubelet[2286]: E0209 19:15:46.800122 2286 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-23-244.17b247c9dfd9a409", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-23-244", UID:"ip-172-31-23-244", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-31-23-244 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-23-244"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 96420361, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 96420361, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-23-244"}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:47.020863 kubelet[2286]: I0209 19:15:47.020816 2286 apiserver.go:52] "Watching apiserver" Feb 9 19:15:47.043066 kubelet[2286]: I0209 19:15:47.043010 2286 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:15:49.284595 systemd[1]: Reloading. Feb 9 19:15:49.420743 /usr/lib/systemd/system-generators/torcx-generator[2671]: time="2024-02-09T19:15:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:49.421778 /usr/lib/systemd/system-generators/torcx-generator[2671]: time="2024-02-09T19:15:49Z" level=info msg="torcx already run" Feb 9 19:15:49.596357 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:49.596396 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:49.638606 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:49.898214 systemd[1]: Stopping kubelet.service... Feb 9 19:15:49.918889 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:15:49.919309 systemd[1]: Stopped kubelet.service. Feb 9 19:15:49.919392 systemd[1]: kubelet.service: Consumed 1.954s CPU time. Feb 9 19:15:49.924241 systemd[1]: Started kubelet.service. 
Feb 9 19:15:50.050241 sudo[2737]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:15:50.051508 sudo[2737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:15:50.066521 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:50.066521 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:50.067066 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:50.067066 kubelet[2727]: I0209 19:15:50.066651 2727 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:15:50.076412 kubelet[2727]: I0209 19:15:50.076359 2727 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:15:50.076412 kubelet[2727]: I0209 19:15:50.076405 2727 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:15:50.077426 kubelet[2727]: I0209 19:15:50.077373 2727 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:15:50.087150 kubelet[2727]: I0209 19:15:50.087093 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:15:50.089890 kubelet[2727]: I0209 19:15:50.089837 2727 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:50.107948 kubelet[2727]: W0209 19:15:50.107898 2727 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:15:50.109153 kubelet[2727]: I0209 19:15:50.109111 2727 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:15:50.109525 kubelet[2727]: I0209 19:15:50.109488 2727 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:15:50.109809 kubelet[2727]: I0209 19:15:50.109764 2727 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:15:50.109976 kubelet[2727]: I0209 19:15:50.109822 2727 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:15:50.109976 kubelet[2727]: I0209 19:15:50.109844 2727 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:15:50.109976 kubelet[2727]: I0209 19:15:50.109892 2727 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:50.110171 kubelet[2727]: I0209 19:15:50.110036 2727 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:15:50.110171 kubelet[2727]: I0209 19:15:50.110062 2727 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:15:50.110171 kubelet[2727]: I0209 19:15:50.110094 2727 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:15:50.110171 kubelet[2727]: I0209 19:15:50.110123 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:15:50.118841 kubelet[2727]: I0209 19:15:50.116052 2727 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:15:50.118841 kubelet[2727]: I0209 19:15:50.116974 2727 server.go:1232] "Started kubelet" Feb 9 19:15:50.120020 kubelet[2727]: I0209 19:15:50.119980 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:15:50.128210 kubelet[2727]: E0209 19:15:50.128158 2727 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:15:50.128210 kubelet[2727]: E0209 19:15:50.128213 2727 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:15:50.138227 kubelet[2727]: I0209 19:15:50.138174 2727 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:15:50.139394 kubelet[2727]: I0209 19:15:50.139346 2727 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:15:50.141358 kubelet[2727]: I0209 19:15:50.141300 2727 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:15:50.141646 kubelet[2727]: I0209 19:15:50.141605 2727 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:15:50.146197 kubelet[2727]: I0209 19:15:50.146131 2727 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:15:50.147023 kubelet[2727]: I0209 19:15:50.146975 2727 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:15:50.147295 kubelet[2727]: I0209 19:15:50.147260 2727 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:15:50.244295 kubelet[2727]: I0209 19:15:50.243582 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:15:50.253170 kubelet[2727]: I0209 19:15:50.246389 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 19:15:50.253170 kubelet[2727]: I0209 19:15:50.246451 2727 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:15:50.253170 kubelet[2727]: I0209 19:15:50.246483 2727 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:15:50.253170 kubelet[2727]: E0209 19:15:50.246581 2727 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:15:50.295236 kubelet[2727]: I0209 19:15:50.295185 2727 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-244" Feb 9 19:15:50.320038 kubelet[2727]: I0209 19:15:50.319754 2727 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-23-244" Feb 9 19:15:50.320038 kubelet[2727]: I0209 19:15:50.319899 2727 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-23-244" Feb 9 19:15:50.346937 kubelet[2727]: E0209 19:15:50.346860 2727 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 19:15:50.390545 kubelet[2727]: I0209 19:15:50.390493 2727 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:15:50.390545 kubelet[2727]: I0209 19:15:50.390536 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:15:50.390765 kubelet[2727]: I0209 19:15:50.390572 2727 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:50.390931 kubelet[2727]: I0209 19:15:50.390897 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:15:50.391002 kubelet[2727]: I0209 19:15:50.390950 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 19:15:50.391002 kubelet[2727]: I0209 19:15:50.390968 2727 policy_none.go:49] "None policy: Start" Feb 9 19:15:50.392550 kubelet[2727]: I0209 19:15:50.392503 2727 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:15:50.392681 kubelet[2727]: I0209 19:15:50.392558 2727 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:15:50.392936 kubelet[2727]: I0209 19:15:50.392901 2727 state_mem.go:75] "Updated machine memory state" Feb 9 19:15:50.400210 kubelet[2727]: I0209 
19:15:50.400159 2727 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:15:50.403620 kubelet[2727]: I0209 19:15:50.403549 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:15:50.547149 kubelet[2727]: I0209 19:15:50.547084 2727 topology_manager.go:215] "Topology Admit Handler" podUID="d709d948f3e69a1d06debfd45e53bdad" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-244" Feb 9 19:15:50.547326 kubelet[2727]: I0209 19:15:50.547254 2727 topology_manager.go:215] "Topology Admit Handler" podUID="f77ffd354c80c1a060eb8232005178f7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-244" Feb 9 19:15:50.547472 kubelet[2727]: I0209 19:15:50.547332 2727 topology_manager.go:215] "Topology Admit Handler" podUID="f720e598f84c12b7b9906d62e6647651" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-244" Feb 9 19:15:50.569110 kubelet[2727]: I0209 19:15:50.569047 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d709d948f3e69a1d06debfd45e53bdad-ca-certs\") pod \"kube-apiserver-ip-172-31-23-244\" (UID: \"d709d948f3e69a1d06debfd45e53bdad\") " pod="kube-system/kube-apiserver-ip-172-31-23-244" Feb 9 19:15:50.569399 kubelet[2727]: I0209 19:15:50.569357 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244" Feb 9 19:15:50.569603 kubelet[2727]: I0209 19:15:50.569564 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244" Feb 9 19:15:50.569841 kubelet[2727]: I0209 19:15:50.569817 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244" Feb 9 19:15:50.570047 kubelet[2727]: I0209 19:15:50.570016 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f720e598f84c12b7b9906d62e6647651-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-244\" (UID: \"f720e598f84c12b7b9906d62e6647651\") " pod="kube-system/kube-scheduler-ip-172-31-23-244" Feb 9 19:15:50.570217 kubelet[2727]: I0209 19:15:50.570185 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d709d948f3e69a1d06debfd45e53bdad-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-244\" (UID: \"d709d948f3e69a1d06debfd45e53bdad\") " pod="kube-system/kube-apiserver-ip-172-31-23-244" Feb 9 19:15:50.570387 kubelet[2727]: I0209 19:15:50.570355 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d709d948f3e69a1d06debfd45e53bdad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-244\" (UID: \"d709d948f3e69a1d06debfd45e53bdad\") " pod="kube-system/kube-apiserver-ip-172-31-23-244" Feb 9 19:15:50.570589 kubelet[2727]: I0209 19:15:50.570551 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244" Feb 9 19:15:50.570880 kubelet[2727]: I0209 19:15:50.570857 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f77ffd354c80c1a060eb8232005178f7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-244\" (UID: \"f77ffd354c80c1a060eb8232005178f7\") " pod="kube-system/kube-controller-manager-ip-172-31-23-244" Feb 9 19:15:51.124866 kubelet[2727]: I0209 19:15:51.124822 2727 apiserver.go:52] "Watching apiserver" Feb 9 19:15:51.147528 kubelet[2727]: I0209 19:15:51.147488 2727 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:15:51.181711 sudo[2737]: pam_unix(sudo:session): session closed for user root Feb 9 19:15:51.341610 kubelet[2727]: E0209 19:15:51.341573 2727 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-244\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-244" Feb 9 19:15:51.372298 kubelet[2727]: I0209 19:15:51.372243 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-244" podStartSLOduration=1.372063069 podCreationTimestamp="2024-02-09 19:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:51.371960456 +0000 UTC m=+1.438094988" watchObservedRunningTime="2024-02-09 19:15:51.372063069 +0000 UTC m=+1.438197589" Feb 9 19:15:51.413859 kubelet[2727]: I0209 19:15:51.413689 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-244" podStartSLOduration=1.413636941 podCreationTimestamp="2024-02-09 19:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:51.402731059 +0000 UTC m=+1.468865591" watchObservedRunningTime="2024-02-09 19:15:51.413636941 +0000 UTC m=+1.479771461" Feb 9 19:15:51.433517 kubelet[2727]: I0209 19:15:51.433452 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-244" podStartSLOduration=1.433395161 podCreationTimestamp="2024-02-09 19:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:51.416209432 +0000 UTC m=+1.482343976" watchObservedRunningTime="2024-02-09 19:15:51.433395161 +0000 UTC m=+1.499529741" Feb 9 19:15:53.306254 sudo[1874]: pam_unix(sudo:session): session closed for user root Feb 9 19:15:53.332241 sshd[1871]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:53.341972 systemd[1]: sshd@4-172.31.23.244:22-147.75.109.163:42228.service: 
Deactivated successfully. Feb 9 19:15:53.343327 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:15:53.343668 systemd[1]: session-5.scope: Consumed 10.614s CPU time. Feb 9 19:15:53.345495 systemd-logind[1638]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:15:53.348044 systemd-logind[1638]: Removed session 5. Feb 9 19:15:54.485042 amazon-ssm-agent[1622]: 2024-02-09 19:15:54 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:16:03.472765 kubelet[2727]: I0209 19:16:03.472731 2727 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:16:03.474341 env[1649]: time="2024-02-09T19:16:03.474272012Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:16:03.474941 kubelet[2727]: I0209 19:16:03.474864 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:16:03.722209 kubelet[2727]: I0209 19:16:03.722140 2727 topology_manager.go:215] "Topology Admit Handler" podUID="5c9c52b1-a6bf-4041-81be-c97505ed7f20" podNamespace="kube-system" podName="kube-proxy-cq294" Feb 9 19:16:03.734505 systemd[1]: Created slice kubepods-besteffort-pod5c9c52b1_a6bf_4041_81be_c97505ed7f20.slice. Feb 9 19:16:03.746247 kubelet[2727]: I0209 19:16:03.746198 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c9c52b1-a6bf-4041-81be-c97505ed7f20-kube-proxy\") pod \"kube-proxy-cq294\" (UID: \"5c9c52b1-a6bf-4041-81be-c97505ed7f20\") " pod="kube-system/kube-proxy-cq294" Feb 9 19:16:03.746422 kubelet[2727]: I0209 19:16:03.746296 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppz5d\" (UniqueName: \"kubernetes.io/projected/5c9c52b1-a6bf-4041-81be-c97505ed7f20-kube-api-access-ppz5d\") pod \"kube-proxy-cq294\" (UID: \"5c9c52b1-a6bf-4041-81be-c97505ed7f20\") " pod="kube-system/kube-proxy-cq294" Feb 9 19:16:03.746422 kubelet[2727]: I0209 19:16:03.746387 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c9c52b1-a6bf-4041-81be-c97505ed7f20-xtables-lock\") pod \"kube-proxy-cq294\" (UID: \"5c9c52b1-a6bf-4041-81be-c97505ed7f20\") " pod="kube-system/kube-proxy-cq294" Feb 9 19:16:03.746551 kubelet[2727]: I0209 19:16:03.746456 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c9c52b1-a6bf-4041-81be-c97505ed7f20-lib-modules\") pod \"kube-proxy-cq294\" (UID: \"5c9c52b1-a6bf-4041-81be-c97505ed7f20\") " pod="kube-system/kube-proxy-cq294" Feb 9 19:16:03.769176 kubelet[2727]: I0209 19:16:03.769110 2727 topology_manager.go:215] "Topology Admit Handler" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" podNamespace="kube-system" podName="cilium-rkz9f" Feb 9 19:16:03.780196 systemd[1]: Created slice kubepods-burstable-pod0dadab27_183d_4cf2_ae0c_d168e1026d98.slice. 
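Note the two slice names: kube-proxy-cq294 is placed in a kubepods-besteffort-… slice and cilium-rkz9f in a kubepods-burstable-… one. The QoS class baked into the cgroup path is derived from the pod's container resource requests and limits; here is a deliberately simplified sketch of that classification (the real kubelet logic walks every container and considers cpu and memory separately).

```go
package main

import "fmt"

// podResources is a toy stand-in for a pod's aggregated requests/limits.
type podResources struct {
	requests, limits map[string]string
}

func qosClass(r podResources) string {
	if len(r.requests) == 0 && len(r.limits) == 0 {
		return "BestEffort" // no resources declared at all, like kube-proxy here
	}
	for _, res := range []string{"cpu", "memory"} {
		if r.requests[res] == "" || r.requests[res] != r.limits[res] {
			return "Burstable" // some resources set, but limits != requests
		}
	}
	return "Guaranteed" // limits present and equal to requests for cpu and memory
}

func main() {
	fmt.Println(qosClass(podResources{})) // BestEffort -> kubepods-besteffort-….slice
	fmt.Println(qosClass(podResources{
		requests: map[string]string{"cpu": "100m"},
	})) // Burstable -> kubepods-burstable-….slice
}
```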
Feb 9 19:16:03.846916 kubelet[2727]: I0209 19:16:03.846838 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-run\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.847104 kubelet[2727]: I0209 19:16:03.847032 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-hostproc\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.847232 kubelet[2727]: I0209 19:16:03.847198 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-etc-cni-netd\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.847547 kubelet[2727]: I0209 19:16:03.847395 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-lib-modules\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.847881 kubelet[2727]: I0209 19:16:03.847854 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-cgroup\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.848079 kubelet[2727]: I0209 19:16:03.848053 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-kernel\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.848243 kubelet[2727]: I0209 19:16:03.848220 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hvx\" (UniqueName: \"kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-kube-api-access-q5hvx\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.848404 kubelet[2727]: I0209 19:16:03.848382 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0dadab27-183d-4cf2-ae0c-d168e1026d98-clustermesh-secrets\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.849467 kubelet[2727]: I0209 19:16:03.849362 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-hubble-tls\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.849620 kubelet[2727]: I0209 19:16:03.849526 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-xtables-lock\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.849620 kubelet[2727]: I0209 19:16:03.849604 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cni-path\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.849769 kubelet[2727]: I0209 19:16:03.849699 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-bpf-maps\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.849875 kubelet[2727]: I0209 19:16:03.849838 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-config-path\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.849961 kubelet[2727]: I0209 19:16:03.849915 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-net\") pod \"cilium-rkz9f\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " pod="kube-system/cilium-rkz9f" Feb 9 19:16:03.862596 kubelet[2727]: E0209 19:16:03.862549 2727 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 19:16:03.862777 kubelet[2727]: E0209 19:16:03.862624 2727 projected.go:198] Error preparing data for projected volume kube-api-access-ppz5d for pod kube-system/kube-proxy-cq294: configmap "kube-root-ca.crt" not found Feb 9 19:16:03.862777 kubelet[2727]: E0209 19:16:03.862742 2727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c9c52b1-a6bf-4041-81be-c97505ed7f20-kube-api-access-ppz5d podName:5c9c52b1-a6bf-4041-81be-c97505ed7f20 nodeName:}" failed. No retries permitted until 2024-02-09 19:16:04.362708266 +0000 UTC m=+14.428842786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ppz5d" (UniqueName: "kubernetes.io/projected/5c9c52b1-a6bf-4041-81be-c97505ed7f20-kube-api-access-ppz5d") pod "kube-proxy-cq294" (UID: "5c9c52b1-a6bf-4041-81be-c97505ed7f20") : configmap "kube-root-ca.crt" not found Feb 9 19:16:04.087978 env[1649]: time="2024-02-09T19:16:04.087399863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkz9f,Uid:0dadab27-183d-4cf2-ae0c-d168e1026d98,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:04.146119 env[1649]: time="2024-02-09T19:16:04.146005083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:04.146425 env[1649]: time="2024-02-09T19:16:04.146359915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:04.146617 env[1649]: time="2024-02-09T19:16:04.146555339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:04.147172 env[1649]: time="2024-02-09T19:16:04.147096185Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88 pid=2812 runtime=io.containerd.runc.v2 Feb 9 19:16:04.200484 systemd[1]: Started cri-containerd-67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88.scope. Feb 9 19:16:04.278942 env[1649]: time="2024-02-09T19:16:04.278877853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkz9f,Uid:0dadab27-183d-4cf2-ae0c-d168e1026d98,Namespace:kube-system,Attempt:0,} returns sandbox id \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\"" Feb 9 19:16:04.284694 env[1649]: time="2024-02-09T19:16:04.284596357Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:16:04.405721 kubelet[2727]: I0209 19:16:04.405553 2727 topology_manager.go:215] "Topology Admit Handler" podUID="a3e7f9c3-beda-4f45-940b-b4648e2bf14e" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-pgsw8" Feb 9 19:16:04.416328 systemd[1]: Created slice kubepods-besteffort-poda3e7f9c3_beda_4f45_940b_b4648e2bf14e.slice. Feb 9 19:16:04.463155 kubelet[2727]: I0209 19:16:04.463109 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7dt7\" (UniqueName: \"kubernetes.io/projected/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-kube-api-access-q7dt7\") pod \"cilium-operator-6bc8ccdb58-pgsw8\" (UID: \"a3e7f9c3-beda-4f45-940b-b4648e2bf14e\") " pod="kube-system/cilium-operator-6bc8ccdb58-pgsw8" Feb 9 19:16:04.463512 kubelet[2727]: I0209 19:16:04.463488 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-pgsw8\" (UID: \"a3e7f9c3-beda-4f45-940b-b4648e2bf14e\") " pod="kube-system/cilium-operator-6bc8ccdb58-pgsw8" Feb 9 19:16:04.647716 env[1649]: time="2024-02-09T19:16:04.647641452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cq294,Uid:5c9c52b1-a6bf-4041-81be-c97505ed7f20,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:04.673760 env[1649]: time="2024-02-09T19:16:04.673532454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:04.673760 env[1649]: time="2024-02-09T19:16:04.673618507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:04.674459 env[1649]: time="2024-02-09T19:16:04.674027718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:04.674773 env[1649]: time="2024-02-09T19:16:04.674695855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf7c4fd3ebf584479035626365041eda019018f0b5ae11ea9d250875fea44bde pid=2855 runtime=io.containerd.runc.v2 Feb 9 19:16:04.708057 systemd[1]: Started cri-containerd-bf7c4fd3ebf584479035626365041eda019018f0b5ae11ea9d250875fea44bde.scope. 
Feb 9 19:16:04.722702 env[1649]: time="2024-02-09T19:16:04.722634254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-pgsw8,Uid:a3e7f9c3-beda-4f45-940b-b4648e2bf14e,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:04.767397 env[1649]: time="2024-02-09T19:16:04.767203093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:04.767740 env[1649]: time="2024-02-09T19:16:04.767666769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:04.768045 env[1649]: time="2024-02-09T19:16:04.767965744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:04.770129 env[1649]: time="2024-02-09T19:16:04.770009652Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6 pid=2893 runtime=io.containerd.runc.v2 Feb 9 19:16:04.777170 env[1649]: time="2024-02-09T19:16:04.777101719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cq294,Uid:5c9c52b1-a6bf-4041-81be-c97505ed7f20,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf7c4fd3ebf584479035626365041eda019018f0b5ae11ea9d250875fea44bde\"" Feb 9 19:16:04.782766 env[1649]: time="2024-02-09T19:16:04.782365502Z" level=info msg="CreateContainer within sandbox \"bf7c4fd3ebf584479035626365041eda019018f0b5ae11ea9d250875fea44bde\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:16:04.810355 systemd[1]: Started cri-containerd-2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6.scope. Feb 9 19:16:04.822171 env[1649]: time="2024-02-09T19:16:04.820941849Z" level=info msg="CreateContainer within sandbox \"bf7c4fd3ebf584479035626365041eda019018f0b5ae11ea9d250875fea44bde\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"872c9e05ec15e421e8e5e221296dcbf62d70faefa77c6637ccd68ff444ced4b4\"" Feb 9 19:16:04.823841 env[1649]: time="2024-02-09T19:16:04.822766245Z" level=info msg="StartContainer for \"872c9e05ec15e421e8e5e221296dcbf62d70faefa77c6637ccd68ff444ced4b4\"" Feb 9 19:16:04.870416 systemd[1]: Started cri-containerd-872c9e05ec15e421e8e5e221296dcbf62d70faefa77c6637ccd68ff444ced4b4.scope. 
Feb 9 19:16:04.926691 env[1649]: time="2024-02-09T19:16:04.925966229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-pgsw8,Uid:a3e7f9c3-beda-4f45-940b-b4648e2bf14e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\"" Feb 9 19:16:04.995819 env[1649]: time="2024-02-09T19:16:04.995730785Z" level=info msg="StartContainer for \"872c9e05ec15e421e8e5e221296dcbf62d70faefa77c6637ccd68ff444ced4b4\" returns successfully" Feb 9 19:16:05.391537 kubelet[2727]: I0209 19:16:05.391476 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cq294" podStartSLOduration=2.391393704 podCreationTimestamp="2024-02-09 19:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:05.390940809 +0000 UTC m=+15.457075377" watchObservedRunningTime="2024-02-09 19:16:05.391393704 +0000 UTC m=+15.457528236" Feb 9 19:16:11.955415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019996481.mount: Deactivated successfully. Feb 9 19:16:15.848172 env[1649]: time="2024-02-09T19:16:15.848112911Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:15.852932 env[1649]: time="2024-02-09T19:16:15.852865585Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:15.858331 env[1649]: time="2024-02-09T19:16:15.858265465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:15.860876 env[1649]: time="2024-02-09T19:16:15.859958377Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 19:16:15.862263 env[1649]: time="2024-02-09T19:16:15.862205113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:16:15.865970 env[1649]: time="2024-02-09T19:16:15.865909212Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:16:15.886905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210788966.mount: Deactivated successfully. Feb 9 19:16:15.909150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978542217.mount: Deactivated successfully. 
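The cilium agent image above is pulled by tag@digest, and PullImage resolves it to image reference sha256:b69cb5…: the sha256 digest pins the exact content, while the v1.12.5 tag is only informational. A sketch of the same pull through the CRI ImageService follows (socket path assumed, as before).

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		},
	})
	if err != nil {
		panic(err)
	}
	// Resolved reference, matching the "returns image reference" entry above.
	fmt.Println(resp.ImageRef)
}
```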
Feb 9 19:16:15.922440 env[1649]: time="2024-02-09T19:16:15.922376406Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\"" Feb 9 19:16:15.923907 env[1649]: time="2024-02-09T19:16:15.923858780Z" level=info msg="StartContainer for \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\"" Feb 9 19:16:15.961099 systemd[1]: Started cri-containerd-7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce.scope. Feb 9 19:16:16.021868 env[1649]: time="2024-02-09T19:16:16.021764387Z" level=info msg="StartContainer for \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\" returns successfully" Feb 9 19:16:16.041073 systemd[1]: cri-containerd-7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce.scope: Deactivated successfully. Feb 9 19:16:16.439825 env[1649]: time="2024-02-09T19:16:16.439737081Z" level=info msg="shim disconnected" id=7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce Feb 9 19:16:16.440235 env[1649]: time="2024-02-09T19:16:16.440169606Z" level=warning msg="cleaning up after shim disconnected" id=7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce namespace=k8s.io Feb 9 19:16:16.440235 env[1649]: time="2024-02-09T19:16:16.440219808Z" level=info msg="cleaning up dead shim" Feb 9 19:16:16.458679 env[1649]: time="2024-02-09T19:16:16.458597000Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3141 runtime=io.containerd.runc.v2\n" Feb 9 19:16:16.882228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce-rootfs.mount: Deactivated successfully. Feb 9 19:16:17.277896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631782012.mount: Deactivated successfully. Feb 9 19:16:17.458324 env[1649]: time="2024-02-09T19:16:17.458208894Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:16:17.482633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795365384.mount: Deactivated successfully. Feb 9 19:16:17.490770 env[1649]: time="2024-02-09T19:16:17.490704828Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\"" Feb 9 19:16:17.495088 env[1649]: time="2024-02-09T19:16:17.495023457Z" level=info msg="StartContainer for \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\"" Feb 9 19:16:17.545065 systemd[1]: Started cri-containerd-b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad.scope. Feb 9 19:16:17.619754 env[1649]: time="2024-02-09T19:16:17.619627968Z" level=info msg="StartContainer for \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\" returns successfully" Feb 9 19:16:17.643101 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:16:17.645140 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:16:17.645645 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:16:17.655050 systemd[1]: Starting systemd-sysctl.service... 
Feb 9 19:16:17.655747 systemd[1]: cri-containerd-b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad.scope: Deactivated successfully. Feb 9 19:16:17.677287 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:16:17.756368 env[1649]: time="2024-02-09T19:16:17.756304439Z" level=info msg="shim disconnected" id=b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad Feb 9 19:16:17.756738 env[1649]: time="2024-02-09T19:16:17.756691346Z" level=warning msg="cleaning up after shim disconnected" id=b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad namespace=k8s.io Feb 9 19:16:17.756909 env[1649]: time="2024-02-09T19:16:17.756879933Z" level=info msg="cleaning up dead shim" Feb 9 19:16:17.784770 env[1649]: time="2024-02-09T19:16:17.784713619Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3203 runtime=io.containerd.runc.v2\n" Feb 9 19:16:18.336766 env[1649]: time="2024-02-09T19:16:18.336705240Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:18.339683 env[1649]: time="2024-02-09T19:16:18.339637050Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:18.342357 env[1649]: time="2024-02-09T19:16:18.342296648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:18.343673 env[1649]: time="2024-02-09T19:16:18.343596222Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 19:16:18.349585 env[1649]: time="2024-02-09T19:16:18.349532196Z" level=info msg="CreateContainer within sandbox \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:16:18.370866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269222305.mount: Deactivated successfully. Feb 9 19:16:18.383047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660772665.mount: Deactivated successfully. Feb 9 19:16:18.388106 env[1649]: time="2024-02-09T19:16:18.387999573Z" level=info msg="CreateContainer within sandbox \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\"" Feb 9 19:16:18.391502 env[1649]: time="2024-02-09T19:16:18.390608394Z" level=info msg="StartContainer for \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\"" Feb 9 19:16:18.422117 env[1649]: time="2024-02-09T19:16:18.422046932Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:16:18.456663 systemd[1]: Started cri-containerd-7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591.scope. 
Feb 9 19:16:18.473355 env[1649]: time="2024-02-09T19:16:18.473253441Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\"" Feb 9 19:16:18.475146 env[1649]: time="2024-02-09T19:16:18.475070567Z" level=info msg="StartContainer for \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\"" Feb 9 19:16:18.525378 systemd[1]: Started cri-containerd-8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70.scope. Feb 9 19:16:18.584038 env[1649]: time="2024-02-09T19:16:18.581748882Z" level=info msg="StartContainer for \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\" returns successfully" Feb 9 19:16:18.622096 env[1649]: time="2024-02-09T19:16:18.620356853Z" level=info msg="StartContainer for \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\" returns successfully" Feb 9 19:16:18.629695 systemd[1]: cri-containerd-8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70.scope: Deactivated successfully. Feb 9 19:16:18.765127 env[1649]: time="2024-02-09T19:16:18.765050305Z" level=info msg="shim disconnected" id=8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70 Feb 9 19:16:18.765127 env[1649]: time="2024-02-09T19:16:18.765121592Z" level=warning msg="cleaning up after shim disconnected" id=8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70 namespace=k8s.io Feb 9 19:16:18.765544 env[1649]: time="2024-02-09T19:16:18.765144310Z" level=info msg="cleaning up dead shim" Feb 9 19:16:18.794672 env[1649]: time="2024-02-09T19:16:18.794573391Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3302 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:16:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 9 19:16:19.440064 env[1649]: time="2024-02-09T19:16:19.440006803Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:16:19.472228 env[1649]: time="2024-02-09T19:16:19.472128220Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\"" Feb 9 19:16:19.473361 env[1649]: time="2024-02-09T19:16:19.473299247Z" level=info msg="StartContainer for \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\"" Feb 9 19:16:19.560193 systemd[1]: Started cri-containerd-8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081.scope. Feb 9 19:16:19.741107 env[1649]: time="2024-02-09T19:16:19.740964594Z" level=info msg="StartContainer for \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\" returns successfully" Feb 9 19:16:19.751275 systemd[1]: cri-containerd-8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081.scope: Deactivated successfully. 
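Among the cilium init steps completing here, mount-bpf-fs (container 8c697ba9…) exists only to mount the BPF pseudo-filesystem on the host so that pinned maps survive agent restarts. A minimal equivalent of that one step, assuming the conventional mountpoint and sufficient privileges; a real implementation would first check /proc/mounts before mounting.

```go
package main

import "golang.org/x/sys/unix"

func main() {
	// Equivalent of `mount -t bpf bpffs /sys/fs/bpf`; requires CAP_SYS_ADMIN.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		panic(err)
	}
}
```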
Feb 9 19:16:19.819757 env[1649]: time="2024-02-09T19:16:19.819691074Z" level=info msg="shim disconnected" id=8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081 Feb 9 19:16:19.820181 env[1649]: time="2024-02-09T19:16:19.820140638Z" level=warning msg="cleaning up after shim disconnected" id=8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081 namespace=k8s.io Feb 9 19:16:19.820325 env[1649]: time="2024-02-09T19:16:19.820294890Z" level=info msg="cleaning up dead shim" Feb 9 19:16:19.841325 env[1649]: time="2024-02-09T19:16:19.841235613Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n" Feb 9 19:16:19.885659 systemd[1]: run-containerd-runc-k8s.io-8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081-runc.nszgdP.mount: Deactivated successfully. Feb 9 19:16:19.885916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081-rootfs.mount: Deactivated successfully. Feb 9 19:16:20.452648 env[1649]: time="2024-02-09T19:16:20.452591499Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:16:20.485684 kubelet[2727]: I0209 19:16:20.485626 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-pgsw8" podStartSLOduration=3.070434067 podCreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="2024-02-09 19:16:04.928934559 +0000 UTC m=+14.995069067" lastFinishedPulling="2024-02-09 19:16:18.344072682 +0000 UTC m=+28.410207202" observedRunningTime="2024-02-09 19:16:19.759624494 +0000 UTC m=+29.825759014" watchObservedRunningTime="2024-02-09 19:16:20.485572202 +0000 UTC m=+30.551706818" Feb 9 19:16:20.490134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516904552.mount: Deactivated successfully. Feb 9 19:16:20.501642 env[1649]: time="2024-02-09T19:16:20.501570788Z" level=info msg="CreateContainer within sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\"" Feb 9 19:16:20.502857 env[1649]: time="2024-02-09T19:16:20.502779724Z" level=info msg="StartContainer for \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\"" Feb 9 19:16:20.566571 systemd[1]: Started cri-containerd-0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765.scope. Feb 9 19:16:20.653353 env[1649]: time="2024-02-09T19:16:20.653275765Z" level=info msg="StartContainer for \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\" returns successfully" Feb 9 19:16:20.864939 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 19:16:20.929730 kubelet[2727]: I0209 19:16:20.929493 2727 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:16:20.977980 kubelet[2727]: I0209 19:16:20.977924 2727 topology_manager.go:215] "Topology Admit Handler" podUID="de1930ed-0072-4756-b109-5b94cee67150" podNamespace="kube-system" podName="coredns-5dd5756b68-q7dxc" Feb 9 19:16:20.988425 systemd[1]: Created slice kubepods-burstable-podde1930ed_0072_4756_b109_5b94cee67150.slice. 
Feb 9 19:16:20.994829 kubelet[2727]: I0209 19:16:20.993940 2727 topology_manager.go:215] "Topology Admit Handler" podUID="ec17e1a9-2069-4d52-bb3f-bf72dc5b9edc" podNamespace="kube-system" podName="coredns-5dd5756b68-4ts9k" Feb 9 19:16:21.006929 systemd[1]: Created slice kubepods-burstable-podec17e1a9_2069_4d52_bb3f_bf72dc5b9edc.slice. Feb 9 19:16:21.087109 kubelet[2727]: I0209 19:16:21.087047 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92k59\" (UniqueName: \"kubernetes.io/projected/de1930ed-0072-4756-b109-5b94cee67150-kube-api-access-92k59\") pod \"coredns-5dd5756b68-q7dxc\" (UID: \"de1930ed-0072-4756-b109-5b94cee67150\") " pod="kube-system/coredns-5dd5756b68-q7dxc" Feb 9 19:16:21.087293 kubelet[2727]: I0209 19:16:21.087149 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec17e1a9-2069-4d52-bb3f-bf72dc5b9edc-config-volume\") pod \"coredns-5dd5756b68-4ts9k\" (UID: \"ec17e1a9-2069-4d52-bb3f-bf72dc5b9edc\") " pod="kube-system/coredns-5dd5756b68-4ts9k" Feb 9 19:16:21.087293 kubelet[2727]: I0209 19:16:21.087228 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv27r\" (UniqueName: \"kubernetes.io/projected/ec17e1a9-2069-4d52-bb3f-bf72dc5b9edc-kube-api-access-dv27r\") pod \"coredns-5dd5756b68-4ts9k\" (UID: \"ec17e1a9-2069-4d52-bb3f-bf72dc5b9edc\") " pod="kube-system/coredns-5dd5756b68-4ts9k" Feb 9 19:16:21.087419 kubelet[2727]: I0209 19:16:21.087306 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de1930ed-0072-4756-b109-5b94cee67150-config-volume\") pod \"coredns-5dd5756b68-q7dxc\" (UID: \"de1930ed-0072-4756-b109-5b94cee67150\") " pod="kube-system/coredns-5dd5756b68-q7dxc" Feb 9 19:16:21.299725 env[1649]: time="2024-02-09T19:16:21.298930412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7dxc,Uid:de1930ed-0072-4756-b109-5b94cee67150,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:21.319409 env[1649]: time="2024-02-09T19:16:21.319351642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4ts9k,Uid:ec17e1a9-2069-4d52-bb3f-bf72dc5b9edc,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:21.784835 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 19:16:23.585429 systemd-networkd[1449]: cilium_host: Link UP Feb 9 19:16:23.586718 systemd-networkd[1449]: cilium_net: Link UP Feb 9 19:16:23.593616 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:16:23.593760 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:16:23.593766 systemd-networkd[1449]: cilium_net: Gained carrier Feb 9 19:16:23.594485 systemd-networkd[1449]: cilium_host: Gained carrier Feb 9 19:16:23.594764 systemd-networkd[1449]: cilium_net: Gained IPv6LL Feb 9 19:16:23.595099 systemd-networkd[1449]: cilium_host: Gained IPv6LL Feb 9 19:16:23.597264 (udev-worker)[3490]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:23.599005 (udev-worker)[3529]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:23.762922 (udev-worker)[3539]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:16:23.771876 systemd-networkd[1449]: cilium_vxlan: Link UP Feb 9 19:16:23.771896 systemd-networkd[1449]: cilium_vxlan: Gained carrier Feb 9 19:16:24.240826 kernel: NET: Registered PF_ALG protocol family Feb 9 19:16:25.440012 systemd-networkd[1449]: cilium_vxlan: Gained IPv6LL Feb 9 19:16:25.534128 systemd-networkd[1449]: lxc_health: Link UP Feb 9 19:16:25.544848 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:16:25.543036 systemd-networkd[1449]: lxc_health: Gained carrier Feb 9 19:16:25.891154 systemd-networkd[1449]: lxcb5fbbb39b040: Link UP Feb 9 19:16:25.900844 kernel: eth0: renamed from tmpcdc35 Feb 9 19:16:25.907923 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb5fbbb39b040: link becomes ready Feb 9 19:16:25.907478 systemd-networkd[1449]: lxcb5fbbb39b040: Gained carrier Feb 9 19:16:25.958381 systemd-networkd[1449]: lxc8b03672c53c2: Link UP Feb 9 19:16:25.973351 kernel: eth0: renamed from tmpf4b0e Feb 9 19:16:25.982964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8b03672c53c2: link becomes ready Feb 9 19:16:25.983283 systemd-networkd[1449]: lxc8b03672c53c2: Gained carrier Feb 9 19:16:26.120382 kubelet[2727]: I0209 19:16:26.120310 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rkz9f" podStartSLOduration=11.541371781 podCreationTimestamp="2024-02-09 19:16:03 +0000 UTC" firstStartedPulling="2024-02-09 19:16:04.282583569 +0000 UTC m=+14.348718101" lastFinishedPulling="2024-02-09 19:16:15.861461345 +0000 UTC m=+25.927595901" observedRunningTime="2024-02-09 19:16:21.486912999 +0000 UTC m=+31.553047531" watchObservedRunningTime="2024-02-09 19:16:26.120249581 +0000 UTC m=+36.186384113" Feb 9 19:16:26.976104 systemd-networkd[1449]: lxc_health: Gained IPv6LL Feb 9 19:16:27.168055 systemd-networkd[1449]: lxcb5fbbb39b040: Gained IPv6LL Feb 9 19:16:27.488027 systemd-networkd[1449]: lxc8b03672c53c2: Gained IPv6LL Feb 9 19:16:34.325372 env[1649]: time="2024-02-09T19:16:34.325235734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:34.325967 env[1649]: time="2024-02-09T19:16:34.325395226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:34.325967 env[1649]: time="2024-02-09T19:16:34.325483554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:34.325967 env[1649]: time="2024-02-09T19:16:34.325855823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdc35b77d23033aafa85c0f1c39ed3f25ecd2948cbc946c871313a9415bd28c2 pid=3905 runtime=io.containerd.runc.v2 Feb 9 19:16:34.385391 systemd[1]: Started cri-containerd-cdc35b77d23033aafa85c0f1c39ed3f25ecd2948cbc946c871313a9415bd28c2.scope. Feb 9 19:16:34.388690 systemd[1]: run-containerd-runc-k8s.io-cdc35b77d23033aafa85c0f1c39ed3f25ecd2948cbc946c871313a9415bd28c2-runc.P5upZb.mount: Deactivated successfully. 
Feb 9 19:16:34.504529 env[1649]: time="2024-02-09T19:16:34.504452713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7dxc,Uid:de1930ed-0072-4756-b109-5b94cee67150,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdc35b77d23033aafa85c0f1c39ed3f25ecd2948cbc946c871313a9415bd28c2\"" Feb 9 19:16:34.511385 env[1649]: time="2024-02-09T19:16:34.511291199Z" level=info msg="CreateContainer within sandbox \"cdc35b77d23033aafa85c0f1c39ed3f25ecd2948cbc946c871313a9415bd28c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:34.542261 env[1649]: time="2024-02-09T19:16:34.542132492Z" level=info msg="CreateContainer within sandbox \"cdc35b77d23033aafa85c0f1c39ed3f25ecd2948cbc946c871313a9415bd28c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"927570f8dc26ac4fbce58f451dc6f77512b236c285f362008e509e0cb603ce1c\"" Feb 9 19:16:34.543572 env[1649]: time="2024-02-09T19:16:34.543491049Z" level=info msg="StartContainer for \"927570f8dc26ac4fbce58f451dc6f77512b236c285f362008e509e0cb603ce1c\"" Feb 9 19:16:34.587677 systemd[1]: Started cri-containerd-927570f8dc26ac4fbce58f451dc6f77512b236c285f362008e509e0cb603ce1c.scope. Feb 9 19:16:34.620662 env[1649]: time="2024-02-09T19:16:34.620483314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:34.620901 env[1649]: time="2024-02-09T19:16:34.620651615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:34.620901 env[1649]: time="2024-02-09T19:16:34.620738010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:34.621461 env[1649]: time="2024-02-09T19:16:34.621209816Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4b0e37242ca723c3bbe77badf1dbd67cf99d8db1bf70fd64225322fe5a11338 pid=3961 runtime=io.containerd.runc.v2 Feb 9 19:16:34.652924 systemd[1]: Started cri-containerd-f4b0e37242ca723c3bbe77badf1dbd67cf99d8db1bf70fd64225322fe5a11338.scope. 
Feb 9 19:16:34.764123 env[1649]: time="2024-02-09T19:16:34.764030100Z" level=info msg="StartContainer for \"927570f8dc26ac4fbce58f451dc6f77512b236c285f362008e509e0cb603ce1c\" returns successfully" Feb 9 19:16:34.837220 env[1649]: time="2024-02-09T19:16:34.837151406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4ts9k,Uid:ec17e1a9-2069-4d52-bb3f-bf72dc5b9edc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4b0e37242ca723c3bbe77badf1dbd67cf99d8db1bf70fd64225322fe5a11338\"" Feb 9 19:16:34.844224 env[1649]: time="2024-02-09T19:16:34.844046332Z" level=info msg="CreateContainer within sandbox \"f4b0e37242ca723c3bbe77badf1dbd67cf99d8db1bf70fd64225322fe5a11338\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:34.873378 env[1649]: time="2024-02-09T19:16:34.873288160Z" level=info msg="CreateContainer within sandbox \"f4b0e37242ca723c3bbe77badf1dbd67cf99d8db1bf70fd64225322fe5a11338\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a7c891814d929b6513f65c8607070be98fdc4cfb6d58e97339c8f77c6ce3d73\"" Feb 9 19:16:34.875066 env[1649]: time="2024-02-09T19:16:34.874999318Z" level=info msg="StartContainer for \"5a7c891814d929b6513f65c8607070be98fdc4cfb6d58e97339c8f77c6ce3d73\"" Feb 9 19:16:34.912992 systemd[1]: Started cri-containerd-5a7c891814d929b6513f65c8607070be98fdc4cfb6d58e97339c8f77c6ce3d73.scope. Feb 9 19:16:34.999007 env[1649]: time="2024-02-09T19:16:34.998939383Z" level=info msg="StartContainer for \"5a7c891814d929b6513f65c8607070be98fdc4cfb6d58e97339c8f77c6ce3d73\" returns successfully" Feb 9 19:16:35.515103 kubelet[2727]: I0209 19:16:35.515048 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4ts9k" podStartSLOduration=31.514970522 podCreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:35.511123567 +0000 UTC m=+45.577258099" watchObservedRunningTime="2024-02-09 19:16:35.514970522 +0000 UTC m=+45.581105054" Feb 9 19:16:35.537060 kubelet[2727]: I0209 19:16:35.537001 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q7dxc" podStartSLOduration=31.536929045 podCreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:35.535664852 +0000 UTC m=+45.601799396" watchObservedRunningTime="2024-02-09 19:16:35.536929045 +0000 UTC m=+45.603063589" Feb 9 19:16:37.551350 systemd[1]: Started sshd@5-172.31.23.244:22-147.75.109.163:42936.service. Feb 9 19:16:37.730399 sshd[4067]: Accepted publickey for core from 147.75.109.163 port 42936 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:37.733129 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:37.741417 systemd-logind[1638]: New session 6 of user core. Feb 9 19:16:37.742479 systemd[1]: Started session-6.scope. Feb 9 19:16:37.997673 sshd[4067]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:38.003141 systemd[1]: sshd@5-172.31.23.244:22-147.75.109.163:42936.service: Deactivated successfully. Feb 9 19:16:38.004545 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:16:38.006079 systemd-logind[1638]: Session 6 logged out. Waiting for processes to exit. 
Feb 9 19:16:38.007743 systemd-logind[1638]: Removed session 6. Feb 9 19:16:43.025654 systemd[1]: Started sshd@6-172.31.23.244:22-147.75.109.163:42942.service. Feb 9 19:16:43.199855 sshd[4080]: Accepted publickey for core from 147.75.109.163 port 42942 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:43.203153 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:43.211830 systemd[1]: Started session-7.scope. Feb 9 19:16:43.213635 systemd-logind[1638]: New session 7 of user core. Feb 9 19:16:43.464307 sshd[4080]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:43.468567 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:16:43.469908 systemd-logind[1638]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:16:43.470186 systemd[1]: sshd@6-172.31.23.244:22-147.75.109.163:42942.service: Deactivated successfully. Feb 9 19:16:43.473015 systemd-logind[1638]: Removed session 7. Feb 9 19:16:48.495886 systemd[1]: Started sshd@7-172.31.23.244:22-147.75.109.163:58696.service. Feb 9 19:16:48.670924 sshd[4093]: Accepted publickey for core from 147.75.109.163 port 58696 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:48.673222 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:48.682606 systemd[1]: Started session-8.scope. Feb 9 19:16:48.683240 systemd-logind[1638]: New session 8 of user core. Feb 9 19:16:48.928477 sshd[4093]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:48.933210 systemd[1]: sshd@7-172.31.23.244:22-147.75.109.163:58696.service: Deactivated successfully. Feb 9 19:16:48.934509 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:16:48.936682 systemd-logind[1638]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:16:48.938253 systemd-logind[1638]: Removed session 8. Feb 9 19:16:53.958957 systemd[1]: Started sshd@8-172.31.23.244:22-147.75.109.163:58706.service. Feb 9 19:16:54.136024 sshd[4108]: Accepted publickey for core from 147.75.109.163 port 58706 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:54.139142 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:54.148188 systemd-logind[1638]: New session 9 of user core. Feb 9 19:16:54.149103 systemd[1]: Started session-9.scope. Feb 9 19:16:54.410367 sshd[4108]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:54.415011 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:16:54.416586 systemd[1]: sshd@8-172.31.23.244:22-147.75.109.163:58706.service: Deactivated successfully. Feb 9 19:16:54.418065 systemd-logind[1638]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:16:54.419853 systemd-logind[1638]: Removed session 9. Feb 9 19:16:59.438211 systemd[1]: Started sshd@9-172.31.23.244:22-147.75.109.163:50916.service. Feb 9 19:16:59.608171 sshd[4122]: Accepted publickey for core from 147.75.109.163 port 50916 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:59.611434 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:59.619181 systemd-logind[1638]: New session 10 of user core. Feb 9 19:16:59.620243 systemd[1]: Started session-10.scope. Feb 9 19:16:59.866903 sshd[4122]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:59.872168 systemd-logind[1638]: Session 10 logged out. Waiting for processes to exit. 
Feb 9 19:16:59.872570 systemd[1]: sshd@9-172.31.23.244:22-147.75.109.163:50916.service: Deactivated successfully. Feb 9 19:16:59.874140 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:16:59.876314 systemd-logind[1638]: Removed session 10. Feb 9 19:16:59.894487 systemd[1]: Started sshd@10-172.31.23.244:22-147.75.109.163:50920.service. Feb 9 19:17:00.067210 sshd[4135]: Accepted publickey for core from 147.75.109.163 port 50920 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:00.070260 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:00.077885 systemd-logind[1638]: New session 11 of user core. Feb 9 19:17:00.079099 systemd[1]: Started session-11.scope. Feb 9 19:17:01.750514 sshd[4135]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:01.756696 systemd[1]: sshd@10-172.31.23.244:22-147.75.109.163:50920.service: Deactivated successfully. Feb 9 19:17:01.758592 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:17:01.760775 systemd-logind[1638]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:17:01.762655 systemd-logind[1638]: Removed session 11. Feb 9 19:17:01.779516 systemd[1]: Started sshd@11-172.31.23.244:22-147.75.109.163:50924.service. Feb 9 19:17:01.962278 sshd[4147]: Accepted publickey for core from 147.75.109.163 port 50924 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:01.964740 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:01.971884 systemd-logind[1638]: New session 12 of user core. Feb 9 19:17:01.973540 systemd[1]: Started session-12.scope. Feb 9 19:17:02.224961 sshd[4147]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:02.230077 systemd[1]: sshd@11-172.31.23.244:22-147.75.109.163:50924.service: Deactivated successfully. Feb 9 19:17:02.232195 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:17:02.234195 systemd-logind[1638]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:17:02.236731 systemd-logind[1638]: Removed session 12. Feb 9 19:17:07.256203 systemd[1]: Started sshd@12-172.31.23.244:22-147.75.109.163:52380.service. Feb 9 19:17:07.435473 sshd[4161]: Accepted publickey for core from 147.75.109.163 port 52380 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:07.438147 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:07.447165 systemd-logind[1638]: New session 13 of user core. Feb 9 19:17:07.447539 systemd[1]: Started session-13.scope. Feb 9 19:17:07.705455 sshd[4161]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:07.712184 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:17:07.714087 systemd-logind[1638]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:17:07.714271 systemd[1]: sshd@12-172.31.23.244:22-147.75.109.163:52380.service: Deactivated successfully. Feb 9 19:17:07.716938 systemd-logind[1638]: Removed session 13. Feb 9 19:17:12.734771 systemd[1]: Started sshd@13-172.31.23.244:22-147.75.109.163:52396.service. Feb 9 19:17:12.913398 sshd[4173]: Accepted publickey for core from 147.75.109.163 port 52396 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:12.916817 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:12.925527 systemd[1]: Started session-14.scope. Feb 9 19:17:12.926727 systemd-logind[1638]: New session 14 of user core. 
Feb 9 19:17:13.168478 sshd[4173]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:13.173223 systemd[1]: sshd@13-172.31.23.244:22-147.75.109.163:52396.service: Deactivated successfully. Feb 9 19:17:13.174584 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:17:13.175815 systemd-logind[1638]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:17:13.177447 systemd-logind[1638]: Removed session 14. Feb 9 19:17:18.198991 systemd[1]: Started sshd@14-172.31.23.244:22-147.75.109.163:33462.service. Feb 9 19:17:18.378398 sshd[4185]: Accepted publickey for core from 147.75.109.163 port 33462 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:18.381425 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:18.389132 systemd-logind[1638]: New session 15 of user core. Feb 9 19:17:18.390161 systemd[1]: Started session-15.scope. Feb 9 19:17:18.643872 sshd[4185]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:18.648450 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:17:18.649706 systemd-logind[1638]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:17:18.651033 systemd[1]: sshd@14-172.31.23.244:22-147.75.109.163:33462.service: Deactivated successfully. Feb 9 19:17:18.653561 systemd-logind[1638]: Removed session 15. Feb 9 19:17:18.671104 systemd[1]: Started sshd@15-172.31.23.244:22-147.75.109.163:33466.service. Feb 9 19:17:18.840556 sshd[4197]: Accepted publickey for core from 147.75.109.163 port 33466 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:18.843556 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:18.852578 systemd[1]: Started session-16.scope. Feb 9 19:17:18.853919 systemd-logind[1638]: New session 16 of user core. Feb 9 19:17:19.145176 sshd[4197]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:19.150224 systemd[1]: sshd@15-172.31.23.244:22-147.75.109.163:33466.service: Deactivated successfully. Feb 9 19:17:19.151602 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:17:19.152855 systemd-logind[1638]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:17:19.154589 systemd-logind[1638]: Removed session 16. Feb 9 19:17:19.176387 systemd[1]: Started sshd@16-172.31.23.244:22-147.75.109.163:33482.service. Feb 9 19:17:19.353571 sshd[4206]: Accepted publickey for core from 147.75.109.163 port 33482 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:19.356061 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:19.364921 systemd-logind[1638]: New session 17 of user core. Feb 9 19:17:19.366296 systemd[1]: Started session-17.scope. Feb 9 19:17:20.764935 sshd[4206]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:20.771747 systemd[1]: sshd@16-172.31.23.244:22-147.75.109.163:33482.service: Deactivated successfully. Feb 9 19:17:20.773116 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:17:20.773899 systemd-logind[1638]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:17:20.775543 systemd-logind[1638]: Removed session 17. Feb 9 19:17:20.795039 systemd[1]: Started sshd@17-172.31.23.244:22-147.75.109.163:33496.service. 
Feb 9 19:17:20.997034 sshd[4222]: Accepted publickey for core from 147.75.109.163 port 33496 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:20.999654 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:21.009202 systemd[1]: Started session-18.scope. Feb 9 19:17:21.009992 systemd-logind[1638]: New session 18 of user core. Feb 9 19:17:21.631580 sshd[4222]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:21.637041 systemd-logind[1638]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:17:21.638216 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:17:21.639461 systemd[1]: sshd@17-172.31.23.244:22-147.75.109.163:33496.service: Deactivated successfully. Feb 9 19:17:21.642452 systemd-logind[1638]: Removed session 18. Feb 9 19:17:21.658433 systemd[1]: Started sshd@18-172.31.23.244:22-147.75.109.163:33508.service. Feb 9 19:17:21.832533 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 33508 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:21.835630 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:21.843374 systemd-logind[1638]: New session 19 of user core. Feb 9 19:17:21.844429 systemd[1]: Started session-19.scope. Feb 9 19:17:22.092983 sshd[4233]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:22.098368 systemd[1]: sshd@18-172.31.23.244:22-147.75.109.163:33508.service: Deactivated successfully. Feb 9 19:17:22.099671 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:17:22.101743 systemd-logind[1638]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:17:22.104470 systemd-logind[1638]: Removed session 19. Feb 9 19:17:27.124474 systemd[1]: Started sshd@19-172.31.23.244:22-147.75.109.163:35926.service. Feb 9 19:17:27.298460 sshd[4246]: Accepted publickey for core from 147.75.109.163 port 35926 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:27.301828 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:27.309903 systemd-logind[1638]: New session 20 of user core. Feb 9 19:17:27.312621 systemd[1]: Started session-20.scope. Feb 9 19:17:27.557384 sshd[4246]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:27.562520 systemd-logind[1638]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:17:27.563047 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:17:27.564210 systemd[1]: sshd@19-172.31.23.244:22-147.75.109.163:35926.service: Deactivated successfully. Feb 9 19:17:27.566216 systemd-logind[1638]: Removed session 20. Feb 9 19:17:32.587570 systemd[1]: Started sshd@20-172.31.23.244:22-147.75.109.163:35934.service. Feb 9 19:17:32.761959 sshd[4261]: Accepted publickey for core from 147.75.109.163 port 35934 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:32.763855 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:32.772131 systemd-logind[1638]: New session 21 of user core. Feb 9 19:17:32.773113 systemd[1]: Started session-21.scope. Feb 9 19:17:33.028267 sshd[4261]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:33.032714 systemd[1]: sshd@20-172.31.23.244:22-147.75.109.163:35934.service: Deactivated successfully. Feb 9 19:17:33.034059 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:17:33.035517 systemd-logind[1638]: Session 21 logged out. 
Waiting for processes to exit. Feb 9 19:17:33.037923 systemd-logind[1638]: Removed session 21. Feb 9 19:17:38.058407 systemd[1]: Started sshd@21-172.31.23.244:22-147.75.109.163:32836.service. Feb 9 19:17:38.236426 sshd[4275]: Accepted publickey for core from 147.75.109.163 port 32836 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:38.238956 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:38.247211 systemd-logind[1638]: New session 22 of user core. Feb 9 19:17:38.248400 systemd[1]: Started session-22.scope. Feb 9 19:17:38.491167 sshd[4275]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:38.496010 systemd-logind[1638]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:17:38.498167 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:17:38.499321 systemd[1]: sshd@21-172.31.23.244:22-147.75.109.163:32836.service: Deactivated successfully. Feb 9 19:17:38.502346 systemd-logind[1638]: Removed session 22. Feb 9 19:17:43.522543 systemd[1]: Started sshd@22-172.31.23.244:22-147.75.109.163:32844.service. Feb 9 19:17:43.704429 sshd[4288]: Accepted publickey for core from 147.75.109.163 port 32844 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:43.708525 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:43.718517 systemd[1]: Started session-23.scope. Feb 9 19:17:43.719316 systemd-logind[1638]: New session 23 of user core. Feb 9 19:17:43.971007 sshd[4288]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:43.976443 systemd[1]: sshd@22-172.31.23.244:22-147.75.109.163:32844.service: Deactivated successfully. Feb 9 19:17:43.978978 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:17:43.981461 systemd-logind[1638]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:17:43.984467 systemd-logind[1638]: Removed session 23. Feb 9 19:17:43.998851 systemd[1]: Started sshd@23-172.31.23.244:22-147.75.109.163:32860.service. Feb 9 19:17:44.177256 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 32860 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:44.180489 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:44.187863 systemd-logind[1638]: New session 24 of user core. Feb 9 19:17:44.189172 systemd[1]: Started session-24.scope. Feb 9 19:17:46.726819 env[1649]: time="2024-02-09T19:17:46.723704014Z" level=info msg="StopContainer for \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\" with timeout 30 (s)" Feb 9 19:17:46.726819 env[1649]: time="2024-02-09T19:17:46.724447393Z" level=info msg="Stop container \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\" with signal terminated" Feb 9 19:17:46.758109 systemd[1]: run-containerd-runc-k8s.io-0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765-runc.YntPkx.mount: Deactivated successfully. Feb 9 19:17:46.765373 systemd[1]: cri-containerd-7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591.scope: Deactivated successfully. 
Feb 9 19:17:46.809699 env[1649]: time="2024-02-09T19:17:46.809576005Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:17:46.816185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591-rootfs.mount: Deactivated successfully. Feb 9 19:17:46.834353 env[1649]: time="2024-02-09T19:17:46.834288504Z" level=info msg="StopContainer for \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\" with timeout 2 (s)" Feb 9 19:17:46.835481 env[1649]: time="2024-02-09T19:17:46.835375360Z" level=info msg="Stop container \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\" with signal terminated" Feb 9 19:17:46.843843 env[1649]: time="2024-02-09T19:17:46.843752588Z" level=info msg="shim disconnected" id=7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591 Feb 9 19:17:46.844039 env[1649]: time="2024-02-09T19:17:46.843840215Z" level=warning msg="cleaning up after shim disconnected" id=7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591 namespace=k8s.io Feb 9 19:17:46.844039 env[1649]: time="2024-02-09T19:17:46.843863304Z" level=info msg="cleaning up dead shim" Feb 9 19:17:46.856459 systemd-networkd[1449]: lxc_health: Link DOWN Feb 9 19:17:46.856478 systemd-networkd[1449]: lxc_health: Lost carrier Feb 9 19:17:46.883162 systemd[1]: cri-containerd-0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765.scope: Deactivated successfully. Feb 9 19:17:46.884482 env[1649]: time="2024-02-09T19:17:46.883239964Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4346 runtime=io.containerd.runc.v2\n" Feb 9 19:17:46.883727 systemd[1]: cri-containerd-0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765.scope: Consumed 14.378s CPU time. Feb 9 19:17:46.888618 env[1649]: time="2024-02-09T19:17:46.888549077Z" level=info msg="StopContainer for \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\" returns successfully" Feb 9 19:17:46.889511 env[1649]: time="2024-02-09T19:17:46.889441658Z" level=info msg="StopPodSandbox for \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\"" Feb 9 19:17:46.889652 env[1649]: time="2024-02-09T19:17:46.889546530Z" level=info msg="Container to stop \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:46.896367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6-shm.mount: Deactivated successfully. Feb 9 19:17:46.913013 systemd[1]: cri-containerd-2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6.scope: Deactivated successfully. Feb 9 19:17:46.927480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765-rootfs.mount: Deactivated successfully. 
Feb 9 19:17:46.942675 env[1649]: time="2024-02-09T19:17:46.942587247Z" level=info msg="shim disconnected" id=0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765 Feb 9 19:17:46.942675 env[1649]: time="2024-02-09T19:17:46.942662129Z" level=warning msg="cleaning up after shim disconnected" id=0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765 namespace=k8s.io Feb 9 19:17:46.943111 env[1649]: time="2024-02-09T19:17:46.942687834Z" level=info msg="cleaning up dead shim" Feb 9 19:17:46.974753 env[1649]: time="2024-02-09T19:17:46.974669765Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4394 runtime=io.containerd.runc.v2\n" Feb 9 19:17:46.976352 env[1649]: time="2024-02-09T19:17:46.976266615Z" level=info msg="shim disconnected" id=2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6 Feb 9 19:17:46.976706 env[1649]: time="2024-02-09T19:17:46.976670022Z" level=warning msg="cleaning up after shim disconnected" id=2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6 namespace=k8s.io Feb 9 19:17:46.978178 env[1649]: time="2024-02-09T19:17:46.976828548Z" level=info msg="cleaning up dead shim" Feb 9 19:17:46.981727 env[1649]: time="2024-02-09T19:17:46.981666536Z" level=info msg="StopContainer for \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\" returns successfully" Feb 9 19:17:46.982735 env[1649]: time="2024-02-09T19:17:46.982685925Z" level=info msg="StopPodSandbox for \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\"" Feb 9 19:17:46.983124 env[1649]: time="2024-02-09T19:17:46.983016105Z" level=info msg="Container to stop \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:46.983282 env[1649]: time="2024-02-09T19:17:46.983248397Z" level=info msg="Container to stop \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:46.983442 env[1649]: time="2024-02-09T19:17:46.983408135Z" level=info msg="Container to stop \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:46.983861 env[1649]: time="2024-02-09T19:17:46.983822918Z" level=info msg="Container to stop \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:46.984035 env[1649]: time="2024-02-09T19:17:46.984002445Z" level=info msg="Container to stop \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:46.995394 env[1649]: time="2024-02-09T19:17:46.995337625Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4413 runtime=io.containerd.runc.v2\n" Feb 9 19:17:46.997066 systemd[1]: cri-containerd-67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88.scope: Deactivated successfully. 
Feb 9 19:17:46.999348 env[1649]: time="2024-02-09T19:17:46.998683839Z" level=info msg="TearDown network for sandbox \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" successfully" Feb 9 19:17:46.999348 env[1649]: time="2024-02-09T19:17:46.998772882Z" level=info msg="StopPodSandbox for \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" returns successfully" Feb 9 19:17:47.062389 env[1649]: time="2024-02-09T19:17:47.062311755Z" level=info msg="shim disconnected" id=67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88 Feb 9 19:17:47.062389 env[1649]: time="2024-02-09T19:17:47.062382533Z" level=warning msg="cleaning up after shim disconnected" id=67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88 namespace=k8s.io Feb 9 19:17:47.062718 env[1649]: time="2024-02-09T19:17:47.062405430Z" level=info msg="cleaning up dead shim" Feb 9 19:17:47.078465 env[1649]: time="2024-02-09T19:17:47.078406460Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4447 runtime=io.containerd.runc.v2\n" Feb 9 19:17:47.079283 env[1649]: time="2024-02-09T19:17:47.079236003Z" level=info msg="TearDown network for sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" successfully" Feb 9 19:17:47.079514 env[1649]: time="2024-02-09T19:17:47.079478448Z" level=info msg="StopPodSandbox for \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" returns successfully" Feb 9 19:17:47.127300 kubelet[2727]: I0209 19:17:47.127261 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-run\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.128240 kubelet[2727]: I0209 19:17:47.128210 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-etc-cni-netd\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.128457 kubelet[2727]: I0209 19:17:47.128417 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-kernel\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.128599 kubelet[2727]: I0209 19:17:47.127244 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.128740 kubelet[2727]: I0209 19:17:47.128264 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.128908 kubelet[2727]: I0209 19:17:47.128493 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.129048 kubelet[2727]: I0209 19:17:47.128634 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-config-path\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.129236 kubelet[2727]: I0209 19:17:47.129203 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-cilium-config-path\") pod \"a3e7f9c3-beda-4f45-940b-b4648e2bf14e\" (UID: \"a3e7f9c3-beda-4f45-940b-b4648e2bf14e\") " Feb 9 19:17:47.129416 kubelet[2727]: I0209 19:17:47.129383 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7dt7\" (UniqueName: \"kubernetes.io/projected/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-kube-api-access-q7dt7\") pod \"a3e7f9c3-beda-4f45-940b-b4648e2bf14e\" (UID: \"a3e7f9c3-beda-4f45-940b-b4648e2bf14e\") " Feb 9 19:17:47.129580 kubelet[2727]: I0209 19:17:47.129557 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-hostproc\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.129746 kubelet[2727]: I0209 19:17:47.129725 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-cgroup\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.129957 kubelet[2727]: I0209 19:17:47.129935 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-bpf-maps\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.130159 kubelet[2727]: I0209 19:17:47.130137 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-lib-modules\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.130364 kubelet[2727]: I0209 19:17:47.130312 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-hubble-tls\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.130582 kubelet[2727]: I0209 19:17:47.130560 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-net\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.130858 kubelet[2727]: I0209 19:17:47.130768 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-xtables-lock\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.131180 kubelet[2727]: I0209 19:17:47.131157 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0dadab27-183d-4cf2-ae0c-d168e1026d98-clustermesh-secrets\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.131368 kubelet[2727]: I0209 19:17:47.131347 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5hvx\" (UniqueName: \"kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-kube-api-access-q5hvx\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.131541 kubelet[2727]: I0209 19:17:47.131501 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cni-path\") pod \"0dadab27-183d-4cf2-ae0c-d168e1026d98\" (UID: \"0dadab27-183d-4cf2-ae0c-d168e1026d98\") " Feb 9 19:17:47.131718 kubelet[2727]: I0209 19:17:47.131696 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-kernel\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.131860 kubelet[2727]: I0209 19:17:47.131839 2727 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-etc-cni-netd\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.132088 kubelet[2727]: I0209 19:17:47.132060 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cni-path" (OuterVolumeSpecName: "cni-path") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.133759 kubelet[2727]: I0209 19:17:47.133705 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:17:47.133944 kubelet[2727]: I0209 19:17:47.133853 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.138867 kubelet[2727]: I0209 19:17:47.138764 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-kube-api-access-q7dt7" (OuterVolumeSpecName: "kube-api-access-q7dt7") pod "a3e7f9c3-beda-4f45-940b-b4648e2bf14e" (UID: "a3e7f9c3-beda-4f45-940b-b4648e2bf14e"). InnerVolumeSpecName "kube-api-access-q7dt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:17:47.139033 kubelet[2727]: I0209 19:17:47.138910 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-hostproc" (OuterVolumeSpecName: "hostproc") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.139033 kubelet[2727]: I0209 19:17:47.138956 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.139033 kubelet[2727]: I0209 19:17:47.138998 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.139231 kubelet[2727]: I0209 19:17:47.139041 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.139424 kubelet[2727]: I0209 19:17:47.138815 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3e7f9c3-beda-4f45-940b-b4648e2bf14e" (UID: "a3e7f9c3-beda-4f45-940b-b4648e2bf14e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:17:47.145331 kubelet[2727]: I0209 19:17:47.145256 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:17:47.145506 kubelet[2727]: I0209 19:17:47.145351 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:47.148696 kubelet[2727]: I0209 19:17:47.148645 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dadab27-183d-4cf2-ae0c-d168e1026d98-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:17:47.150898 kubelet[2727]: I0209 19:17:47.150843 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-kube-api-access-q5hvx" (OuterVolumeSpecName: "kube-api-access-q5hvx") pod "0dadab27-183d-4cf2-ae0c-d168e1026d98" (UID: "0dadab27-183d-4cf2-ae0c-d168e1026d98"). InnerVolumeSpecName "kube-api-access-q5hvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:17:47.233278 kubelet[2727]: I0209 19:17:47.233148 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-run\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.233491 kubelet[2727]: I0209 19:17:47.233467 2727 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-hostproc\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.233645 kubelet[2727]: I0209 19:17:47.233596 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-cgroup\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.233830 kubelet[2727]: I0209 19:17:47.233782 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0dadab27-183d-4cf2-ae0c-d168e1026d98-cilium-config-path\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.234023 kubelet[2727]: I0209 19:17:47.234002 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-cilium-config-path\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.234207 kubelet[2727]: I0209 19:17:47.234185 2727 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q7dt7\" (UniqueName: \"kubernetes.io/projected/a3e7f9c3-beda-4f45-940b-b4648e2bf14e-kube-api-access-q7dt7\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.234339 kubelet[2727]: I0209 19:17:47.234317 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-host-proc-sys-net\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.234508 kubelet[2727]: I0209 19:17:47.234488 2727 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-xtables-lock\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.234691 kubelet[2727]: I0209 19:17:47.234671 2727 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-bpf-maps\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.234847 kubelet[2727]: I0209 19:17:47.234826 2727 reconciler_common.go:300] "Volume 
detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-lib-modules\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.234964 kubelet[2727]: I0209 19:17:47.234944 2727 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-hubble-tls\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.235087 kubelet[2727]: I0209 19:17:47.235067 2727 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q5hvx\" (UniqueName: \"kubernetes.io/projected/0dadab27-183d-4cf2-ae0c-d168e1026d98-kube-api-access-q5hvx\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.235221 kubelet[2727]: I0209 19:17:47.235200 2727 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0dadab27-183d-4cf2-ae0c-d168e1026d98-cni-path\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.235344 kubelet[2727]: I0209 19:17:47.235324 2727 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0dadab27-183d-4cf2-ae0c-d168e1026d98-clustermesh-secrets\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:47.683659 kubelet[2727]: I0209 19:17:47.683623 2727 scope.go:117] "RemoveContainer" containerID="0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765" Feb 9 19:17:47.687949 env[1649]: time="2024-02-09T19:17:47.687875696Z" level=info msg="RemoveContainer for \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\"" Feb 9 19:17:47.695459 env[1649]: time="2024-02-09T19:17:47.695375648Z" level=info msg="RemoveContainer for \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\" returns successfully" Feb 9 19:17:47.716849 kubelet[2727]: I0209 19:17:47.714495 2727 scope.go:117] "RemoveContainer" containerID="8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081" Feb 9 19:17:47.717266 env[1649]: time="2024-02-09T19:17:47.716223176Z" level=info msg="RemoveContainer for \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\"" Feb 9 19:17:47.718356 systemd[1]: Removed slice kubepods-burstable-pod0dadab27_183d_4cf2_ae0c_d168e1026d98.slice. Feb 9 19:17:47.718581 systemd[1]: kubepods-burstable-pod0dadab27_183d_4cf2_ae0c_d168e1026d98.slice: Consumed 14.594s CPU time. Feb 9 19:17:47.725563 env[1649]: time="2024-02-09T19:17:47.725498294Z" level=info msg="RemoveContainer for \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\" returns successfully" Feb 9 19:17:47.726560 systemd[1]: Removed slice kubepods-besteffort-poda3e7f9c3_beda_4f45_940b_b4648e2bf14e.slice. 
Feb 9 19:17:47.730085 kubelet[2727]: I0209 19:17:47.730016 2727 scope.go:117] "RemoveContainer" containerID="8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70" Feb 9 19:17:47.735675 env[1649]: time="2024-02-09T19:17:47.735622119Z" level=info msg="RemoveContainer for \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\"" Feb 9 19:17:47.742471 env[1649]: time="2024-02-09T19:17:47.742385929Z" level=info msg="RemoveContainer for \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\" returns successfully" Feb 9 19:17:47.743232 kubelet[2727]: I0209 19:17:47.743193 2727 scope.go:117] "RemoveContainer" containerID="b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad" Feb 9 19:17:47.750512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6-rootfs.mount: Deactivated successfully. Feb 9 19:17:47.755400 env[1649]: time="2024-02-09T19:17:47.752222739Z" level=info msg="RemoveContainer for \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\"" Feb 9 19:17:47.750681 systemd[1]: var-lib-kubelet-pods-a3e7f9c3\x2dbeda\x2d4f45\x2d940b\x2db4648e2bf14e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7dt7.mount: Deactivated successfully. Feb 9 19:17:47.750839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88-rootfs.mount: Deactivated successfully. Feb 9 19:17:47.750988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88-shm.mount: Deactivated successfully. Feb 9 19:17:47.751121 systemd[1]: var-lib-kubelet-pods-0dadab27\x2d183d\x2d4cf2\x2dae0c\x2dd168e1026d98-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq5hvx.mount: Deactivated successfully. Feb 9 19:17:47.751260 systemd[1]: var-lib-kubelet-pods-0dadab27\x2d183d\x2d4cf2\x2dae0c\x2dd168e1026d98-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:17:47.751402 systemd[1]: var-lib-kubelet-pods-0dadab27\x2d183d\x2d4cf2\x2dae0c\x2dd168e1026d98-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 19:17:47.761122 env[1649]: time="2024-02-09T19:17:47.761063153Z" level=info msg="RemoveContainer for \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\" returns successfully" Feb 9 19:17:47.762252 kubelet[2727]: I0209 19:17:47.762217 2727 scope.go:117] "RemoveContainer" containerID="7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce" Feb 9 19:17:47.768366 env[1649]: time="2024-02-09T19:17:47.768286435Z" level=info msg="RemoveContainer for \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\"" Feb 9 19:17:47.776123 env[1649]: time="2024-02-09T19:17:47.776049617Z" level=info msg="RemoveContainer for \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\" returns successfully" Feb 9 19:17:47.776677 kubelet[2727]: I0209 19:17:47.776615 2727 scope.go:117] "RemoveContainer" containerID="0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765" Feb 9 19:17:47.777274 env[1649]: time="2024-02-09T19:17:47.777161578Z" level=error msg="ContainerStatus for \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\": not found" Feb 9 19:17:47.778302 kubelet[2727]: E0209 19:17:47.777715 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\": not found" containerID="0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765" Feb 9 19:17:47.778302 kubelet[2727]: I0209 19:17:47.777957 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765"} err="failed to get container status \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ead2c6054843b7be4e25c7ea6f9fc93d49545550469c79123f3560db2ebf765\": not found" Feb 9 19:17:47.778302 kubelet[2727]: I0209 19:17:47.777990 2727 scope.go:117] "RemoveContainer" containerID="8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081" Feb 9 19:17:47.778609 env[1649]: time="2024-02-09T19:17:47.778464274Z" level=error msg="ContainerStatus for \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\": not found" Feb 9 19:17:47.779545 kubelet[2727]: E0209 19:17:47.778842 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\": not found" containerID="8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081" Feb 9 19:17:47.779545 kubelet[2727]: I0209 19:17:47.778928 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081"} err="failed to get container status \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\": rpc error: code = NotFound desc = an error occurred when try to find container \"8de765d29f54c4acdff287630a69e7329365a3e84a1879096b8770680329d081\": not found" 
Feb 9 19:17:47.779545 kubelet[2727]: I0209 19:17:47.778953 2727 scope.go:117] "RemoveContainer" containerID="8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70" Feb 9 19:17:47.781025 env[1649]: time="2024-02-09T19:17:47.780898588Z" level=error msg="ContainerStatus for \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\": not found" Feb 9 19:17:47.781565 kubelet[2727]: E0209 19:17:47.781366 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\": not found" containerID="8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70" Feb 9 19:17:47.781565 kubelet[2727]: I0209 19:17:47.781434 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70"} err="failed to get container status \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c697ba9c4a58f3a863d395e003b588af3b478925ef9256788ed5f465465bc70\": not found" Feb 9 19:17:47.781565 kubelet[2727]: I0209 19:17:47.781460 2727 scope.go:117] "RemoveContainer" containerID="b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad" Feb 9 19:17:47.781947 env[1649]: time="2024-02-09T19:17:47.781842759Z" level=error msg="ContainerStatus for \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\": not found" Feb 9 19:17:47.782458 kubelet[2727]: E0209 19:17:47.782255 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\": not found" containerID="b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad" Feb 9 19:17:47.782458 kubelet[2727]: I0209 19:17:47.782307 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad"} err="failed to get container status \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"b70d1bf8cb984ffd7d7ba090a278dda1e9397a03cfdc68222419032c5643f9ad\": not found" Feb 9 19:17:47.782458 kubelet[2727]: I0209 19:17:47.782333 2727 scope.go:117] "RemoveContainer" containerID="7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce" Feb 9 19:17:47.782725 env[1649]: time="2024-02-09T19:17:47.782632604Z" level=error msg="ContainerStatus for \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\": not found" Feb 9 19:17:47.784229 kubelet[2727]: E0209 19:17:47.784186 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\": not found" containerID="7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce" Feb 9 19:17:47.784442 kubelet[2727]: I0209 19:17:47.784256 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce"} err="failed to get container status \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cc6604892ce238a0528aa982ba4874974a7fe3113b3cab12cbef821554218ce\": not found" Feb 9 19:17:47.784442 kubelet[2727]: I0209 19:17:47.784281 2727 scope.go:117] "RemoveContainer" containerID="7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591" Feb 9 19:17:47.786846 env[1649]: time="2024-02-09T19:17:47.786767344Z" level=info msg="RemoveContainer for \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\"" Feb 9 19:17:47.791586 env[1649]: time="2024-02-09T19:17:47.791520804Z" level=info msg="RemoveContainer for \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\" returns successfully" Feb 9 19:17:47.792233 kubelet[2727]: I0209 19:17:47.792047 2727 scope.go:117] "RemoveContainer" containerID="7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591" Feb 9 19:17:47.792948 env[1649]: time="2024-02-09T19:17:47.792777370Z" level=error msg="ContainerStatus for \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\": not found" Feb 9 19:17:47.793368 kubelet[2727]: E0209 19:17:47.793331 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\": not found" containerID="7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591" Feb 9 19:17:47.793482 kubelet[2727]: I0209 19:17:47.793393 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591"} err="failed to get container status \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fe46b796a2c7a846365a94d811cb0ef361f8230fa2cf3c322fa67c241705591\": not found" Feb 9 19:17:48.251817 kubelet[2727]: I0209 19:17:48.251751 2727 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" path="/var/lib/kubelet/pods/0dadab27-183d-4cf2-ae0c-d168e1026d98/volumes" Feb 9 19:17:48.254017 kubelet[2727]: I0209 19:17:48.253987 2727 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a3e7f9c3-beda-4f45-940b-b4648e2bf14e" path="/var/lib/kubelet/pods/a3e7f9c3-beda-4f45-940b-b4648e2bf14e/volumes" Feb 9 19:17:48.650462 sshd[4300]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:48.655684 systemd[1]: sshd@23-172.31.23.244:22-147.75.109.163:32860.service: Deactivated successfully. Feb 9 19:17:48.657040 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:17:48.657371 systemd[1]: session-24.scope: Consumed 1.732s CPU time. Feb 9 19:17:48.659084 systemd-logind[1638]: Session 24 logged out. 
Waiting for processes to exit. Feb 9 19:17:48.661467 systemd-logind[1638]: Removed session 24. Feb 9 19:17:48.677840 systemd[1]: Started sshd@24-172.31.23.244:22-147.75.109.163:49924.service. Feb 9 19:17:48.856512 sshd[4467]: Accepted publickey for core from 147.75.109.163 port 49924 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:48.859509 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:48.868258 systemd[1]: Started session-25.scope. Feb 9 19:17:48.869527 systemd-logind[1638]: New session 25 of user core. Feb 9 19:17:50.146598 sshd[4467]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:50.152695 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:17:50.153078 systemd[1]: session-25.scope: Consumed 1.060s CPU time. Feb 9 19:17:50.154197 systemd[1]: sshd@24-172.31.23.244:22-147.75.109.163:49924.service: Deactivated successfully. Feb 9 19:17:50.154236 systemd-logind[1638]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:17:50.157157 systemd-logind[1638]: Removed session 25. Feb 9 19:17:50.179457 systemd[1]: Started sshd@25-172.31.23.244:22-147.75.109.163:49936.service. Feb 9 19:17:50.193984 kubelet[2727]: I0209 19:17:50.193939 2727 topology_manager.go:215] "Topology Admit Handler" podUID="de63a6be-88da-4526-ade0-3a54a15b17c9" podNamespace="kube-system" podName="cilium-nb4j2" Feb 9 19:17:50.194674 kubelet[2727]: E0209 19:17:50.194642 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" containerName="clean-cilium-state" Feb 9 19:17:50.194879 kubelet[2727]: E0209 19:17:50.194856 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" containerName="cilium-agent" Feb 9 19:17:50.195037 kubelet[2727]: E0209 19:17:50.195015 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" containerName="mount-cgroup" Feb 9 19:17:50.195193 kubelet[2727]: E0209 19:17:50.195171 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" containerName="apply-sysctl-overwrites" Feb 9 19:17:50.195359 kubelet[2727]: E0209 19:17:50.195336 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3e7f9c3-beda-4f45-940b-b4648e2bf14e" containerName="cilium-operator" Feb 9 19:17:50.195503 kubelet[2727]: E0209 19:17:50.195481 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" containerName="mount-bpf-fs" Feb 9 19:17:50.195720 kubelet[2727]: I0209 19:17:50.195683 2727 memory_manager.go:346] "RemoveStaleState removing state" podUID="0dadab27-183d-4cf2-ae0c-d168e1026d98" containerName="cilium-agent" Feb 9 19:17:50.195895 kubelet[2727]: I0209 19:17:50.195873 2727 memory_manager.go:346] "RemoveStaleState removing state" podUID="a3e7f9c3-beda-4f45-940b-b4648e2bf14e" containerName="cilium-operator" Feb 9 19:17:50.215538 env[1649]: time="2024-02-09T19:17:50.215036445Z" level=info msg="StopPodSandbox for \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\"" Feb 9 19:17:50.215538 env[1649]: time="2024-02-09T19:17:50.215234933Z" level=info msg="TearDown network for sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" successfully" Feb 9 19:17:50.215538 env[1649]: time="2024-02-09T19:17:50.215323988Z" level=info msg="StopPodSandbox for 
\"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" returns successfully" Feb 9 19:17:50.221097 systemd[1]: Created slice kubepods-burstable-podde63a6be_88da_4526_ade0_3a54a15b17c9.slice. Feb 9 19:17:50.231905 env[1649]: time="2024-02-09T19:17:50.230249487Z" level=info msg="RemovePodSandbox for \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\"" Feb 9 19:17:50.231905 env[1649]: time="2024-02-09T19:17:50.230335998Z" level=info msg="Forcibly stopping sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\"" Feb 9 19:17:50.231905 env[1649]: time="2024-02-09T19:17:50.230526542Z" level=info msg="TearDown network for sandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" successfully" Feb 9 19:17:50.238561 env[1649]: time="2024-02-09T19:17:50.238369874Z" level=info msg="RemovePodSandbox \"67e5d8fbd3382dec6eb286961c51b0872e1861ea43eea3120bf3d6edbccb7d88\" returns successfully" Feb 9 19:17:50.241402 env[1649]: time="2024-02-09T19:17:50.241324254Z" level=info msg="StopPodSandbox for \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\"" Feb 9 19:17:50.241848 env[1649]: time="2024-02-09T19:17:50.241712421Z" level=info msg="TearDown network for sandbox \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" successfully" Feb 9 19:17:50.242061 env[1649]: time="2024-02-09T19:17:50.242020389Z" level=info msg="StopPodSandbox for \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" returns successfully" Feb 9 19:17:50.249080 env[1649]: time="2024-02-09T19:17:50.248157220Z" level=info msg="RemovePodSandbox for \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\"" Feb 9 19:17:50.249080 env[1649]: time="2024-02-09T19:17:50.248240659Z" level=info msg="Forcibly stopping sandbox \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\"" Feb 9 19:17:50.249080 env[1649]: time="2024-02-09T19:17:50.248420354Z" level=info msg="TearDown network for sandbox \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" successfully" Feb 9 19:17:50.256671 kubelet[2727]: I0209 19:17:50.256618 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-hostproc\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.256920 kubelet[2727]: I0209 19:17:50.256714 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-bpf-maps\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.256920 kubelet[2727]: I0209 19:17:50.256824 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-ipsec-secrets\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.256920 kubelet[2727]: I0209 19:17:50.256873 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-run\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " 
pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257103 kubelet[2727]: I0209 19:17:50.256947 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cni-path\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257103 kubelet[2727]: I0209 19:17:50.257015 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-lib-modules\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257103 kubelet[2727]: I0209 19:17:50.257082 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-hubble-tls\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257302 kubelet[2727]: I0209 19:17:50.257131 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-cgroup\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257302 kubelet[2727]: I0209 19:17:50.257198 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-xtables-lock\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257302 kubelet[2727]: I0209 19:17:50.257267 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-config-path\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257471 kubelet[2727]: I0209 19:17:50.257336 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpsbs\" (UniqueName: \"kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-kube-api-access-fpsbs\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257471 kubelet[2727]: I0209 19:17:50.257389 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-etc-cni-netd\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257471 kubelet[2727]: I0209 19:17:50.257456 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-clustermesh-secrets\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257672 kubelet[2727]: I0209 19:17:50.257546 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-net\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257672 kubelet[2727]: I0209 19:17:50.257617 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-kernel\") pod \"cilium-nb4j2\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " pod="kube-system/cilium-nb4j2" Feb 9 19:17:50.257979 env[1649]: time="2024-02-09T19:17:50.257925809Z" level=info msg="RemovePodSandbox \"2c4ecff28572b86737bb856fd8df17e5b70d344864e6ebb987ca095f06c40de6\" returns successfully" Feb 9 19:17:50.429407 sshd[4477]: Accepted publickey for core from 147.75.109.163 port 49936 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:50.432744 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:50.435963 kubelet[2727]: E0209 19:17:50.435907 2727 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:17:50.443041 systemd-logind[1638]: New session 26 of user core. Feb 9 19:17:50.446456 systemd[1]: Started session-26.scope. Feb 9 19:17:50.528985 env[1649]: time="2024-02-09T19:17:50.528928380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nb4j2,Uid:de63a6be-88da-4526-ade0-3a54a15b17c9,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:50.567594 env[1649]: time="2024-02-09T19:17:50.567109748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:50.567594 env[1649]: time="2024-02-09T19:17:50.567180863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:50.567594 env[1649]: time="2024-02-09T19:17:50.567207072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:50.568296 env[1649]: time="2024-02-09T19:17:50.568177057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb pid=4502 runtime=io.containerd.runc.v2 Feb 9 19:17:50.611328 systemd[1]: Started cri-containerd-730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb.scope. 
Feb 9 19:17:50.683085 env[1649]: time="2024-02-09T19:17:50.682054235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nb4j2,Uid:de63a6be-88da-4526-ade0-3a54a15b17c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb\"" Feb 9 19:17:50.692592 env[1649]: time="2024-02-09T19:17:50.692523507Z" level=info msg="CreateContainer within sandbox \"730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:17:50.734668 env[1649]: time="2024-02-09T19:17:50.734604784Z" level=info msg="CreateContainer within sandbox \"730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\"" Feb 9 19:17:50.735882 env[1649]: time="2024-02-09T19:17:50.735832947Z" level=info msg="StartContainer for \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\"" Feb 9 19:17:50.776022 systemd[1]: Started cri-containerd-f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1.scope. Feb 9 19:17:50.811122 systemd[1]: cri-containerd-f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1.scope: Deactivated successfully. Feb 9 19:17:50.829492 sshd[4477]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:50.835183 systemd-logind[1638]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:17:50.835509 systemd[1]: sshd@25-172.31.23.244:22-147.75.109.163:49936.service: Deactivated successfully. Feb 9 19:17:50.837339 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:17:50.839708 systemd-logind[1638]: Removed session 26. Feb 9 19:17:50.857543 env[1649]: time="2024-02-09T19:17:50.857345010Z" level=info msg="shim disconnected" id=f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1 Feb 9 19:17:50.857990 env[1649]: time="2024-02-09T19:17:50.857541193Z" level=warning msg="cleaning up after shim disconnected" id=f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1 namespace=k8s.io Feb 9 19:17:50.857990 env[1649]: time="2024-02-09T19:17:50.857972538Z" level=info msg="cleaning up dead shim" Feb 9 19:17:50.863192 systemd[1]: Started sshd@26-172.31.23.244:22-147.75.109.163:49950.service. 
Feb 9 19:17:50.900020 env[1649]: time="2024-02-09T19:17:50.899938298Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4566 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:17:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:17:50.900551 env[1649]: time="2024-02-09T19:17:50.900398096Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Feb 9 19:17:50.901799 env[1649]: time="2024-02-09T19:17:50.901718182Z" level=error msg="Failed to pipe stdout of container \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\"" error="reading from a closed fifo" Feb 9 19:17:50.901992 env[1649]: time="2024-02-09T19:17:50.901720331Z" level=error msg="Failed to pipe stderr of container \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\"" error="reading from a closed fifo" Feb 9 19:17:50.904566 env[1649]: time="2024-02-09T19:17:50.904481492Z" level=error msg="StartContainer for \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:17:50.905209 kubelet[2727]: E0209 19:17:50.905174 2727 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1" Feb 9 19:17:50.907071 kubelet[2727]: E0209 19:17:50.905467 2727 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:17:50.907071 kubelet[2727]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:17:50.907071 kubelet[2727]: rm /hostbin/cilium-mount Feb 9 19:17:50.907367 kubelet[2727]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fpsbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-nb4j2_kube-system(de63a6be-88da-4526-ade0-3a54a15b17c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:17:50.907367 kubelet[2727]: E0209 19:17:50.905559 2727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nb4j2" podUID="de63a6be-88da-4526-ade0-3a54a15b17c9" Feb 9 19:17:51.054957 sshd[4570]: Accepted publickey for core from 147.75.109.163 port 49950 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:51.057581 sshd[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:51.065228 systemd-logind[1638]: New session 27 of user core. Feb 9 19:17:51.066714 systemd[1]: Started session-27.scope. Feb 9 19:17:51.717961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb-shm.mount: Deactivated successfully. Feb 9 19:17:51.718238 env[1649]: time="2024-02-09T19:17:51.714886552Z" level=info msg="StopPodSandbox for \"730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb\"" Feb 9 19:17:51.718238 env[1649]: time="2024-02-09T19:17:51.714985820Z" level=info msg="Container to stop \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:51.735670 systemd[1]: cri-containerd-730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb.scope: Deactivated successfully. Feb 9 19:17:51.794403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb-rootfs.mount: Deactivated successfully.
Feb 9 19:17:51.815678 env[1649]: time="2024-02-09T19:17:51.815581678Z" level=info msg="shim disconnected" id=730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb Feb 9 19:17:51.815678 env[1649]: time="2024-02-09T19:17:51.815667685Z" level=warning msg="cleaning up after shim disconnected" id=730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb namespace=k8s.io Feb 9 19:17:51.816114 env[1649]: time="2024-02-09T19:17:51.815693150Z" level=info msg="cleaning up dead shim" Feb 9 19:17:51.831809 env[1649]: time="2024-02-09T19:17:51.831691485Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4604 runtime=io.containerd.runc.v2\n" Feb 9 19:17:51.832307 env[1649]: time="2024-02-09T19:17:51.832259035Z" level=info msg="TearDown network for sandbox \"730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb\" successfully" Feb 9 19:17:51.832413 env[1649]: time="2024-02-09T19:17:51.832308117Z" level=info msg="StopPodSandbox for \"730881f2824763f5ef725c7ddda6e5ca6c582d00a57b94482f4fe0d6b42d96cb\" returns successfully" Feb 9 19:17:51.876638 kubelet[2727]: I0209 19:17:51.876582 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-cgroup\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.877350 kubelet[2727]: I0209 19:17:51.876701 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.877350 kubelet[2727]: I0209 19:17:51.876843 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-bpf-maps\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.877350 kubelet[2727]: I0209 19:17:51.876917 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.877350 kubelet[2727]: I0209 19:17:51.877004 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.877350 kubelet[2727]: I0209 19:17:51.877038 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cni-path\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.877350 kubelet[2727]: I0209 19:17:51.877118 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-config-path\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.881955 kubelet[2727]: I0209 19:17:51.881921 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-hubble-tls\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.883021 kubelet[2727]: I0209 19:17:51.882990 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-clustermesh-secrets\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.883230 kubelet[2727]: I0209 19:17:51.883186 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:17:51.883347 kubelet[2727]: I0209 19:17:51.883212 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-xtables-lock\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.883506 kubelet[2727]: I0209 19:17:51.883485 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-etc-cni-netd\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.883679 kubelet[2727]: I0209 19:17:51.883657 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-net\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.883932 kubelet[2727]: I0209 19:17:51.883909 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-ipsec-secrets\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.884096 kubelet[2727]: I0209 19:17:51.884061 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-run\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.884984 kubelet[2727]: I0209 19:17:51.884951 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-hostproc\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.885239 kubelet[2727]: I0209 19:17:51.885202 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-lib-modules\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.885333 kubelet[2727]: I0209 19:17:51.885267 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpsbs\" (UniqueName: \"kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-kube-api-access-fpsbs\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.885333 kubelet[2727]: I0209 19:17:51.885313 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-kernel\") pod \"de63a6be-88da-4526-ade0-3a54a15b17c9\" (UID: \"de63a6be-88da-4526-ade0-3a54a15b17c9\") " Feb 9 19:17:51.885459 kubelet[2727]: I0209 19:17:51.885381 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-cgroup\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.885459 kubelet[2727]: I0209 19:17:51.885413 2727 
reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cni-path\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.885459 kubelet[2727]: I0209 19:17:51.885440 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-config-path\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.885464 2727 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-bpf-maps\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.885498 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.885119 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.884239 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.884285 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.884313 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.884208 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.885653 kubelet[2727]: I0209 19:17:51.885609 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:51.895224 systemd[1]: var-lib-kubelet-pods-de63a6be\x2d88da\x2d4526\x2dade0\x2d3a54a15b17c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:17:51.899156 systemd[1]: var-lib-kubelet-pods-de63a6be\x2d88da\x2d4526\x2dade0\x2d3a54a15b17c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:17:51.904223 kubelet[2727]: I0209 19:17:51.904171 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:17:51.904583 kubelet[2727]: I0209 19:17:51.904499 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:17:51.905023 kubelet[2727]: I0209 19:17:51.904911 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:17:51.905669 kubelet[2727]: I0209 19:17:51.905623 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-kube-api-access-fpsbs" (OuterVolumeSpecName: "kube-api-access-fpsbs") pod "de63a6be-88da-4526-ade0-3a54a15b17c9" (UID: "de63a6be-88da-4526-ade0-3a54a15b17c9"). InnerVolumeSpecName "kube-api-access-fpsbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:17:51.986149 kubelet[2727]: I0209 19:17:51.986011 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-ipsec-secrets\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.986149 kubelet[2727]: I0209 19:17:51.986063 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-cilium-run\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.986149 kubelet[2727]: I0209 19:17:51.986090 2727 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-lib-modules\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.986149 kubelet[2727]: I0209 19:17:51.986116 2727 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fpsbs\" (UniqueName: \"kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-kube-api-access-fpsbs\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.987863 kubelet[2727]: I0209 19:17:51.987832 2727 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-hostproc\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.988040 kubelet[2727]: I0209 19:17:51.988019 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-kernel\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.988228 kubelet[2727]: I0209 19:17:51.988208 2727 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de63a6be-88da-4526-ade0-3a54a15b17c9-hubble-tls\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.988427 kubelet[2727]: I0209 19:17:51.988395 2727 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de63a6be-88da-4526-ade0-3a54a15b17c9-clustermesh-secrets\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.988572 kubelet[2727]: I0209 19:17:51.988552 2727 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-etc-cni-netd\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.988726 kubelet[2727]: I0209 19:17:51.988704 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-host-proc-sys-net\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:51.988908 kubelet[2727]: I0209 19:17:51.988888 2727 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de63a6be-88da-4526-ade0-3a54a15b17c9-xtables-lock\") on node \"ip-172-31-23-244\" DevicePath \"\"" Feb 9 19:17:52.260320 systemd[1]: Removed slice kubepods-burstable-podde63a6be_88da_4526_ade0_3a54a15b17c9.slice. Feb 9 19:17:52.381329 systemd[1]: var-lib-kubelet-pods-de63a6be\x2d88da\x2d4526\x2dade0\x2d3a54a15b17c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpsbs.mount: Deactivated successfully. 
Feb 9 19:17:52.381547 systemd[1]: var-lib-kubelet-pods-de63a6be\x2d88da\x2d4526\x2dade0\x2d3a54a15b17c9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:17:52.720833 kubelet[2727]: I0209 19:17:52.720744 2727 scope.go:117] "RemoveContainer" containerID="f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1"
Feb 9 19:17:52.725356 env[1649]: time="2024-02-09T19:17:52.725300775Z" level=info msg="RemoveContainer for \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\""
Feb 9 19:17:52.732946 env[1649]: time="2024-02-09T19:17:52.732867479Z" level=info msg="RemoveContainer for \"f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1\" returns successfully"
Feb 9 19:17:52.781493 kubelet[2727]: I0209 19:17:52.781446 2727 topology_manager.go:215] "Topology Admit Handler" podUID="7f124647-edae-4831-98d4-a06a26b620a7" podNamespace="kube-system" podName="cilium-vfgqt"
Feb 9 19:17:52.781760 kubelet[2727]: E0209 19:17:52.781736 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de63a6be-88da-4526-ade0-3a54a15b17c9" containerName="mount-cgroup"
Feb 9 19:17:52.781985 kubelet[2727]: I0209 19:17:52.781962 2727 memory_manager.go:346] "RemoveStaleState removing state" podUID="de63a6be-88da-4526-ade0-3a54a15b17c9" containerName="mount-cgroup"
Feb 9 19:17:52.792976 systemd[1]: Created slice kubepods-burstable-pod7f124647_edae_4831_98d4_a06a26b620a7.slice.
Feb 9 19:17:52.894740 kubelet[2727]: I0209 19:17:52.894701 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-xtables-lock\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.895500 kubelet[2727]: I0209 19:17:52.895472 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f124647-edae-4831-98d4-a06a26b620a7-cilium-config-path\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.895872 kubelet[2727]: I0209 19:17:52.895847 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-host-proc-sys-kernel\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.896088 kubelet[2727]: I0209 19:17:52.896066 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7f124647-edae-4831-98d4-a06a26b620a7-cilium-ipsec-secrets\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.896234 kubelet[2727]: I0209 19:17:52.896212 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28k2z\" (UniqueName: \"kubernetes.io/projected/7f124647-edae-4831-98d4-a06a26b620a7-kube-api-access-28k2z\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.896389 kubelet[2727]: I0209 19:17:52.896368 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-hostproc\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.896545 kubelet[2727]: I0209 19:17:52.896524 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f124647-edae-4831-98d4-a06a26b620a7-clustermesh-secrets\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.896699 kubelet[2727]: I0209 19:17:52.896678 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-cilium-run\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.896977 kubelet[2727]: I0209 19:17:52.896953 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-cilium-cgroup\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.897118 kubelet[2727]: I0209 19:17:52.897098 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-lib-modules\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.897325 kubelet[2727]: I0209 19:17:52.897305 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f124647-edae-4831-98d4-a06a26b620a7-hubble-tls\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.897477 kubelet[2727]: I0209 19:17:52.897456 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-bpf-maps\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.897688 kubelet[2727]: I0209 19:17:52.897667 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-cni-path\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.897842 kubelet[2727]: I0209 19:17:52.897822 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-etc-cni-netd\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:52.897989 kubelet[2727]: I0209 19:17:52.897968 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f124647-edae-4831-98d4-a06a26b620a7-host-proc-sys-net\") pod \"cilium-vfgqt\" (UID: \"7f124647-edae-4831-98d4-a06a26b620a7\") " pod="kube-system/cilium-vfgqt"
Feb 9 19:17:53.099915 env[1649]: time="2024-02-09T19:17:53.099839667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfgqt,Uid:7f124647-edae-4831-98d4-a06a26b620a7,Namespace:kube-system,Attempt:0,}"
Feb 9 19:17:53.122161 env[1649]: time="2024-02-09T19:17:53.122049928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:17:53.122412 env[1649]: time="2024-02-09T19:17:53.122125291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:17:53.122412 env[1649]: time="2024-02-09T19:17:53.122152304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:17:53.122658 env[1649]: time="2024-02-09T19:17:53.122525891Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e pid=4633 runtime=io.containerd.runc.v2
Feb 9 19:17:53.146844 systemd[1]: Started cri-containerd-c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e.scope.
Feb 9 19:17:53.201732 env[1649]: time="2024-02-09T19:17:53.201667516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfgqt,Uid:7f124647-edae-4831-98d4-a06a26b620a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\""
Feb 9 19:17:53.210933 env[1649]: time="2024-02-09T19:17:53.210878432Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:17:53.230326 env[1649]: time="2024-02-09T19:17:53.230240681Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41\""
Feb 9 19:17:53.231287 env[1649]: time="2024-02-09T19:17:53.231213559Z" level=info msg="StartContainer for \"e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41\""
Feb 9 19:17:53.264136 systemd[1]: Started cri-containerd-e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41.scope.
Feb 9 19:17:53.345116 env[1649]: time="2024-02-09T19:17:53.345040071Z" level=info msg="StartContainer for \"e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41\" returns successfully"
Feb 9 19:17:53.408849 systemd[1]: cri-containerd-e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41.scope: Deactivated successfully.
Feb 9 19:17:53.454078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41-rootfs.mount: Deactivated successfully.
Feb 9 19:17:53.477429 env[1649]: time="2024-02-09T19:17:53.477336067Z" level=info msg="shim disconnected" id=e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41
Feb 9 19:17:53.477429 env[1649]: time="2024-02-09T19:17:53.477407638Z" level=warning msg="cleaning up after shim disconnected" id=e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41 namespace=k8s.io
Feb 9 19:17:53.477429 env[1649]: time="2024-02-09T19:17:53.477430055Z" level=info msg="cleaning up dead shim"
Feb 9 19:17:53.500982 env[1649]: time="2024-02-09T19:17:53.500924367Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4715 runtime=io.containerd.runc.v2\n"
Feb 9 19:17:53.582932 kubelet[2727]: I0209 19:17:53.582885 2727 setters.go:552] "Node became not ready" node="ip-172-31-23-244" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T19:17:53Z","lastTransitionTime":"2024-02-09T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 9 19:17:53.731432 env[1649]: time="2024-02-09T19:17:53.730708170Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:17:53.750668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405186651.mount: Deactivated successfully.
Feb 9 19:17:53.758695 env[1649]: time="2024-02-09T19:17:53.758614612Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93\""
Feb 9 19:17:53.761620 env[1649]: time="2024-02-09T19:17:53.761549976Z" level=info msg="StartContainer for \"dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93\""
Feb 9 19:17:53.802929 systemd[1]: Started cri-containerd-dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93.scope.
Feb 9 19:17:53.861276 env[1649]: time="2024-02-09T19:17:53.861212604Z" level=info msg="StartContainer for \"dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93\" returns successfully"
Feb 9 19:17:53.876381 systemd[1]: cri-containerd-dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93.scope: Deactivated successfully.
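The "Node became not ready" record above is the kubelet flipping the node's Ready condition to False while the CNI plugin reinitializes. An illustrative check with the official Kubernetes Python client, a sketch only (node name taken from the log; not part of the logged tooling):

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    node = client.CoreV1Api().read_node("ip-172-31-23-244")
    for cond in node.status.conditions:
        if cond.type == "Ready":
            # During the window logged above this prints:
            # False KubeletNotReady container runtime network not ready: ...
            print(cond.status, cond.reason, cond.message)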
Feb 9 19:17:53.925154 env[1649]: time="2024-02-09T19:17:53.925086902Z" level=info msg="shim disconnected" id=dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93
Feb 9 19:17:53.925490 env[1649]: time="2024-02-09T19:17:53.925156649Z" level=warning msg="cleaning up after shim disconnected" id=dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93 namespace=k8s.io
Feb 9 19:17:53.925490 env[1649]: time="2024-02-09T19:17:53.925180050Z" level=info msg="cleaning up dead shim"
Feb 9 19:17:53.940650 env[1649]: time="2024-02-09T19:17:53.940577834Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4777 runtime=io.containerd.runc.v2\n"
Feb 9 19:17:53.968418 kubelet[2727]: W0209 19:17:53.968333 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde63a6be_88da_4526_ade0_3a54a15b17c9.slice/cri-containerd-f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1.scope WatchSource:0}: container "f5718cfcd7bde0621d2ba225f008a333b3aed004d06f36286474e4fc3938e2d1" in namespace "k8s.io": not found
Feb 9 19:17:54.252775 kubelet[2727]: I0209 19:17:54.252721 2727 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="de63a6be-88da-4526-ade0-3a54a15b17c9" path="/var/lib/kubelet/pods/de63a6be-88da-4526-ade0-3a54a15b17c9/volumes"
Feb 9 19:17:54.742679 env[1649]: time="2024-02-09T19:17:54.742625288Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:17:54.778098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225002486.mount: Deactivated successfully.
Feb 9 19:17:54.790510 env[1649]: time="2024-02-09T19:17:54.790317470Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779\""
Feb 9 19:17:54.791470 env[1649]: time="2024-02-09T19:17:54.791417026Z" level=info msg="StartContainer for \"929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779\""
Feb 9 19:17:54.830880 systemd[1]: Started cri-containerd-929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779.scope.
Feb 9 19:17:54.908528 systemd[1]: cri-containerd-929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779.scope: Deactivated successfully.
Feb 9 19:17:54.910301 env[1649]: time="2024-02-09T19:17:54.910211036Z" level=info msg="StartContainer for \"929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779\" returns successfully"
Feb 9 19:17:54.959634 env[1649]: time="2024-02-09T19:17:54.959549684Z" level=info msg="shim disconnected" id=929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779
Feb 9 19:17:54.959634 env[1649]: time="2024-02-09T19:17:54.959622875Z" level=warning msg="cleaning up after shim disconnected" id=929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779 namespace=k8s.io
Feb 9 19:17:54.960096 env[1649]: time="2024-02-09T19:17:54.959645088Z" level=info msg="cleaning up dead shim"
Feb 9 19:17:54.974910 env[1649]: time="2024-02-09T19:17:54.974832510Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4835 runtime=io.containerd.runc.v2\n"
Feb 9 19:17:55.381760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779-rootfs.mount: Deactivated successfully.
Feb 9 19:17:55.436771 kubelet[2727]: E0209 19:17:55.436725 2727 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:17:55.742716 env[1649]: time="2024-02-09T19:17:55.742472281Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:17:55.777240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124671497.mount: Deactivated successfully.
Feb 9 19:17:55.790716 env[1649]: time="2024-02-09T19:17:55.790631408Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401\""
Feb 9 19:17:55.791874 env[1649]: time="2024-02-09T19:17:55.791724292Z" level=info msg="StartContainer for \"ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401\""
Feb 9 19:17:55.831430 systemd[1]: Started cri-containerd-ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401.scope.
Feb 9 19:17:55.890631 systemd[1]: cri-containerd-ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401.scope: Deactivated successfully.
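The mount-bpf-fs init container that just ran and exited exists to ensure a BPF filesystem is mounted at /sys/fs/bpf before the agent starts. A hedged sketch of the equivalent host-side check, using standard /proc/mounts parsing (the function name is illustrative):

    def bpffs_mounted(mounts_path: str = "/proc/mounts") -> bool:
        # /proc/mounts fields: device mountpoint fstype options dump pass;
        # a mounted BPF filesystem reports fstype "bpf".
        with open(mounts_path) as f:
            return any(line.split()[2] == "bpf" for line in f)

    print(bpffs_mounted())  # expected True on a node where mount-bpf-fs has run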
Feb 9 19:17:55.893844 env[1649]: time="2024-02-09T19:17:55.893627283Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f124647_edae_4831_98d4_a06a26b620a7.slice/cri-containerd-ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401.scope/memory.events\": no such file or directory"
Feb 9 19:17:55.898752 env[1649]: time="2024-02-09T19:17:55.898690083Z" level=info msg="StartContainer for \"ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401\" returns successfully"
Feb 9 19:17:55.945669 env[1649]: time="2024-02-09T19:17:55.945607220Z" level=info msg="shim disconnected" id=ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401
Feb 9 19:17:55.946170 env[1649]: time="2024-02-09T19:17:55.946125785Z" level=warning msg="cleaning up after shim disconnected" id=ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401 namespace=k8s.io
Feb 9 19:17:55.946311 env[1649]: time="2024-02-09T19:17:55.946283088Z" level=info msg="cleaning up dead shim"
Feb 9 19:17:55.960585 env[1649]: time="2024-02-09T19:17:55.960529905Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4893 runtime=io.containerd.runc.v2\n"
Feb 9 19:17:56.381685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401-rootfs.mount: Deactivated successfully.
Feb 9 19:17:56.753257 env[1649]: time="2024-02-09T19:17:56.752731952Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:17:56.793656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354999751.mount: Deactivated successfully.
Feb 9 19:17:56.801507 env[1649]: time="2024-02-09T19:17:56.801437647Z" level=info msg="CreateContainer within sandbox \"c29131212e3dd32f495f6da539224cf3d75bc9d956f4a941e0ab91b2e9133f7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3e3313137d5acbeb0c2989c8560c814acef04494d6a08010bc2fde995cb78e0\""
Feb 9 19:17:56.804137 env[1649]: time="2024-02-09T19:17:56.804079519Z" level=info msg="StartContainer for \"e3e3313137d5acbeb0c2989c8560c814acef04494d6a08010bc2fde995cb78e0\""
Feb 9 19:17:56.843079 systemd[1]: Started cri-containerd-e3e3313137d5acbeb0c2989c8560c814acef04494d6a08010bc2fde995cb78e0.scope.
Feb 9 19:17:56.914575 env[1649]: time="2024-02-09T19:17:56.914491865Z" level=info msg="StartContainer for \"e3e3313137d5acbeb0c2989c8560c814acef04494d6a08010bc2fde995cb78e0\" returns successfully"
Feb 9 19:17:57.087625 kubelet[2727]: W0209 19:17:57.087563 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f124647_edae_4831_98d4_a06a26b620a7.slice/cri-containerd-e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41.scope WatchSource:0}: task e2a94b6dbe742f779d78857786c22cc0fcb8d4caca4d2076068c076049109c41 not found: not found
Feb 9 19:17:57.248055 kubelet[2727]: E0209 19:17:57.247987 2727 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-q7dxc" podUID="de1930ed-0072-4756-b109-5b94cee67150"
Feb 9 19:17:57.764861 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 19:17:59.247314 kubelet[2727]: E0209 19:17:59.247250 2727 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-q7dxc" podUID="de1930ed-0072-4756-b109-5b94cee67150"
Feb 9 19:18:00.205921 kubelet[2727]: W0209 19:18:00.205759 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f124647_edae_4831_98d4_a06a26b620a7.slice/cri-containerd-dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93.scope WatchSource:0}: task dce5463a0545e1aa1d0dc59f70d842a0420dd63d07913f7d81b506aab3295c93 not found: not found
Feb 9 19:18:01.756433 systemd-networkd[1449]: lxc_health: Link UP
Feb 9 19:18:01.762958 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:18:01.759425 systemd-networkd[1449]: lxc_health: Gained carrier
Feb 9 19:18:01.773757 (udev-worker)[5441]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:18:02.162206 systemd[1]: run-containerd-runc-k8s.io-e3e3313137d5acbeb0c2989c8560c814acef04494d6a08010bc2fde995cb78e0-runc.6aHygV.mount: Deactivated successfully.
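The lxc_health interface coming up above is the device Cilium uses for its node health checks; once the agent is running, its link state is visible from the host through standard sysfs. An illustrative one-liner (the sysfs path is standard; the interface exists only while Cilium manages the node):

    from pathlib import Path

    # Operational state of the health interface seen in the log above;
    # prints "up" shortly after the "lxc_health: Gained carrier" record.
    print(Path("/sys/class/net/lxc_health/operstate").read_text().strip())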
Feb 9 19:18:03.142447 kubelet[2727]: I0209 19:18:03.142376 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vfgqt" podStartSLOduration=11.142322844 podCreationTimestamp="2024-02-09 19:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:17:57.781577225 +0000 UTC m=+127.847711781" watchObservedRunningTime="2024-02-09 19:18:03.142322844 +0000 UTC m=+133.208457364"
Feb 9 19:18:03.324945 kubelet[2727]: W0209 19:18:03.324871 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f124647_edae_4831_98d4_a06a26b620a7.slice/cri-containerd-929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779.scope WatchSource:0}: task 929f5b7afed4f6f66497c5030014570b0d1aa9c2981d6933cfd9855025b49779 not found: not found
Feb 9 19:18:03.680600 systemd-networkd[1449]: lxc_health: Gained IPv6LL
Feb 9 19:18:04.504268 systemd[1]: run-containerd-runc-k8s.io-e3e3313137d5acbeb0c2989c8560c814acef04494d6a08010bc2fde995cb78e0-runc.m24Msm.mount: Deactivated successfully.
Feb 9 19:18:06.433707 kubelet[2727]: W0209 19:18:06.433635 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f124647_edae_4831_98d4_a06a26b620a7.slice/cri-containerd-ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401.scope WatchSource:0}: task ba5f52c6c5f14af5beb016a957216321cdb28fc0383d0e93ae3031e148d96401 not found: not found
Feb 9 19:18:06.834699 systemd[1]: run-containerd-runc-k8s.io-e3e3313137d5acbeb0c2989c8560c814acef04494d6a08010bc2fde995cb78e0-runc.oUBssF.mount: Deactivated successfully.
Feb 9 19:18:07.040944 sshd[4570]: pam_unix(sshd:session): session closed for user core
Feb 9 19:18:07.046695 systemd[1]: sshd@26-172.31.23.244:22-147.75.109.163:49950.service: Deactivated successfully.
Feb 9 19:18:07.048205 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 19:18:07.051291 systemd-logind[1638]: Session 27 logged out. Waiting for processes to exit.
Feb 9 19:18:07.054601 systemd-logind[1638]: Removed session 27.
Feb 9 19:18:21.161395 systemd[1]: cri-containerd-0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368.scope: Deactivated successfully.
Feb 9 19:18:21.161948 systemd[1]: cri-containerd-0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368.scope: Consumed 5.470s CPU time.
Feb 9 19:18:21.202278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368-rootfs.mount: Deactivated successfully.
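The podStartSLOduration in the latency-tracker record above is simply the gap between podCreationTimestamp (19:17:52) and watchObservedRunningTime (19:18:03.142322844). A quick check of that arithmetic, with the timestamps truncated to microseconds since Python's datetime carries no nanoseconds:

    from datetime import datetime, timezone

    created = datetime(2024, 2, 9, 19, 17, 52, tzinfo=timezone.utc)
    observed = datetime(2024, 2, 9, 19, 18, 3, 142322, tzinfo=timezone.utc)
    # 11.142322, matching podStartSLOduration=11.142322844 up to truncation
    print((observed - created).total_seconds())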
Feb 9 19:18:21.223838 env[1649]: time="2024-02-09T19:18:21.223728619Z" level=info msg="shim disconnected" id=0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368
Feb 9 19:18:21.224607 env[1649]: time="2024-02-09T19:18:21.224563186Z" level=warning msg="cleaning up after shim disconnected" id=0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368 namespace=k8s.io
Feb 9 19:18:21.224731 env[1649]: time="2024-02-09T19:18:21.224702729Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:21.240163 env[1649]: time="2024-02-09T19:18:21.240092455Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5555 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:21.823232 kubelet[2727]: I0209 19:18:21.823199 2727 scope.go:117] "RemoveContainer" containerID="0fb4c69a747d8248eac3e195c019acbcd4affb451f43bd8819cdcf8f630f8368"
Feb 9 19:18:21.828417 env[1649]: time="2024-02-09T19:18:21.828322932Z" level=info msg="CreateContainer within sandbox \"29b37ee9c503f013d4e8a1e1329ac3d9c2b00be3e335ad18f662bc010b13fa46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 19:18:21.852291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183940470.mount: Deactivated successfully.
Feb 9 19:18:21.863840 env[1649]: time="2024-02-09T19:18:21.863751161Z" level=info msg="CreateContainer within sandbox \"29b37ee9c503f013d4e8a1e1329ac3d9c2b00be3e335ad18f662bc010b13fa46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"93be932030c4448bc706a10f2f1a5918f8b2ed25d11238fe36a4a841bd44de87\""
Feb 9 19:18:21.865058 env[1649]: time="2024-02-09T19:18:21.865007644Z" level=info msg="StartContainer for \"93be932030c4448bc706a10f2f1a5918f8b2ed25d11238fe36a4a841bd44de87\""
Feb 9 19:18:21.895976 systemd[1]: Started cri-containerd-93be932030c4448bc706a10f2f1a5918f8b2ed25d11238fe36a4a841bd44de87.scope.
Feb 9 19:18:21.990238 env[1649]: time="2024-02-09T19:18:21.990117182Z" level=info msg="StartContainer for \"93be932030c4448bc706a10f2f1a5918f8b2ed25d11238fe36a4a841bd44de87\" returns successfully"
Feb 9 19:18:22.860565 kubelet[2727]: E0209 19:18:22.860526 2727 request.go:1116] Unexpected error when reading response body: context deadline exceeded
Feb 9 19:18:22.861369 kubelet[2727]: E0209 19:18:22.861339 2727 controller.go:193] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: context deadline exceeded"
Feb 9 19:18:25.935969 systemd[1]: cri-containerd-4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873.scope: Deactivated successfully.
Feb 9 19:18:25.936504 systemd[1]: cri-containerd-4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873.scope: Consumed 3.028s CPU time.
Feb 9 19:18:25.976908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873-rootfs.mount: Deactivated successfully.
Feb 9 19:18:25.991269 env[1649]: time="2024-02-09T19:18:25.991196763Z" level=info msg="shim disconnected" id=4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873
Feb 9 19:18:25.992027 env[1649]: time="2024-02-09T19:18:25.991269222Z" level=warning msg="cleaning up after shim disconnected" id=4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873 namespace=k8s.io
Feb 9 19:18:25.992027 env[1649]: time="2024-02-09T19:18:25.991292143Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:26.004967 env[1649]: time="2024-02-09T19:18:26.004880593Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5615 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:26.840614 kubelet[2727]: I0209 19:18:26.840574 2727 scope.go:117] "RemoveContainer" containerID="4ab9146fdcbdd703d71c21da400728e6245a3366056b5f2c0481dd11102ac873"
Feb 9 19:18:26.844588 env[1649]: time="2024-02-09T19:18:26.844516979Z" level=info msg="CreateContainer within sandbox \"3c984148ee2c585f5ebb8d6a5c4750ee1fa3220f6e710f197e14271cc695fbe7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 19:18:26.867988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162633805.mount: Deactivated successfully.
Feb 9 19:18:26.876974 env[1649]: time="2024-02-09T19:18:26.876880601Z" level=info msg="CreateContainer within sandbox \"3c984148ee2c585f5ebb8d6a5c4750ee1fa3220f6e710f197e14271cc695fbe7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9f7b3553715ecb39367fa846e57158451346df46a722228dcb3020b9c2ae32f7\""
Feb 9 19:18:26.877837 env[1649]: time="2024-02-09T19:18:26.877746706Z" level=info msg="StartContainer for \"9f7b3553715ecb39367fa846e57158451346df46a722228dcb3020b9c2ae32f7\""
Feb 9 19:18:26.918078 systemd[1]: Started cri-containerd-9f7b3553715ecb39367fa846e57158451346df46a722228dcb3020b9c2ae32f7.scope.
Feb 9 19:18:27.003001 env[1649]: time="2024-02-09T19:18:27.002937295Z" level=info msg="StartContainer for \"9f7b3553715ecb39367fa846e57158451346df46a722228dcb3020b9c2ae32f7\" returns successfully"
Feb 9 19:18:32.861694 kubelet[2727]: E0209 19:18:32.861651 2727 controller.go:193] "Failed to update lease" err="Put \"https://172.31.23.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-244?timeout=10s\": context deadline exceeded"
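The lease the kubelet keeps failing to PUT in the last record is the node heartbeat object in the kube-node-lease namespace; if it goes stale past the control plane's grace period, the node is marked NotReady. An illustrative read of that Lease with the official Kubernetes Python client, a sketch only (names taken from the URL in the log):

    from kubernetes import client, config

    config.load_kube_config()
    lease = client.CoordinationV1Api().read_namespaced_lease(
        name="ip-172-31-23-244", namespace="kube-node-lease")
    # renew_time should advance every few seconds while the heartbeat is healthy;
    # during the errors logged above it stalls instead.
    print(lease.spec.holder_identity, lease.spec.renew_time)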