Feb 9 09:45:13.952456 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 09:45:13.952492 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:45:13.952516 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:45:13.952531 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 09:45:13.952545 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:45:13.952559 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 09:45:13.952575 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 09:45:13.952589 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 09:45:13.952603 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 09:45:13.952617 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 09:45:13.952635 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 09:45:13.952649 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 09:45:13.952663 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 09:45:13.952677 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 09:45:13.952693 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 09:45:13.952713 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 09:45:13.952727 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 09:45:13.952742 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 09:45:13.952756 kernel: printk: bootconsole [uart0] enabled
Feb 9 09:45:13.952771 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:45:13.952786 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:45:13.952801 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 09:45:13.952815 kernel: Zone ranges:
Feb 9 09:45:13.952830 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 09:45:13.952844 kernel: DMA32 empty
Feb 9 09:45:13.952859 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 09:45:13.952877 kernel: Movable zone start for each node
Feb 9 09:45:13.952891 kernel: Early memory node ranges
Feb 9 09:45:13.952906 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 09:45:13.952921 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 09:45:13.952935 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 09:45:13.952949 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 09:45:13.952964 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 09:45:13.952978 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 09:45:13.952993 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:45:13.953007 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 09:45:13.953022 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:45:13.953036 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 09:45:13.953055 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:45:13.953070 kernel: psci: Trusted OS migration not required
Feb 9 09:45:13.953108 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:45:13.953126 kernel: ACPI: SRAT not present
Feb 9 09:45:13.953142 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:45:13.953163 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:45:13.953179 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:45:13.953194 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:45:13.953209 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:45:13.953225 kernel: CPU features: detected: Spectre-v2
Feb 9 09:45:13.953240 kernel: CPU features: detected: Spectre-v3a
Feb 9 09:45:13.953255 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:45:13.953271 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:45:13.953286 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:45:13.953302 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 09:45:13.953317 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 09:45:13.953336 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 09:45:13.953453 kernel: Policy zone: Normal
Feb 9 09:45:13.953473 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:45:13.953489 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:45:13.953505 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:45:13.953520 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:45:13.953536 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:45:13.953551 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 09:45:13.953567 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 09:45:13.953583 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:45:13.953603 kernel: trace event string verifier disabled
Feb 9 09:45:13.953619 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:45:13.953635 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:45:13.953651 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:45:13.953666 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:45:13.953682 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:45:13.953697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
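
The kernel command line logged above mixes standard parameters (root=LABEL=ROOT, console=...) with Flatcar- and dracut-specific keys (verity.usrhash, flatcar.oem.id, mount.usr). A minimal sketch of splitting such a line into bare flags and key=value options by reading /proc/cmdline; the parse_cmdline helper is a hypothetical name for illustration, not something Flatcar or dracut ships:

    # Split a kernel command line like the one logged above into bare
    # flags and key=value options. parse_cmdline is a hypothetical helper.
    def parse_cmdline(line: str):
        flags, options = [], {}
        for token in line.split():
            if "=" in token:
                key, _, value = token.partition("=")
                options[key] = value
            else:
                flags.append(token)
        return flags, options

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            flags, options = parse_cmdline(f.read())
        # e.g. options["verity.usrhash"] -> "14ffd934..." on the machine above
        print(flags, options)
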
Feb 9 09:45:13.953713 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:45:13.953728 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:45:13.953743 kernel: GICv3: 96 SPIs implemented
Feb 9 09:45:13.953758 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:45:13.953773 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:45:13.953792 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:45:13.953807 kernel: GICv3: 16 PPIs implemented
Feb 9 09:45:13.953822 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 09:45:13.953837 kernel: ACPI: SRAT not present
Feb 9 09:45:13.953852 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 09:45:13.953867 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:45:13.953883 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:45:13.953898 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 09:45:13.953914 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 09:45:13.953929 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 09:45:13.953944 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 09:45:13.953964 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 09:45:13.953979 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 09:45:13.953995 kernel: Console: colour dummy device 80x25
Feb 9 09:45:13.954010 kernel: printk: console [tty1] enabled
Feb 9 09:45:13.954026 kernel: ACPI: Core revision 20210730
Feb 9 09:45:13.954042 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 09:45:13.954058 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:45:13.954074 kernel: LSM: Security Framework initializing
Feb 9 09:45:13.954089 kernel: SELinux: Initializing.
Feb 9 09:45:13.954105 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.954125 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.954141 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:45:13.954156 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 09:45:13.954172 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 09:45:13.954187 kernel: Remapping and enabling EFI services.
Feb 9 09:45:13.954203 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:45:13.954218 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:45:13.954234 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 09:45:13.954250 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 09:45:13.954269 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 09:45:13.954285 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:45:13.954300 kernel: SMP: Total of 2 processors activated.
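
A quick sanity check of the sched_clock line above: an 83.33 MHz counter ticks once every 1/83.33e6 seconds, about 12 ns, which matches the logged resolution, and the logged wrap interval of 4398046511100 ns is within a few nanoseconds of 2**42 ns (roughly 73 minutes). A short Python verification of that arithmetic:

    # Sanity-check the sched_clock figures logged above.
    freq_hz = 83.33e6
    resolution_ns = 1e9 / freq_hz
    print(round(resolution_ns, 1))     # -> 12.0, as logged
    wrap_ns = 4398046511100
    print(wrap_ns / 1e9 / 60)          # -> ~73.3 minutes between wraps
    print(2**42)                       # -> 4398046511104, within 4 ns of the logged value
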
Feb 9 09:45:13.954316 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:45:13.954331 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 09:45:13.954364 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:45:13.954383 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:45:13.954399 kernel: alternatives: patching kernel code
Feb 9 09:45:13.954415 kernel: devtmpfs: initialized
Feb 9 09:45:13.954435 kernel: KASLR disabled due to lack of seed
Feb 9 09:45:13.954452 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:45:13.954469 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:45:13.954495 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:45:13.954515 kernel: SMBIOS 3.0.0 present.
Feb 9 09:45:13.954531 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 09:45:13.954548 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:45:13.954564 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:45:13.954581 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:45:13.954598 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:45:13.954614 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:45:13.954631 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1
Feb 9 09:45:13.954651 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:45:13.954668 kernel: cpuidle: using governor menu
Feb 9 09:45:13.954684 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:45:13.954701 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:45:13.954717 kernel: ACPI: bus type PCI registered
Feb 9 09:45:13.954738 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:45:13.954754 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:45:13.954771 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:45:13.954787 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:45:13.954804 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:45:13.954820 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:45:13.954836 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:45:13.954853 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:45:13.954869 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:45:13.954889 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:45:13.954905 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:45:13.954921 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:45:13.954938 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:45:13.954954 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:45:13.954970 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:45:13.954987 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:45:13.955003 kernel: ACPI: Interpreter enabled
Feb 9 09:45:13.955019 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:45:13.955040 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:45:13.955056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 09:45:13.955362 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:45:13.955581 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:45:13.955779 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:45:13.955975 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 09:45:13.956171 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 09:45:13.956200 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 09:45:13.956217 kernel: acpiphp: Slot [1] registered
Feb 9 09:45:13.956234 kernel: acpiphp: Slot [2] registered
Feb 9 09:45:13.956250 kernel: acpiphp: Slot [3] registered
Feb 9 09:45:13.956266 kernel: acpiphp: Slot [4] registered
Feb 9 09:45:13.956283 kernel: acpiphp: Slot [5] registered
Feb 9 09:45:13.956299 kernel: acpiphp: Slot [6] registered
Feb 9 09:45:13.956315 kernel: acpiphp: Slot [7] registered
Feb 9 09:45:13.956332 kernel: acpiphp: Slot [8] registered
Feb 9 09:45:13.959450 kernel: acpiphp: Slot [9] registered
Feb 9 09:45:13.959478 kernel: acpiphp: Slot [10] registered
Feb 9 09:45:13.959496 kernel: acpiphp: Slot [11] registered
Feb 9 09:45:13.959513 kernel: acpiphp: Slot [12] registered
Feb 9 09:45:13.959529 kernel: acpiphp: Slot [13] registered
Feb 9 09:45:13.959546 kernel: acpiphp: Slot [14] registered
Feb 9 09:45:13.959562 kernel: acpiphp: Slot [15] registered
Feb 9 09:45:13.959579 kernel: acpiphp: Slot [16] registered
Feb 9 09:45:13.959595 kernel: acpiphp: Slot [17] registered
Feb 9 09:45:13.959611 kernel: acpiphp: Slot [18] registered
Feb 9 09:45:13.959637 kernel: acpiphp: Slot [19] registered
Feb 9 09:45:13.959653 kernel: acpiphp: Slot [20] registered
Feb 9 09:45:13.959670 kernel: acpiphp: Slot [21] registered
Feb 9 09:45:13.959686 kernel: acpiphp: Slot [22] registered
Feb 9 09:45:13.959702 kernel: acpiphp: Slot [23] registered
Feb 9 09:45:13.959718 kernel: acpiphp: Slot [24] registered
Feb 9 09:45:13.959735 kernel: acpiphp: Slot [25] registered
Feb 9 09:45:13.959751 kernel: acpiphp: Slot [26] registered
Feb 9 09:45:13.959768 kernel: acpiphp: Slot [27] registered
Feb 9 09:45:13.959788 kernel: acpiphp: Slot [28] registered
Feb 9 09:45:13.959805 kernel: acpiphp: Slot [29] registered
Feb 9 09:45:13.959821 kernel: acpiphp: Slot [30] registered
Feb 9 09:45:13.959837 kernel: acpiphp: Slot [31] registered
Feb 9 09:45:13.959853 kernel: PCI host bridge to bus 0000:00
Feb 9 09:45:13.960127 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 09:45:13.960319 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:45:13.962609 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:45:13.962813 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 09:45:13.963043 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 09:45:13.963258 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 09:45:13.963497 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 09:45:13.963711 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 09:45:13.963911 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 09:45:13.964114 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:45:13.964360 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 09:45:13.964577 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 09:45:13.964777 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 09:45:13.964978 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 09:45:13.965208 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:45:13.965446 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 09:45:13.965658 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 09:45:13.965861 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 09:45:13.966063 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 09:45:13.966267 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 09:45:13.969526 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 09:45:13.969730 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:45:13.969918 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:45:13.969951 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:45:13.969969 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:45:13.969986 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:45:13.970003 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:45:13.970020 kernel: iommu: Default domain type: Translated
Feb 9 09:45:13.970036 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:45:13.970052 kernel: vgaarb: loaded
Feb 9 09:45:13.970069 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:45:13.970086 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 09:45:13.970107 kernel: PTP clock support registered
Feb 9 09:45:13.970124 kernel: Registered efivars operations
Feb 9 09:45:13.970140 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:45:13.970157 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:45:13.970174 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:45:13.970190 kernel: pnp: PnP ACPI init
Feb 9 09:45:13.973764 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 09:45:13.973799 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:45:13.973818 kernel: NET: Registered PF_INET protocol family
Feb 9 09:45:13.973843 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:45:13.973860 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:45:13.973877 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:45:13.973894 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:45:13.973911 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:45:13.973928 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:45:13.973944 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.973961 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:45:13.973977 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:45:13.973998 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:45:13.974015 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 09:45:13.974031 kernel: kvm [1]: HYP mode not available
Feb 9 09:45:13.974048 kernel: Initialise system trusted keyrings
Feb 9 09:45:13.974065 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:45:13.974082 kernel: Key type asymmetric registered
Feb 9 09:45:13.974099 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:45:13.974115 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:45:13.974132 kernel: io scheduler mq-deadline registered
Feb 9 09:45:13.974152 kernel: io scheduler kyber registered
Feb 9 09:45:13.974169 kernel: io scheduler bfq registered
Feb 9 09:45:13.981747 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 09:45:13.981781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:45:13.981798 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:45:13.981815 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:45:13.981833 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 09:45:13.982036 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 09:45:13.982067 kernel: printk: console [ttyS0] disabled
Feb 9 09:45:13.982085 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 09:45:13.982101 kernel: printk: console [ttyS0] enabled
Feb 9 09:45:13.982118 kernel: printk: bootconsole [uart0] disabled
Feb 9 09:45:13.982134 kernel: thunder_xcv, ver 1.0
Feb 9 09:45:13.982151 kernel: thunder_bgx, ver 1.0
Feb 9 09:45:13.982167 kernel: nicpf, ver 1.0
Feb 9 09:45:13.982184 kernel: nicvf, ver 1.0
Feb 9 09:45:13.982406 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:45:13.982602 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:45:13 UTC (1707471913)
Feb 9 09:45:13.982626 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:45:13.982643 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:45:13.982660 kernel: Segment Routing with IPv6
Feb 9 09:45:13.982676 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:45:13.982692 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:45:13.982709 kernel: Key type dns_resolver registered
Feb 9 09:45:13.982725 kernel: registered taskstats version 1
Feb 9 09:45:13.982746 kernel: Loading compiled-in X.509 certificates
Feb 9 09:45:13.982763 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:45:13.982779 kernel: Key type .fscrypt registered
Feb 9 09:45:13.982795 kernel: Key type fscrypt-provisioning registered
Feb 9 09:45:13.982812 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:45:13.982828 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:45:13.982844 kernel: ima: No architecture policies found
Feb 9 09:45:13.982860 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:45:13.982877 kernel: Run /init as init process
Feb 9 09:45:13.982897 kernel: with arguments:
Feb 9 09:45:13.982913 kernel: /init
Feb 9 09:45:13.982929 kernel: with environment:
Feb 9 09:45:13.982944 kernel: HOME=/
Feb 9 09:45:13.982961 kernel: TERM=linux
Feb 9 09:45:13.982977 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:45:13.982998 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:45:13.983018 systemd[1]: Detected virtualization amazon.
Feb 9 09:45:13.983040 systemd[1]: Detected architecture arm64.
Feb 9 09:45:13.983057 systemd[1]: Running in initrd.
Feb 9 09:45:13.983074 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:45:13.983091 systemd[1]: Hostname set to <localhost>.
Feb 9 09:45:13.983110 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:45:13.983127 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:45:13.983145 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:45:13.983162 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:45:13.983184 systemd[1]: Reached target paths.target.
Feb 9 09:45:13.983201 systemd[1]: Reached target slices.target.
Feb 9 09:45:13.983219 systemd[1]: Reached target swap.target.
Feb 9 09:45:13.983236 systemd[1]: Reached target timers.target.
Feb 9 09:45:13.983254 systemd[1]: Listening on iscsid.socket.
Feb 9 09:45:13.983272 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:45:13.983289 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:45:13.983307 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:45:13.983329 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:45:13.983364 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:45:13.983385 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:45:13.983404 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:45:13.983422 systemd[1]: Reached target sockets.target.
Feb 9 09:45:13.983440 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:45:13.983458 systemd[1]: Finished network-cleanup.service.
Feb 9 09:45:13.983475 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:45:13.983493 systemd[1]: Starting systemd-journald.service...
Feb 9 09:45:13.983517 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:45:13.983535 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:45:13.983552 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:45:13.983570 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:45:13.983588 kernel: audit: type=1130 audit(1707471913.972:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:13.983606 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:45:13.983627 systemd-journald[308]: Journal started
Feb 9 09:45:13.983714 systemd-journald[308]: Runtime Journal (/run/log/journal/ec22c5c4f10fb54d0d1ffa2ebae01ad5) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:45:13.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:13.956441 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 09:45:14.010413 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:45:14.010472 kernel: audit: type=1130 audit(1707471914.001:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.010498 systemd[1]: Started systemd-journald.service.
Feb 9 09:45:14.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.014038 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 09:45:14.018069 kernel: Bridge firewalling registered
Feb 9 09:45:14.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.030295 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:45:14.047061 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:45:14.062928 kernel: audit: type=1130 audit(1707471914.028:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.062966 kernel: audit: type=1130 audit(1707471914.044:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.062990 kernel: SCSI subsystem initialized
Feb 9 09:45:14.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.064273 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:45:14.085870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:45:14.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.096767 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 09:45:14.096781 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:45:14.096833 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:45:14.135363 kernel: audit: type=1130 audit(1707471914.089:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.144154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:45:14.144217 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:45:14.151381 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:45:14.158887 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 09:45:14.163033 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:45:14.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.170878 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:45:14.181099 kernel: audit: type=1130 audit(1707471914.169:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.189387 kernel: audit: type=1130 audit(1707471914.180:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.189662 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:45:14.198443 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:45:14.221757 dracut-cmdline[328]: dracut-dracut-053
Feb 9 09:45:14.225220 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:45:14.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.236382 kernel: audit: type=1130 audit(1707471914.225:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.238676 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:45:14.362380 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:45:14.376385 kernel: iscsi: registered transport (tcp)
Feb 9 09:45:14.400426 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:45:14.400507 kernel: QLogic iSCSI HBA Driver
Feb 9 09:45:14.594893 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 09:45:14.597483 kernel: random: crng init done
Feb 9 09:45:14.598609 systemd[1]: Started systemd-resolved.service.
Feb 9 09:45:14.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.600244 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:45:14.612256 kernel: audit: type=1130 audit(1707471914.597:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.627631 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:45:14.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:14.632172 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:45:14.697386 kernel: raid6: neonx8 gen() 6420 MB/s
Feb 9 09:45:14.715375 kernel: raid6: neonx8 xor() 4694 MB/s
Feb 9 09:45:14.733374 kernel: raid6: neonx4 gen() 6628 MB/s
Feb 9 09:45:14.751375 kernel: raid6: neonx4 xor() 4888 MB/s
Feb 9 09:45:14.769374 kernel: raid6: neonx2 gen() 5822 MB/s
Feb 9 09:45:14.787375 kernel: raid6: neonx2 xor() 4481 MB/s
Feb 9 09:45:14.805374 kernel: raid6: neonx1 gen() 4521 MB/s
Feb 9 09:45:14.823374 kernel: raid6: neonx1 xor() 3659 MB/s
Feb 9 09:45:14.841374 kernel: raid6: int64x8 gen() 3424 MB/s
Feb 9 09:45:14.859374 kernel: raid6: int64x8 xor() 2079 MB/s
Feb 9 09:45:14.877374 kernel: raid6: int64x4 gen() 3859 MB/s
Feb 9 09:45:14.895375 kernel: raid6: int64x4 xor() 2189 MB/s
Feb 9 09:45:14.913374 kernel: raid6: int64x2 gen() 3625 MB/s
Feb 9 09:45:14.931374 kernel: raid6: int64x2 xor() 1943 MB/s
Feb 9 09:45:14.949374 kernel: raid6: int64x1 gen() 2772 MB/s
Feb 9 09:45:14.968856 kernel: raid6: int64x1 xor() 1449 MB/s
Feb 9 09:45:14.968885 kernel: raid6: using algorithm neonx4 gen() 6628 MB/s
Feb 9 09:45:14.968908 kernel: raid6: .... xor() 4888 MB/s, rmw enabled
Feb 9 09:45:14.970673 kernel: raid6: using neon recovery algorithm
Feb 9 09:45:14.989381 kernel: xor: measuring software checksum speed
Feb 9 09:45:14.991374 kernel: 8regs : 9285 MB/sec
Feb 9 09:45:14.994375 kernel: 32regs : 11107 MB/sec
Feb 9 09:45:14.998250 kernel: arm64_neon : 9614 MB/sec
Feb 9 09:45:14.998281 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 9 09:45:15.088388 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:45:15.105201 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:45:15.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.108000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:45:15.108000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:45:15.110002 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:45:15.138291 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Feb 9 09:45:15.149254 systemd[1]: Started systemd-udevd.service.
Feb 9 09:45:15.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.152554 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:45:15.187918 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Feb 9 09:45:15.248015 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:45:15.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.252473 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:45:15.360129 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:45:15.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:15.479928 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:45:15.479992 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 09:45:15.489503 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 09:45:15.489802 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 09:45:15.499370 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:84:69:39:9a:7b
Feb 9 09:45:15.503559 (udev-worker)[558]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:45:15.506307 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 09:45:15.506339 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 09:45:15.515384 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 09:45:15.521415 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:45:15.521482 kernel: GPT:9289727 != 16777215
Feb 9 09:45:15.521505 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:45:15.523577 kernel: GPT:9289727 != 16777215
Feb 9 09:45:15.524844 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:45:15.530562 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:15.592393 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (571)
Feb 9 09:45:15.609943 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:45:15.693476 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:45:15.714084 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:45:15.714576 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:45:15.734011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:45:15.738661 systemd[1]: Starting disk-uuid.service...
Feb 9 09:45:15.751140 disk-uuid[670]: Primary Header is updated.
Feb 9 09:45:15.751140 disk-uuid[670]: Secondary Entries is updated.
Feb 9 09:45:15.751140 disk-uuid[670]: Secondary Header is updated.
Feb 9 09:45:15.761371 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:15.769367 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:15.777379 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:16.778750 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:45:16.778826 disk-uuid[671]: The operation has completed successfully.
Feb 9 09:45:16.929006 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:45:16.931270 systemd[1]: Finished disk-uuid.service.
Feb 9 09:45:16.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:16.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:16.963772 systemd[1]: Starting verity-setup.service...
Feb 9 09:45:16.996379 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:45:17.076686 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:45:17.080651 systemd[1]: Finished verity-setup.service.
Feb 9 09:45:17.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.085422 systemd[1]: Mounting sysusr-usr.mount...
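
The "GPT:9289727 != 16777215" warning above is the kernel noticing that the primary GPT header's backup-header pointer no longer matches the last LBA of the grown volume; the disk-uuid step logged just above rewrites the headers on first boot. A minimal sketch of the same comparison, assuming a 512-byte sector size and a hypothetical device path (requires read access to the raw device):

    # Compare the backup-header LBA recorded in the primary GPT header
    # (LBA 1) with the actual last LBA of the device, as the kernel does.
    import struct

    DEV = "/dev/nvme0n1"   # hypothetical device path for illustration
    SECTOR = 512           # assumed logical sector size

    with open(DEV, "rb") as disk:
        disk.seek(SECTOR)                   # primary GPT header lives at LBA 1
        header = disk.read(92)
        assert header[:8] == b"EFI PART"    # GPT signature
        backup_lba = struct.unpack_from("<Q", header, 32)[0]
        disk.seek(0, 2)                     # seek to end to measure the device
        last_lba = disk.tell() // SECTOR - 1

    if backup_lba != last_lba:
        print(f"GPT: {backup_lba} != {last_lba}, backup header not at end of disk")
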
Feb 9 09:45:17.168990 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:45:17.168873 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:45:17.170697 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:45:17.176252 systemd[1]: Starting ignition-setup.service...
Feb 9 09:45:17.192616 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:45:17.204718 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:45:17.204758 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:45:17.204782 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:45:17.214424 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:45:17.230959 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:45:17.264096 systemd[1]: Finished ignition-setup.service.
Feb 9 09:45:17.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.267750 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:45:17.337504 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:45:17.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.340000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:45:17.343075 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:45:17.390581 systemd-networkd[1194]: lo: Link UP
Feb 9 09:45:17.390604 systemd-networkd[1194]: lo: Gained carrier
Feb 9 09:45:17.394754 systemd-networkd[1194]: Enumeration completed
Feb 9 09:45:17.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.394936 systemd[1]: Started systemd-networkd.service.
Feb 9 09:45:17.396622 systemd-networkd[1194]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:45:17.397198 systemd[1]: Reached target network.target.
Feb 9 09:45:17.404340 systemd-networkd[1194]: eth0: Link UP
Feb 9 09:45:17.404377 systemd-networkd[1194]: eth0: Gained carrier
Feb 9 09:45:17.404708 systemd[1]: Starting iscsiuio.service...
Feb 9 09:45:17.422182 systemd[1]: Started iscsiuio.service.
Feb 9 09:45:17.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.427470 systemd-networkd[1194]: eth0: DHCPv4 address 172.31.20.254/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 09:45:17.428237 systemd[1]: Starting iscsid.service...
Feb 9 09:45:17.439675 iscsid[1199]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:45:17.439675 iscsid[1199]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:45:17.439675 iscsid[1199]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:45:17.439675 iscsid[1199]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:45:17.439675 iscsid[1199]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:45:17.459446 iscsid[1199]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:45:17.465185 systemd[1]: Started iscsid.service.
Feb 9 09:45:17.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.468484 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:45:17.490980 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:45:17.494358 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:45:17.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.496189 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:45:17.499555 systemd[1]: Reached target remote-fs.target.
Feb 9 09:45:17.505295 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:45:17.526854 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:45:17.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.952530 ignition[1134]: Ignition 2.14.0
Feb 9 09:45:17.952559 ignition[1134]: Stage: fetch-offline
Feb 9 09:45:17.952864 ignition[1134]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:17.952924 ignition[1134]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:17.971231 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:17.973891 ignition[1134]: Ignition finished successfully
Feb 9 09:45:17.977122 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:45:18.000876 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 09:45:18.002203 kernel: audit: type=1130 audit(1707471917.977:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:17.980979 systemd[1]: Starting ignition-fetch.service...
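
The iscsid warning above spells out its own remedy: create /etc/iscsi/initiatorname.iscsi with an InitiatorName line. A minimal sketch of that fix; the IQN value is a hypothetical example, not taken from this system:

    # Write the InitiatorName file iscsid asks for above (needs root).
    # The IQN below is a made-up example following the iqn.yyyy-mm.<reversed
    # domain name>[:identifier] format from the warning.
    from pathlib import Path

    Path("/etc/iscsi").mkdir(parents=True, exist_ok=True)
    Path("/etc/iscsi/initiatorname.iscsi").write_text(
        "InitiatorName=iqn.2024-02.com.example:host1\n"
    )
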
Feb 9 09:45:17.995931 ignition[1218]: Ignition 2.14.0
Feb 9 09:45:18.017309 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:17.995947 ignition[1218]: Stage: fetch
Feb 9 09:45:17.996367 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:17.996448 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.016686 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.031774 ignition[1218]: INFO : PUT result: OK
Feb 9 09:45:18.034820 ignition[1218]: DEBUG : parsed url from cmdline: ""
Feb 9 09:45:18.036606 ignition[1218]: INFO : no config URL provided
Feb 9 09:45:18.038202 ignition[1218]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:45:18.040564 ignition[1218]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 09:45:18.042568 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.045484 ignition[1218]: INFO : PUT result: OK
Feb 9 09:45:18.046926 ignition[1218]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 09:45:18.049940 ignition[1218]: INFO : GET result: OK
Feb 9 09:45:18.051556 ignition[1218]: DEBUG : parsing config with SHA512: 2f7b1b7bf3904ae6cc1c9d44d163fc2be772d5b8ed5268776cd4d2f8b382ce3654c1bb07f4eca48bfc5ef1cbe3acf45914e94bb32b4d1baad4c7337501594ad9
Feb 9 09:45:18.112256 unknown[1218]: fetched base config from "system"
Feb 9 09:45:18.112536 unknown[1218]: fetched base config from "system"
Feb 9 09:45:18.114086 ignition[1218]: fetch: fetch complete
Feb 9 09:45:18.112555 unknown[1218]: fetched user config from "aws"
Feb 9 09:45:18.114099 ignition[1218]: fetch: fetch passed
Feb 9 09:45:18.114210 ignition[1218]: Ignition finished successfully
Feb 9 09:45:18.124504 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:45:18.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.135304 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:45:18.138593 kernel: audit: type=1130 audit(1707471918.125:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.151673 ignition[1224]: Ignition 2.14.0
Feb 9 09:45:18.153365 ignition[1224]: Stage: kargs
Feb 9 09:45:18.154842 ignition[1224]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:18.157081 ignition[1224]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.167506 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.170004 ignition[1224]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.173167 ignition[1224]: INFO : PUT result: OK
Feb 9 09:45:18.178775 ignition[1224]: kargs: kargs passed
Feb 9 09:45:18.181228 ignition[1224]: Ignition finished successfully
Feb 9 09:45:18.184452 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:45:18.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.188852 systemd[1]: Starting ignition-disks.service...
Feb 9 09:45:18.197740 kernel: audit: type=1130 audit(1707471918.186:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.204887 ignition[1230]: Ignition 2.14.0
Feb 9 09:45:18.204916 ignition[1230]: Stage: disks
Feb 9 09:45:18.205253 ignition[1230]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:18.205312 ignition[1230]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.220503 ignition[1230]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.222771 ignition[1230]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.225713 ignition[1230]: INFO : PUT result: OK
Feb 9 09:45:18.231166 ignition[1230]: disks: disks passed
Feb 9 09:45:18.231471 ignition[1230]: Ignition finished successfully
Feb 9 09:45:18.235821 systemd[1]: Finished ignition-disks.service.
Feb 9 09:45:18.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.239037 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:45:18.267091 kernel: audit: type=1130 audit(1707471918.237:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.247829 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:45:18.249476 systemd[1]: Reached target local-fs.target.
Feb 9 09:45:18.251063 systemd[1]: Reached target sysinit.target.
Feb 9 09:45:18.252607 systemd[1]: Reached target basic.target.
Feb 9 09:45:18.255520 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:45:18.323494 systemd-fsck[1238]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:45:18.332251 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:45:18.344850 kernel: audit: type=1130 audit(1707471918.333:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.335952 systemd[1]: Mounting sysroot.mount...
Feb 9 09:45:18.359376 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:45:18.360028 systemd[1]: Mounted sysroot.mount.
Feb 9 09:45:18.361777 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:45:18.376303 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:45:18.382146 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:45:18.382227 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:45:18.382285 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:45:18.398833 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:45:18.412570 systemd[1]: Mounting sysroot-usr-share-oem.mount...
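
The PUT-then-GET pairs in the Ignition stages above are the IMDSv2 session flow: request a short-lived token with a PUT, then present it on each metadata request. A minimal sketch of the same exchange using only the standard library; it only works from inside an EC2 instance:

    # Replicate the IMDSv2 flow Ignition logs above: PUT for a token,
    # then GET the user data with that token attached.
    import urllib.request

    IMDS = "http://169.254.169.254"

    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req).read().decode()

    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(data_req).read()
    print(len(user_data), "bytes of user data")
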
Feb 9 09:45:18.417467 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:45:18.428479 initrd-setup-root[1260]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:45:18.439466 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1255)
Feb 9 09:45:18.445064 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:45:18.445112 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:45:18.445139 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:45:18.450188 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:45:18.458561 initrd-setup-root[1293]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:45:18.465384 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:45:18.471078 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:45:18.474588 initrd-setup-root[1302]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:45:18.681131 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:45:18.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.685595 systemd[1]: Starting ignition-mount.service...
Feb 9 09:45:18.694910 kernel: audit: type=1130 audit(1707471918.683:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.695969 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:45:18.705915 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:45:18.706086 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:45:18.736839 ignition[1321]: INFO : Ignition 2.14.0
Feb 9 09:45:18.741449 ignition[1321]: INFO : Stage: mount
Feb 9 09:45:18.741449 ignition[1321]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:18.741449 ignition[1321]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.758627 kernel: audit: type=1130 audit(1707471918.744:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.741255 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:45:18.773467 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.775976 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.779039 ignition[1321]: INFO : PUT result: OK
Feb 9 09:45:18.784665 ignition[1321]: INFO : mount: mount passed
Feb 9 09:45:18.786277 ignition[1321]: INFO : Ignition finished successfully
Feb 9 09:45:18.789364 systemd[1]: Finished ignition-mount.service.
Feb 9 09:45:18.800115 kernel: audit: type=1130 audit(1707471918.790:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:45:18.793687 systemd[1]: Starting ignition-files.service...
Feb 9 09:45:18.811680 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:45:18.828438 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1330)
Feb 9 09:45:18.833833 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:45:18.833866 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:45:18.833890 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:45:18.842378 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:45:18.847396 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:45:18.867160 ignition[1349]: INFO : Ignition 2.14.0
Feb 9 09:45:18.869074 ignition[1349]: INFO : Stage: files
Feb 9 09:45:18.870828 ignition[1349]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:45:18.873299 ignition[1349]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:45:18.887535 ignition[1349]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:45:18.890729 ignition[1349]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:45:18.893622 ignition[1349]: INFO : PUT result: OK
Feb 9 09:45:18.899057 ignition[1349]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:45:18.902563 ignition[1349]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:45:18.902563 ignition[1349]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:45:18.946916 ignition[1349]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:45:18.949841 ignition[1349]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:45:18.955640 unknown[1349]: wrote ssh authorized keys file for user: core
Feb 9 09:45:18.957830 ignition[1349]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:45:18.961528 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 09:45:18.965244 ignition[1349]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 09:45:19.133508 systemd-networkd[1194]: eth0: Gained IPv6LL
Feb 9 09:45:19.430534 ignition[1349]: INFO : GET result: OK
Feb 9 09:45:19.950914 ignition[1349]: DEBUG : file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 09:45:19.955923 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 09:45:19.955923 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:45:19.955923 ignition[1349]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 09:45:20.004279 ignition[1349]: INFO : GET result: OK
Feb 9 09:45:20.124814 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:45:20.128620 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 09:45:20.132377 ignition[1349]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:45:20.522549 ignition[1349]: INFO : GET result: OK
Feb 9 09:45:20.800407 ignition[1349]: DEBUG : file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 09:45:20.804904 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 09:45:20.808900 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:45:20.812228 ignition[1349]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1
Feb 9 09:45:21.141322 ignition[1349]: INFO : GET result: OK
Feb 9 09:45:27.827172 ignition[1349]: DEBUG : file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc
Feb 9 09:45:27.832090 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:45:27.832090 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:45:27.832090 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:45:27.832090 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:45:27.832090 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:45:27.856450 ignition[1349]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem885180596"
Feb 9 09:45:27.863841 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1349)
Feb 9 09:45:27.863915 ignition[1349]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem885180596": device or resource busy
Feb 9 09:45:27.863915 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem885180596", trying btrfs: device or resource busy
Feb 9 09:45:27.863915 ignition[1349]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem885180596"
Feb 9 09:45:27.863915 ignition[1349]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem885180596"
Feb 9 09:45:27.886040 ignition[1349]: INFO : op(3): [started] unmounting "/mnt/oem885180596"
Feb 9 09:45:27.886040 ignition[1349]: INFO : op(3): [finished] unmounting "/mnt/oem885180596"
Feb 9 09:45:27.886040 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:45:27.886040 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:45:27.886040 ignition[1349]: INFO : GET
https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:45:27.901151 systemd[1]: mnt-oem885180596.mount: Deactivated successfully. Feb 9 09:45:28.112677 ignition[1349]: INFO : GET result: OK Feb 9 09:45:35.492974 ignition[1349]: DEBUG : file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970 Feb 9 09:45:35.497778 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:45:35.497778 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:45:35.497778 ignition[1349]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:45:35.668782 ignition[1349]: INFO : GET result: OK Feb 9 09:45:47.461154 ignition[1349]: DEBUG : file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348 Feb 9 09:45:47.466126 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:45:47.466126 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:45:47.466126 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:45:47.466126 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:45:47.466126 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:45:47.482426 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:45:47.482426 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:45:47.482426 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:45:47.482426 ignition[1349]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 09:45:47.879253 ignition[1349]: INFO : GET result: OK Feb 9 09:45:48.001940 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:45:48.005483 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:45:48.009149 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:45:48.012554 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:45:48.015966 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:45:48.019492 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:45:48.023158 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on 
oem partition Feb 9 09:45:48.035378 ignition[1349]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2770484074" Feb 9 09:45:48.038270 ignition[1349]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2770484074": device or resource busy Feb 9 09:45:48.038270 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2770484074", trying btrfs: device or resource busy Feb 9 09:45:48.038270 ignition[1349]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2770484074" Feb 9 09:45:48.053340 ignition[1349]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2770484074" Feb 9 09:45:48.060018 ignition[1349]: INFO : op(6): [started] unmounting "/mnt/oem2770484074" Feb 9 09:45:48.062354 ignition[1349]: INFO : op(6): [finished] unmounting "/mnt/oem2770484074" Feb 9 09:45:48.065508 systemd[1]: mnt-oem2770484074.mount: Deactivated successfully. Feb 9 09:45:48.070229 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:45:48.073878 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 09:45:48.077500 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:45:48.087450 ignition[1349]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem265474550" Feb 9 09:45:48.090276 ignition[1349]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem265474550": device or resource busy Feb 9 09:45:48.090276 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem265474550", trying btrfs: device or resource busy Feb 9 09:45:48.090276 ignition[1349]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem265474550" Feb 9 09:45:48.111135 ignition[1349]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem265474550" Feb 9 09:45:48.111135 ignition[1349]: INFO : op(9): [started] unmounting "/mnt/oem265474550" Feb 9 09:45:48.111135 ignition[1349]: INFO : op(9): [finished] unmounting "/mnt/oem265474550" Feb 9 09:45:48.111135 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 09:45:48.111135 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 09:45:48.111135 ignition[1349]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:45:48.101682 systemd[1]: mnt-oem265474550.mount: Deactivated successfully. 
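The op(4)/op(5) and op(7)/op(8) pairs above record Ignition's mount fallback: it first tries the OEM partition as ext4, fails with "device or resource busy", and retries the same device as btrfs, which succeeds (the partition is btrfs, per the BTRFS info lines). A minimal sketch of that try-then-fallback pattern, assuming Linux and root privileges; the device path and the /mnt/oem... naming are taken from the log:

    package main

    import (
        "log"
        "os"
        "syscall"
    )

    // mountWithFallback mirrors the retry visible above: attempt ext4 first,
    // and if that fails, retry the same device as btrfs.
    func mountWithFallback(device, target string) error {
        err := syscall.Mount(device, target, "ext4", 0, "")
        if err == nil {
            return nil
        }
        log.Printf("failed to mount ext4 device %s (%v), trying btrfs", device, err)
        return syscall.Mount(device, target, "btrfs", 0, "")
    }

    func main() {
        // Ignition uses a throwaway mount point like /mnt/oem2770484074.
        target, err := os.MkdirTemp("/mnt", "oem")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(target)

        if err := mountWithFallback("/dev/disk/by-label/OEM", target); err != nil {
            log.Fatal(err)
        }
        defer syscall.Unmount(target, 0)
    }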
Feb 9 09:45:48.144776 ignition[1349]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem462192728" Feb 9 09:45:48.144776 ignition[1349]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem462192728": device or resource busy Feb 9 09:45:48.144776 ignition[1349]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem462192728", trying btrfs: device or resource busy Feb 9 09:45:48.144776 ignition[1349]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem462192728" Feb 9 09:45:48.159710 ignition[1349]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem462192728" Feb 9 09:45:48.159710 ignition[1349]: INFO : op(c): [started] unmounting "/mnt/oem462192728" Feb 9 09:45:48.159710 ignition[1349]: INFO : op(c): [finished] unmounting "/mnt/oem462192728" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(15): [started] processing unit "amazon-ssm-agent.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(15): op(16): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(15): op(16): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(15): [finished] processing unit "amazon-ssm-agent.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(17): [started] processing unit "nvidia.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(17): [finished] processing unit "nvidia.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(18): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(18): op(19): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(18): op(19): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(18): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:45:48.159710 ignition[1349]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : 
files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(1e): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(1e): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(1f): [started] setting preset to enabled for "nvidia.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(1f): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(20): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(21): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 09:45:48.220448 ignition[1349]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 09:45:48.220448 ignition[1349]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:45:48.220448 ignition[1349]: INFO : files: files passed Feb 9 09:45:48.220448 ignition[1349]: INFO : Ignition finished successfully Feb 9 09:45:48.281189 systemd[1]: Finished ignition-files.service. Feb 9 09:45:48.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.292168 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:45:48.303519 kernel: audit: type=1130 audit(1707471948.283:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.302118 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:45:48.307764 systemd[1]: Starting ignition-quench.service... Feb 9 09:45:48.315300 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:45:48.318973 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:45:48.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:48.335537 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:45:48.362570 kernel: audit: type=1130 audit(1707471948.334:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.362616 kernel: audit: type=1130 audit(1707471948.341:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.362643 kernel: audit: type=1131 audit(1707471948.341:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.335722 systemd[1]: Finished ignition-quench.service. Feb 9 09:45:48.346373 systemd[1]: Reached target ignition-complete.target. Feb 9 09:45:48.364844 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:45:48.393284 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:45:48.395487 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:45:48.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.404668 systemd[1]: Reached target initrd-fs.target. Feb 9 09:45:48.413865 kernel: audit: type=1130 audit(1707471948.397:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.413904 kernel: audit: type=1131 audit(1707471948.403:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.416068 systemd[1]: Reached target initrd.target. Feb 9 09:45:48.419036 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:45:48.420651 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:45:48.443437 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:45:48.455409 kernel: audit: type=1130 audit(1707471948.444:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.446574 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:45:48.474507 systemd[1]: Stopped target nss-lookup.target. 
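Each "GET ... attempt #1" / "file matches expected sum of:" pair in the files stage above is a fetch-then-verify step: the artifact is downloaded and its SHA-512 digest compared against the sum embedded in the config. A minimal sketch of the verify half; the path is the post-boot location of kubectl and the digest is the one printed in the log:

    package main

    import (
        "crypto/sha512"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    // verifySHA512 streams a file through SHA-512 and compares the hex digest,
    // as in the "file matches expected sum of:" lines above.
    func verifySHA512(path, expectedHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha512.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != expectedHex {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, expectedHex)
        }
        return nil
    }

    func main() {
        // Digest copied from the kubectl entry in the log above.
        const want = "14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc"
        if err := verifySHA512("/opt/bin/kubectl", want); err != nil {
            log.Fatal(err)
        }
        fmt.Println("file matches expected sum")
    }

Streaming through io.Copy keeps memory flat even for the ~100 MB kubelet binary fetched above.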
Feb 9 09:45:48.478142 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:45:48.481550 systemd[1]: Stopped target timers.target. Feb 9 09:45:48.484690 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:45:48.484892 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:45:48.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.489867 systemd[1]: Stopped target initrd.target. Feb 9 09:45:48.498304 kernel: audit: type=1131 audit(1707471948.488:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.499846 systemd[1]: Stopped target basic.target. Feb 9 09:45:48.502689 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:45:48.506061 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:45:48.509327 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:45:48.512660 systemd[1]: Stopped target remote-fs.target. Feb 9 09:45:48.515561 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:45:48.518574 systemd[1]: Stopped target sysinit.target. Feb 9 09:45:48.520135 systemd[1]: Stopped target local-fs.target. Feb 9 09:45:48.522922 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:45:48.525850 systemd[1]: Stopped target swap.target. Feb 9 09:45:48.528689 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:45:48.530034 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:45:48.535058 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:45:48.545255 kernel: audit: type=1131 audit(1707471948.533:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.545209 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:45:48.555929 kernel: audit: type=1131 audit(1707471948.545:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.545298 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:45:48.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.547053 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:45:48.547136 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:45:48.557846 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:45:48.563289 systemd[1]: Stopped ignition-files.service. Feb 9 09:45:48.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:45:48.569019 systemd[1]: Stopping ignition-mount.service... Feb 9 09:45:48.596933 iscsid[1199]: iscsid shutting down. Feb 9 09:45:48.606325 ignition[1387]: INFO : Ignition 2.14.0 Feb 9 09:45:48.606325 ignition[1387]: INFO : Stage: umount Feb 9 09:45:48.606325 ignition[1387]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:45:48.606325 ignition[1387]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 09:45:48.606325 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 09:45:48.606325 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 09:45:48.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.597094 systemd[1]: Stopping iscsid.service... Feb 9 09:45:48.634272 ignition[1387]: INFO : PUT result: OK Feb 9 09:45:48.602891 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:45:48.607005 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:45:48.607152 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:45:48.612675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:45:48.612774 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:45:48.640139 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:45:48.650684 ignition[1387]: INFO : umount: umount passed Feb 9 09:45:48.650684 ignition[1387]: INFO : Ignition finished successfully Feb 9 09:45:48.641975 systemd[1]: Stopped iscsid.service. Feb 9 09:45:48.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.660654 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:45:48.661539 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:45:48.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.664809 systemd[1]: ignition-mount.service: Deactivated successfully. 
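Every stage above (disks, mount, files, umount) opens with the same two lines: reading /usr/lib/ignition/base.d/base.ign and "parsing config with SHA512: ...". A minimal sketch of that load step, assuming a v3-style JSON config; the struct below is a tiny illustrative subset, not Ignition's real schema:

    package main

    import (
        "crypto/sha512"
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    // Config models only a fragment of an Ignition-style config for illustration.
    type Config struct {
        Ignition struct {
            Version string `json:"version"`
        } `json:"ignition"`
        Storage struct {
            Files []struct {
                Path string `json:"path"`
            } `json:"files"`
        } `json:"storage"`
    }

    func main() {
        raw, err := os.ReadFile("/usr/lib/ignition/base.d/base.ign")
        if err != nil {
            log.Fatal(err)
        }

        // Log a SHA-512 of the raw config, in the style of
        // "parsing config with SHA512: ..." above.
        fmt.Printf("parsing config with SHA512: %x\n", sha512.Sum512(raw))

        var cfg Config
        if err := json.Unmarshal(raw, &cfg); err != nil {
            log.Fatal(err)
        }
        fmt.Println("config version:", cfg.Ignition.Version)
    }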
Feb 9 09:45:48.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.664972 systemd[1]: Stopped ignition-mount.service. Feb 9 09:45:48.667637 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:45:48.667799 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:45:48.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.669580 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:45:48.669663 systemd[1]: Stopped ignition-disks.service. Feb 9 09:45:48.671394 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:45:48.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.671474 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:45:48.673482 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:45:48.673564 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:45:48.673917 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:45:48.673993 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:45:48.674181 systemd[1]: Stopped target paths.target. Feb 9 09:45:48.674758 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:45:48.680586 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:45:48.682381 systemd[1]: Stopped target slices.target. Feb 9 09:45:48.683861 systemd[1]: Stopped target sockets.target. Feb 9 09:45:48.685477 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:45:48.685549 systemd[1]: Closed iscsid.socket. Feb 9 09:45:48.688626 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:45:48.688716 systemd[1]: Stopped ignition-setup.service. Feb 9 09:45:48.690425 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:45:48.690501 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:45:48.694575 systemd[1]: Stopping iscsiuio.service... Feb 9 09:45:48.700431 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:45:48.700628 systemd[1]: Stopped iscsiuio.service. Feb 9 09:45:48.703631 systemd[1]: Stopped target network.target. Feb 9 09:45:48.705229 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:45:48.705296 systemd[1]: Closed iscsiuio.socket. Feb 9 09:45:48.708824 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:45:48.711872 systemd[1]: Stopping systemd-resolved.service... 
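The run of "Stopped ..." / "Closed ..." entries above is the initrd walking back through its units in roughly the reverse of their start order, so nothing is torn down while something depending on it still runs. A minimal sketch of that pattern, assuming systemctl is on PATH; the unit list and its ordering are taken from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Start order as it appeared earlier in boot; teardown walks it backwards.
        started := []string{
            "ignition-setup.service",
            "ignition-fetch-offline.service",
            "ignition-fetch.service",
            "ignition-kargs.service",
            "ignition-disks.service",
            "ignition-mount.service",
        }
        for i := len(started) - 1; i >= 0; i-- {
            unit := started[i]
            if out, err := exec.Command("systemctl", "stop", unit).CombinedOutput(); err != nil {
                log.Printf("stop %s: %v (%s)", unit, err, out)
            }
        }
    }

In the real boot, systemd computes this ordering from unit dependencies rather than a hard-coded list; the sketch only shows the reverse-walk shape.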
Feb 9 09:45:48.712403 systemd-networkd[1194]: eth0: DHCPv6 lease lost Feb 9 09:45:48.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.741080 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:45:48.742889 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:45:48.758747 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:45:48.760801 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:45:48.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.762000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:45:48.763000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:45:48.764474 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:45:48.764569 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:45:48.770545 systemd[1]: Stopping network-cleanup.service... Feb 9 09:45:48.775491 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:45:48.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.775620 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:45:48.778903 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:45:48.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.778988 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:45:48.780748 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:45:48.780833 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:45:48.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.784658 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:45:48.791096 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:45:48.791312 systemd[1]: Stopped network-cleanup.service. Feb 9 09:45:48.797691 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:45:48.798002 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:45:48.811492 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:45:48.811591 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:45:48.816775 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:45:48.816867 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:45:48.820221 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:45:48.823697 systemd[1]: Stopped dracut-pre-udev.service. 
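The "audit: BPF prog-id=6 op=UNLOAD" / "prog-id=9 op=UNLOAD" records above mark systemd releasing eBPF programs it attached earlier in boot. A small sketch that folds such records into a live set, which makes it easy to spot programs loaded but never unloaded; the regex is written against the exact format in this log:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var bpfRe = regexp.MustCompile(`audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)`)

    func main() {
        loaded := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := bpfRe.FindStringSubmatch(sc.Text()); m != nil {
                if m[2] == "LOAD" {
                    loaded[m[1]] = true
                } else {
                    delete(loaded, m[1])
                }
            }
        }
        fmt.Println("still loaded at end of log:", len(loaded))
        for id := range loaded {
            fmt.Println("prog-id", id)
        }
    }

Usage: feed it the journal text, e.g. journalctl -b | <program>.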
Feb 9 09:45:48.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.826609 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:45:48.826691 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:45:48.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.831243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:45:48.833352 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:45:48.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.837840 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:45:48.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.857270 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:45:48.857449 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:45:48.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.863769 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:45:48.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:48.863969 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:45:48.870010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:45:48.870104 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:45:48.873168 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:45:48.873466 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:45:48.875714 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:45:48.879545 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:45:48.905867 systemd[1]: Switching root. Feb 9 09:45:48.935520 systemd-journald[308]: Journal stopped Feb 9 09:45:53.151096 systemd-journald[308]: Received SIGTERM from PID 1 (systemd). Feb 9 09:45:53.151209 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:45:53.151252 kernel: SELinux: Class anon_inode not defined in policy. 
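"Switching root." followed by "Journal stopped" / "Received SIGTERM from PID 1" above is the hand-off out of the initramfs: the journal is stopped, /sysroot is moved over /, and PID 1 re-executes into the root filesystem's systemd. A minimal sketch of the classic switch-root sequence, assuming it runs as PID 1 inside an initramfs; running it anywhere else would be destructive, and the real implementation also cleans up the old initramfs contents:

    package main

    import (
        "log"
        "os"
        "syscall"
    )

    func main() {
        newRoot := "/sysroot"

        if err := os.Chdir(newRoot); err != nil {
            log.Fatal(err)
        }
        // Move the new root over "/" instead of unmounting the old one.
        if err := syscall.Mount(".", "/", "", syscall.MS_MOVE, ""); err != nil {
            log.Fatal(err)
        }
        if err := syscall.Chroot("."); err != nil {
            log.Fatal(err)
        }
        if err := os.Chdir("/"); err != nil {
            log.Fatal(err)
        }
        // Replace this process with the real init; systemd re-executes itself here.
        if err := syscall.Exec("/usr/lib/systemd/systemd",
            []string{"systemd"}, os.Environ()); err != nil {
            log.Fatal(err)
        }
    }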
Feb 9 09:45:53.151283 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:45:53.151314 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:45:53.151358 kernel: SELinux: policy capability open_perms=1 Feb 9 09:45:53.151394 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:45:53.151429 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:45:53.151458 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:45:53.151489 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:45:53.151528 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:45:53.151557 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:45:53.151588 systemd[1]: Successfully loaded SELinux policy in 66.625ms. Feb 9 09:45:53.151634 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.557ms. Feb 9 09:45:53.151671 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:45:53.151707 systemd[1]: Detected virtualization amazon. Feb 9 09:45:53.151738 systemd[1]: Detected architecture arm64. Feb 9 09:45:53.151771 systemd[1]: Detected first boot. Feb 9 09:45:53.151803 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:45:53.151835 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:45:53.151867 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:45:53.151904 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:45:53.151938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:45:53.151976 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:45:53.152362 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:45:53.152403 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:45:53.152436 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:45:53.152478 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:45:53.152510 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:45:53.152542 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 09:45:53.152573 systemd[1]: Created slice system-getty.slice. Feb 9 09:45:53.152607 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:45:53.152638 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:45:53.152677 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:45:53.152708 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:45:53.152739 systemd[1]: Created slice user.slice. Feb 9 09:45:53.152770 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:45:53.152802 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:45:53.152831 systemd[1]: Set up automount boot.automount. 
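"Detected first boot." / "Initializing machine ID from VM UUID." above is first-boot provisioning: with no /etc/machine-id on disk yet, systemd derives one from the hypervisor-supplied UUID. A sketch of the general idea; reading the UUID from /sys/class/dmi/id/product_uuid and the exact derivation are assumptions about the mechanism, not confirmed by the log:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
        if err != nil {
            log.Fatal(err)
        }
        // machine-id format: 32 lowercase hex characters, no dashes.
        id := strings.ToLower(strings.TrimSpace(string(raw)))
        id = strings.ReplaceAll(id, "-", "")
        if len(id) != 32 {
            log.Fatalf("unexpected UUID %q", id)
        }
        fmt.Println(id)
    }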
Feb 9 09:45:53.152863 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:45:53.152900 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:45:53.152931 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:45:53.152968 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:45:53.152997 systemd[1]: Reached target integritysetup.target. Feb 9 09:45:53.153029 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:45:53.153076 systemd[1]: Reached target remote-fs.target. Feb 9 09:45:53.153110 systemd[1]: Reached target slices.target. Feb 9 09:45:53.153139 systemd[1]: Reached target swap.target. Feb 9 09:45:53.153174 systemd[1]: Reached target torcx.target. Feb 9 09:45:53.153204 systemd[1]: Reached target veritysetup.target. Feb 9 09:45:53.153233 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:45:53.153264 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:45:53.153295 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:45:53.153326 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:45:53.153374 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:45:53.153406 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:45:53.153436 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:45:53.153470 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:45:53.153502 systemd[1]: Mounting media.mount... Feb 9 09:45:53.153531 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:45:53.153562 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:45:53.153593 systemd[1]: Mounting tmp.mount... Feb 9 09:45:53.153623 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:45:53.153658 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:45:53.153690 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:45:53.153719 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:45:53.153752 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:45:53.153783 systemd[1]: Starting modprobe@drm.service... Feb 9 09:45:53.153812 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:45:53.153841 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:45:53.153870 systemd[1]: Starting modprobe@loop.service... Feb 9 09:45:53.153900 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:45:53.153939 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:45:53.153968 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:45:53.153998 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:45:53.154032 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:45:53.156014 systemd[1]: Stopped systemd-journald.service. Feb 9 09:45:53.156052 kernel: loop: module loaded Feb 9 09:45:53.156083 systemd[1]: Starting systemd-journald.service... Feb 9 09:45:53.156113 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:45:53.156143 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:45:53.156172 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:45:53.156203 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:45:53.156235 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:45:53.156267 systemd[1]: Stopped verity-setup.service. Feb 9 09:45:53.156302 kernel: fuse: init (API version 7.34) Feb 9 09:45:53.156331 systemd[1]: Mounted dev-hugepages.mount. 
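The modprobe@configfs.service, modprobe@dm_mod.service, ... units started above are one template unit instantiated per module: everything after the @ is the instance name, which the unit hands to modprobe (upstream's ExecStart is essentially "modprobe -abq %i"). A minimal sketch of that expansion, assuming modprobe is on PATH; the unit names are copied from the log:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // instanceOf extracts "dm_mod" from "modprobe@dm_mod.service".
    func instanceOf(unit string) string {
        at := strings.Index(unit, "@")
        return strings.TrimSuffix(unit[at+1:], ".service")
    }

    func main() {
        units := []string{
            "modprobe@configfs.service",
            "modprobe@dm_mod.service",
            "modprobe@drm.service",
            "modprobe@efi_pstore.service",
            "modprobe@fuse.service",
            "modprobe@loop.service",
        }
        for _, u := range units {
            mod := instanceOf(u)
            // Equivalent of the template's ExecStart: modprobe -abq <instance>.
            if out, err := exec.Command("modprobe", "-abq", mod).CombinedOutput(); err != nil {
                log.Printf("%s: %v (%s)", mod, err, out)
            }
        }
    }

Templating is why the log shows six Starting/Finished pairs but the system ships only a single modprobe@.service file.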
Feb 9 09:45:53.156391 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:45:53.156426 systemd[1]: Mounted media.mount. Feb 9 09:45:53.156458 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:45:53.156487 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:45:53.159450 systemd[1]: Mounted tmp.mount. Feb 9 09:45:53.159485 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:45:53.159516 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:45:53.159556 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:45:53.159586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:45:53.159619 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:45:53.159661 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:45:53.159698 systemd-journald[1507]: Journal started Feb 9 09:45:53.159805 systemd-journald[1507]: Runtime Journal (/run/log/journal/ec22c5c4f10fb54d0d1ffa2ebae01ad5) is 8.0M, max 75.4M, 67.4M free. Feb 9 09:45:49.097000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:45:49.180000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:45:49.181000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:45:49.181000 audit: BPF prog-id=10 op=LOAD Feb 9 09:45:49.181000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:45:49.181000 audit: BPF prog-id=11 op=LOAD Feb 9 09:45:49.181000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:45:49.285000 audit[1420]: AVC avc: denied { associate } for pid=1420 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:45:49.285000 audit[1420]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b2 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1403 pid=1420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:49.285000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:45:49.289000 audit[1420]: AVC avc: denied { associate } for pid=1420 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:45:49.289000 audit[1420]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1403 pid=1420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:49.289000 audit: CWD cwd="/" Feb 9 09:45:49.289000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
09:45:49.289000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:45:49.289000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:45:52.786000 audit: BPF prog-id=12 op=LOAD Feb 9 09:45:52.786000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:45:52.786000 audit: BPF prog-id=13 op=LOAD Feb 9 09:45:52.786000 audit: BPF prog-id=14 op=LOAD Feb 9 09:45:52.786000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:45:52.786000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:45:52.788000 audit: BPF prog-id=15 op=LOAD Feb 9 09:45:52.788000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:45:52.788000 audit: BPF prog-id=16 op=LOAD Feb 9 09:45:52.788000 audit: BPF prog-id=17 op=LOAD Feb 9 09:45:52.788000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:45:52.788000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:45:52.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:52.797000 audit: BPF prog-id=15 op=UNLOAD Feb 9 09:45:52.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:52.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.054000 audit: BPF prog-id=18 op=LOAD Feb 9 09:45:53.055000 audit: BPF prog-id=19 op=LOAD Feb 9 09:45:53.055000 audit: BPF prog-id=20 op=LOAD Feb 9 09:45:53.055000 audit: BPF prog-id=16 op=UNLOAD Feb 9 09:45:53.055000 audit: BPF prog-id=17 op=UNLOAD Feb 9 09:45:53.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:53.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.162727 systemd[1]: Finished modprobe@drm.service. Feb 9 09:45:53.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.147000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:45:53.147000 audit[1507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffab742c0 a2=4000 a3=1 items=0 ppid=1 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:53.147000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:45:53.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:49.282812 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:45:52.783596 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:45:49.284337 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:45:52.790666 systemd[1]: systemd-journald.service: Deactivated successfully. 
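The audit records interleaved above follow a fixed key=value layout, with SERVICE_START/SERVICE_STOP carrying the unit name inside the quoted msg field. A small sketch that pulls unit and result out of the "audit[1]: ..." form seen here; the "kernel: audit: type=1130 ..." variant would need a second pattern:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var svcRe = regexp.MustCompile(
        `audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*unit=([^ ]+) .*res=(\w+)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := svcRe.FindStringSubmatch(sc.Text()); m != nil {
                // e.g. "SERVICE_STOP  systemd-journald  res=success"
                fmt.Printf("%-13s %-40s res=%s\n", m[1], m[2], m[3])
            }
        }
    }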
Feb 9 09:45:49.284425 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:45:49.284495 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:45:49.284522 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:45:49.284587 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:45:49.284618 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:45:53.168804 systemd[1]: Started systemd-journald.service. Feb 9 09:45:53.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:49.285086 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:45:49.285177 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:45:49.285216 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:45:49.286073 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:45:53.169635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:45:53.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:49.286157 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:45:53.169912 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 09:45:49.286202 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:45:49.286242 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:45:53.173411 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:45:49.286287 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:45:53.173693 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:45:49.286325 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:45:52.021286 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:52.021832 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:53.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:52.022085 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:53.176533 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:45:53.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:45:52.022551 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:45:53.176811 systemd[1]: Finished modprobe@loop.service. Feb 9 09:45:52.022655 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:45:53.179134 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:45:52.022791 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-02-09T09:45:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:45:53.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.181912 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:45:53.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.184736 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:45:53.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.187568 systemd[1]: Reached target network-pre.target. Feb 9 09:45:53.192187 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:45:53.197037 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:45:53.203802 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:45:53.208168 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:45:53.212282 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:45:53.214597 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:45:53.219001 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:45:53.221325 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:45:53.223640 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:45:53.228409 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:45:53.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.233311 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:45:53.235832 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:45:53.240002 systemd[1]: Starting systemd-sysusers.service... 
Feb 9 09:45:53.261894 systemd-journald[1507]: Time spent on flushing to /var/log/journal/ec22c5c4f10fb54d0d1ffa2ebae01ad5 is 83.200ms for 1169 entries. Feb 9 09:45:53.261894 systemd-journald[1507]: System Journal (/var/log/journal/ec22c5c4f10fb54d0d1ffa2ebae01ad5) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:45:53.365509 systemd-journald[1507]: Received client request to flush runtime journal. Feb 9 09:45:53.365650 kernel: kauditd_printk_skb: 97 callbacks suppressed Feb 9 09:45:53.365696 kernel: audit: type=1130 audit(1707471953.294:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.365740 kernel: audit: type=1130 audit(1707471953.320:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.266924 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:45:53.269121 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:45:53.292846 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:45:53.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.319661 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:45:53.381741 kernel: audit: type=1130 audit(1707471953.368:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.331691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:45:53.367227 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:45:53.403696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:45:53.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.414362 kernel: audit: type=1130 audit(1707471953.404:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.465534 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 09:45:53.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.477911 kernel: audit: type=1130 audit(1707471953.466:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:53.476162 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:45:53.491232 udevadm[1541]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:45:54.105975 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:45:54.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.116000 audit: BPF prog-id=21 op=LOAD Feb 9 09:45:54.120677 kernel: audit: type=1130 audit(1707471954.108:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.120794 kernel: audit: type=1334 audit(1707471954.116:141): prog-id=21 op=LOAD Feb 9 09:45:54.119106 systemd[1]: Starting systemd-udevd.service... Feb 9 09:45:54.116000 audit: BPF prog-id=22 op=LOAD Feb 9 09:45:54.116000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:45:54.116000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:45:54.121423 kernel: audit: type=1334 audit(1707471954.116:142): prog-id=22 op=LOAD Feb 9 09:45:54.121471 kernel: audit: type=1334 audit(1707471954.116:143): prog-id=7 op=UNLOAD Feb 9 09:45:54.129382 kernel: audit: type=1334 audit(1707471954.116:144): prog-id=8 op=UNLOAD Feb 9 09:45:54.163494 systemd-udevd[1542]: Using default interface naming scheme 'v252'. Feb 9 09:45:54.195573 systemd[1]: Started systemd-udevd.service. Feb 9 09:45:54.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.199000 audit: BPF prog-id=23 op=LOAD Feb 9 09:45:54.202255 systemd[1]: Starting systemd-networkd.service... Feb 9 09:45:54.211000 audit: BPF prog-id=24 op=LOAD Feb 9 09:45:54.211000 audit: BPF prog-id=25 op=LOAD Feb 9 09:45:54.211000 audit: BPF prog-id=26 op=LOAD Feb 9 09:45:54.213926 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:45:54.293474 systemd[1]: Started systemd-userdbd.service. Feb 9 09:45:54.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.328718 (udev-worker)[1545]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:45:54.331687 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 09:45:54.421961 systemd-networkd[1548]: lo: Link UP Feb 9 09:45:54.421984 systemd-networkd[1548]: lo: Gained carrier Feb 9 09:45:54.422910 systemd-networkd[1548]: Enumeration completed Feb 9 09:45:54.423092 systemd[1]: Started systemd-networkd.service. 
Feb 9 09:45:54.423129 systemd-networkd[1548]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:45:54.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.429215 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:45:54.435496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:45:54.436648 systemd-networkd[1548]: eth0: Link UP Feb 9 09:45:54.436937 systemd-networkd[1548]: eth0: Gained carrier Feb 9 09:45:54.448627 systemd-networkd[1548]: eth0: DHCPv4 address 172.31.20.254/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 09:45:54.501391 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1577) Feb 9 09:45:54.651511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:45:54.677128 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:45:54.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.681383 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:45:54.700832 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:45:54.737905 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:45:54.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.739961 systemd[1]: Reached target cryptsetup.target. Feb 9 09:45:54.743883 systemd[1]: Starting lvm2-activation.service... Feb 9 09:45:54.752318 lvm[1662]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:45:54.787993 systemd[1]: Finished lvm2-activation.service. Feb 9 09:45:54.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.789978 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:45:54.791753 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:45:54.791820 systemd[1]: Reached target local-fs.target. Feb 9 09:45:54.793444 systemd[1]: Reached target machines.target. Feb 9 09:45:54.797282 systemd[1]: Starting ldconfig.service... Feb 9 09:45:54.799743 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:45:54.799869 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:54.802131 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:45:54.805783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:45:54.811198 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:45:54.813767 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 9 09:45:54.813884 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:45:54.817431 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:45:54.827720 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1664 (bootctl) Feb 9 09:45:54.829977 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:45:54.860692 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:45:54.863731 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:45:54.868318 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:45:54.876135 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:45:54.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.961491 systemd-fsck[1673]: fsck.fat 4.2 (2021-01-31) Feb 9 09:45:54.961491 systemd-fsck[1673]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 09:45:54.970466 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:45:54.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:54.975184 systemd[1]: Mounting boot.mount... Feb 9 09:45:54.998655 systemd[1]: Mounted boot.mount. Feb 9 09:45:55.033177 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:45:55.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.195923 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:45:55.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.200237 systemd[1]: Starting audit-rules.service... Feb 9 09:45:55.204906 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:45:55.210865 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:45:55.213000 audit: BPF prog-id=27 op=LOAD Feb 9 09:45:55.220000 audit: BPF prog-id=28 op=LOAD Feb 9 09:45:55.218949 systemd[1]: Starting systemd-resolved.service... Feb 9 09:45:55.225197 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:45:55.231215 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:45:55.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.246627 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:45:55.248821 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 9 09:45:55.265000 audit[1693]: SYSTEM_BOOT pid=1693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.270437 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:45:55.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.387327 systemd-resolved[1691]: Positive Trust Anchors: Feb 9 09:45:55.388193 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:45:55.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.390111 systemd[1]: Reached target time-set.target. Feb 9 09:45:55.393145 systemd-resolved[1691]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:45:55.393450 systemd-resolved[1691]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:45:55.403083 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:45:55.404116 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:45:55.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.415287 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:45:55.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:45:55.416000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:45:55.416000 audit[1710]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffea337250 a2=420 a3=0 items=0 ppid=1687 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:45:55.418706 augenrules[1710]: No rules Feb 9 09:45:55.416000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:45:55.421618 systemd[1]: Finished audit-rules.service. Feb 9 09:45:55.428026 systemd-resolved[1691]: Defaulting to hostname 'linux'. Feb 9 09:45:55.431957 systemd[1]: Started systemd-resolved.service. Feb 9 09:45:55.433848 systemd[1]: Reached target network.target. Feb 9 09:45:55.435545 systemd[1]: Reached target nss-lookup.target. Feb 9 09:45:55.473577 systemd-timesyncd[1692]: Contacted time server 159.203.82.102:123 (0.flatcar.pool.ntp.org). 
Feb 9 09:45:55.473804 systemd-timesyncd[1692]: Initial clock synchronization to Fri 2024-02-09 09:45:55.094470 UTC. Feb 9 09:45:55.479825 ldconfig[1663]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:45:55.485630 systemd[1]: Finished ldconfig.service. Feb 9 09:45:55.489611 systemd[1]: Starting systemd-update-done.service... Feb 9 09:45:55.503155 systemd[1]: Finished systemd-update-done.service. Feb 9 09:45:55.505125 systemd[1]: Reached target sysinit.target. Feb 9 09:45:55.506901 systemd[1]: Started motdgen.path. Feb 9 09:45:55.508653 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:45:55.511216 systemd[1]: Started logrotate.timer. Feb 9 09:45:55.513438 systemd[1]: Started mdadm.timer. Feb 9 09:45:55.514900 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:45:55.516742 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:45:55.516809 systemd[1]: Reached target paths.target. Feb 9 09:45:55.518410 systemd[1]: Reached target timers.target. Feb 9 09:45:55.520476 systemd[1]: Listening on dbus.socket. Feb 9 09:45:55.524198 systemd[1]: Starting docker.socket... Feb 9 09:45:55.533615 systemd[1]: Listening on sshd.socket. Feb 9 09:45:55.535268 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:55.536080 systemd[1]: Listening on docker.socket. Feb 9 09:45:55.537734 systemd[1]: Reached target sockets.target. Feb 9 09:45:55.539271 systemd[1]: Reached target basic.target. Feb 9 09:45:55.540818 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:45:55.540868 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:45:55.542831 systemd[1]: Starting containerd.service... Feb 9 09:45:55.547697 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 09:45:55.551795 systemd[1]: Starting dbus.service... Feb 9 09:45:55.557273 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:45:55.561650 systemd[1]: Starting extend-filesystems.service... Feb 9 09:45:55.564076 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:45:55.567645 systemd[1]: Starting motdgen.service... Feb 9 09:45:55.572669 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:45:55.576619 systemd[1]: Starting prepare-critools.service... Feb 9 09:45:55.584199 systemd[1]: Starting prepare-helm.service... Feb 9 09:45:55.595333 jq[1721]: false Feb 9 09:45:55.588235 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:45:55.595626 systemd[1]: Starting sshd-keygen.service... Feb 9 09:45:55.603706 systemd[1]: Starting systemd-logind.service... Feb 9 09:45:55.606589 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:45:55.606728 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:45:55.607704 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 9 09:45:55.609254 systemd[1]: Starting update-engine.service... Feb 9 09:45:55.615583 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:45:55.623083 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:45:55.623456 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:45:55.647132 jq[1739]: true Feb 9 09:45:55.656998 tar[1746]: ./ Feb 9 09:45:55.656998 tar[1746]: ./loopback Feb 9 09:45:55.679113 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:45:55.679499 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:45:55.690232 tar[1741]: linux-arm64/helm Feb 9 09:45:55.694140 tar[1745]: crictl Feb 9 09:45:55.708904 jq[1752]: true Feb 9 09:45:55.710064 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:45:55.710466 systemd[1]: Finished motdgen.service. Feb 9 09:45:55.739912 extend-filesystems[1722]: Found nvme0n1 Feb 9 09:45:55.743115 extend-filesystems[1722]: Found nvme0n1p1 Feb 9 09:45:55.746075 extend-filesystems[1722]: Found nvme0n1p2 Feb 9 09:45:55.748714 extend-filesystems[1722]: Found nvme0n1p3 Feb 9 09:45:55.750965 extend-filesystems[1722]: Found usr Feb 9 09:45:55.754465 dbus-daemon[1720]: [system] SELinux support is enabled Feb 9 09:45:55.754728 systemd[1]: Started dbus.service. Feb 9 09:45:55.759469 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:45:55.759533 systemd[1]: Reached target system-config.target. Feb 9 09:45:55.761429 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:45:55.761465 systemd[1]: Reached target user-config.target. Feb 9 09:45:55.763416 extend-filesystems[1722]: Found nvme0n1p4 Feb 9 09:45:55.774508 extend-filesystems[1722]: Found nvme0n1p6 Feb 9 09:45:55.776901 extend-filesystems[1722]: Found nvme0n1p7 Feb 9 09:45:55.778785 extend-filesystems[1722]: Found nvme0n1p9 Feb 9 09:45:55.780699 dbus-daemon[1720]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1548 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 09:45:55.781219 extend-filesystems[1722]: Checking size of /dev/nvme0n1p9 Feb 9 09:45:55.794019 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:45:55.799855 systemd[1]: Starting systemd-hostnamed.service... Feb 9 09:45:55.824699 update_engine[1735]: I0209 09:45:55.824324 1735 main.cc:92] Flatcar Update Engine starting Feb 9 09:45:55.830288 systemd[1]: Started update-engine.service. Feb 9 09:45:55.832211 update_engine[1735]: I0209 09:45:55.832173 1735 update_check_scheduler.cc:74] Next update check in 6m49s Feb 9 09:45:55.834948 systemd[1]: Started locksmithd.service. Feb 9 09:45:55.881164 extend-filesystems[1722]: Resized partition /dev/nvme0n1p9 Feb 9 09:45:55.887486 extend-filesystems[1780]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:45:55.896429 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 09:45:55.933550 systemd-networkd[1548]: eth0: Gained IPv6LL Feb 9 09:45:55.938339 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:45:55.940637 systemd[1]: Reached target network-online.target. 
Feb 9 09:45:55.944579 systemd[1]: Started amazon-ssm-agent.service. Feb 9 09:45:55.948778 systemd[1]: Started nvidia.service. Feb 9 09:45:56.011599 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 09:45:56.057464 extend-filesystems[1780]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 09:45:56.057464 extend-filesystems[1780]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:45:56.057464 extend-filesystems[1780]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 09:45:56.100304 extend-filesystems[1722]: Resized filesystem in /dev/nvme0n1p9 Feb 9 09:45:56.103103 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:45:56.103475 systemd[1]: Finished extend-filesystems.service. Feb 9 09:45:56.112025 bash[1781]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:45:56.113682 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:45:56.174395 systemd-logind[1732]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:45:56.178884 systemd-logind[1732]: New seat seat0. Feb 9 09:45:56.188078 tar[1746]: ./bandwidth Feb 9 09:45:56.195851 systemd[1]: Started systemd-logind.service. Feb 9 09:45:56.244085 env[1749]: time="2024-02-09T09:45:56.243993470Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:45:56.398777 amazon-ssm-agent[1789]: 2024/02/09 09:45:56 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 09:45:56.419971 amazon-ssm-agent[1789]: Initializing new seelog logger Feb 9 09:45:56.420181 amazon-ssm-agent[1789]: New Seelog Logger Creation Complete Feb 9 09:45:56.420301 amazon-ssm-agent[1789]: 2024/02/09 09:45:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:45:56.420301 amazon-ssm-agent[1789]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:45:56.420660 amazon-ssm-agent[1789]: 2024/02/09 09:45:56 processing appconfig overrides Feb 9 09:45:56.492439 env[1749]: time="2024-02-09T09:45:56.492384339Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:45:56.494668 env[1749]: time="2024-02-09T09:45:56.494619721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:56.497695 env[1749]: time="2024-02-09T09:45:56.497610789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:56.500770 env[1749]: time="2024-02-09T09:45:56.500694189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:56.502640 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:45:56.503290 env[1749]: time="2024-02-09T09:45:56.503241128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:56.503734 env[1749]: time="2024-02-09T09:45:56.503685806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:45:56.504483 env[1749]: time="2024-02-09T09:45:56.504432815Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:45:56.505320 env[1749]: time="2024-02-09T09:45:56.505283155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:56.506527 env[1749]: time="2024-02-09T09:45:56.506488709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:56.512300 env[1749]: time="2024-02-09T09:45:56.512246808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:45:56.512893 env[1749]: time="2024-02-09T09:45:56.512854158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:45:56.515715 env[1749]: time="2024-02-09T09:45:56.515653963Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:45:56.516063 env[1749]: time="2024-02-09T09:45:56.516031938Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:45:56.519481 env[1749]: time="2024-02-09T09:45:56.519409908Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:45:56.527547 env[1749]: time="2024-02-09T09:45:56.527492761Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:45:56.527810 env[1749]: time="2024-02-09T09:45:56.527777293Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:45:56.528057 env[1749]: time="2024-02-09T09:45:56.528027874Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:45:56.528380 env[1749]: time="2024-02-09T09:45:56.528300586Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.528512 env[1749]: time="2024-02-09T09:45:56.528482566Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.528632 env[1749]: time="2024-02-09T09:45:56.528603833Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.528774 env[1749]: time="2024-02-09T09:45:56.528745847Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.529458 env[1749]: time="2024-02-09T09:45:56.529409406Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.529669 env[1749]: time="2024-02-09T09:45:56.529629007Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.529804 env[1749]: time="2024-02-09T09:45:56.529774097Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.529933 env[1749]: time="2024-02-09T09:45:56.529902909Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 9 09:45:56.530054 env[1749]: time="2024-02-09T09:45:56.530025867Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:45:56.530670 env[1749]: time="2024-02-09T09:45:56.530630930Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:45:56.531112 env[1749]: time="2024-02-09T09:45:56.531082193Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:45:56.531925 env[1749]: time="2024-02-09T09:45:56.531868687Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:45:56.532039 env[1749]: time="2024-02-09T09:45:56.531941747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532039 env[1749]: time="2024-02-09T09:45:56.531975058Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:45:56.532138 env[1749]: time="2024-02-09T09:45:56.532076594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532138 env[1749]: time="2024-02-09T09:45:56.532106933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532244 env[1749]: time="2024-02-09T09:45:56.532136701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532244 env[1749]: time="2024-02-09T09:45:56.532166972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532244 env[1749]: time="2024-02-09T09:45:56.532197049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532244 env[1749]: time="2024-02-09T09:45:56.532227365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532465 env[1749]: time="2024-02-09T09:45:56.532254687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532465 env[1749]: time="2024-02-09T09:45:56.532282100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532465 env[1749]: time="2024-02-09T09:45:56.532325894Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:45:56.532652 env[1749]: time="2024-02-09T09:45:56.532610895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532713 env[1749]: time="2024-02-09T09:45:56.532655822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532713 env[1749]: time="2024-02-09T09:45:56.532687144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.532818 env[1749]: time="2024-02-09T09:45:56.532716798Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:45:56.532818 env[1749]: time="2024-02-09T09:45:56.532750304Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:45:56.532818 env[1749]: time="2024-02-09T09:45:56.532778391Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:45:56.532983 env[1749]: time="2024-02-09T09:45:56.532814504Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:45:56.532983 env[1749]: time="2024-02-09T09:45:56.532875891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:45:56.533290 env[1749]: time="2024-02-09T09:45:56.533191735Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:45:56.534323 env[1749]: time="2024-02-09T09:45:56.533303388Z" level=info msg="Connect containerd service" Feb 9 09:45:56.534323 env[1749]: time="2024-02-09T09:45:56.533395241Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:45:56.534571 env[1749]: time="2024-02-09T09:45:56.534502860Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:45:56.534984 env[1749]: time="2024-02-09T09:45:56.534936564Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 09:45:56.535071 env[1749]: time="2024-02-09T09:45:56.535038465Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:45:56.535231 systemd[1]: Started containerd.service. Feb 9 09:45:56.537862 env[1749]: time="2024-02-09T09:45:56.537800329Z" level=info msg="containerd successfully booted in 0.487197s" Feb 9 09:45:56.556488 env[1749]: time="2024-02-09T09:45:56.556410957Z" level=info msg="Start subscribing containerd event" Feb 9 09:45:56.556638 env[1749]: time="2024-02-09T09:45:56.556502170Z" level=info msg="Start recovering state" Feb 9 09:45:56.556638 env[1749]: time="2024-02-09T09:45:56.556614988Z" level=info msg="Start event monitor" Feb 9 09:45:56.556741 env[1749]: time="2024-02-09T09:45:56.556651432Z" level=info msg="Start snapshots syncer" Feb 9 09:45:56.556741 env[1749]: time="2024-02-09T09:45:56.556675119Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:45:56.556741 env[1749]: time="2024-02-09T09:45:56.556693615Z" level=info msg="Start streaming server" Feb 9 09:45:56.572218 tar[1746]: ./ptp Feb 9 09:45:56.574141 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 09:45:56.574397 systemd[1]: Started systemd-hostnamed.service. Feb 9 09:45:56.577651 dbus-daemon[1720]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1764 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 09:45:56.581997 systemd[1]: Starting polkit.service... Feb 9 09:45:56.603272 systemd[1]: Created slice system-sshd.slice. Feb 9 09:45:56.639100 polkitd[1859]: Started polkitd version 121 Feb 9 09:45:56.681832 polkitd[1859]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 09:45:56.683757 polkitd[1859]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 09:45:56.689718 polkitd[1859]: Finished loading, compiling and executing 2 rules Feb 9 09:45:56.696504 dbus-daemon[1720]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 09:45:56.696768 systemd[1]: Started polkit.service. Feb 9 09:45:56.699255 polkitd[1859]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 09:45:56.754070 systemd-hostnamed[1764]: Hostname set to (transient) Feb 9 09:45:56.754227 systemd-resolved[1691]: System hostname changed to 'ip-172-31-20-254'. Feb 9 09:45:56.820976 tar[1746]: ./vlan Feb 9 09:45:56.858116 coreos-metadata[1719]: Feb 09 09:45:56.857 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 09:45:56.860243 coreos-metadata[1719]: Feb 09 09:45:56.858 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 09:45:56.860682 coreos-metadata[1719]: Feb 09 09:45:56.860 INFO Fetch successful Feb 9 09:45:56.861617 coreos-metadata[1719]: Feb 09 09:45:56.860 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 09:45:56.863486 coreos-metadata[1719]: Feb 09 09:45:56.861 INFO Fetch successful Feb 9 09:45:56.872058 unknown[1719]: wrote ssh authorized keys file for user: core Feb 9 09:45:56.890865 update-ssh-keys[1898]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:45:56.891858 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Feb 9 09:45:57.060967 tar[1746]: ./host-device Feb 9 09:45:57.244948 tar[1746]: ./tuning Feb 9 09:45:57.305412 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Create new startup processor Feb 9 09:45:57.312207 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 09:45:57.312207 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing bookkeeping folders Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO removing the completed state files Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing bookkeeping folders for long running plugins Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing healthcheck folders for long running plugins Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing locations for inventory plugin Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing default location for custom inventory Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing default location for file inventory Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Initializing default location for role inventory Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Init the cloudwatchlogs publisher Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:configureDocker Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:runDocument Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:downloadContent Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform independent plugin aws:configurePackage Feb 9 09:45:57.313003 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 09:45:57.314111 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 09:45:57.314178 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO OS: linux, Arch: arm64 Feb 9 09:45:57.316153 amazon-ssm-agent[1789]: datastore file 
/var/lib/amazon/ssm/i-0745313eede253edb/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 09:45:57.389660 tar[1746]: ./vrf Feb 9 09:45:57.404690 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 09:45:57.499404 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 09:45:57.503607 tar[1746]: ./sbr Feb 9 09:45:57.593800 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 09:45:57.610619 tar[1746]: ./tap Feb 9 09:45:57.688209 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] Starting message polling Feb 9 09:45:57.734480 tar[1746]: ./dhcp Feb 9 09:45:57.782927 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 09:45:57.851018 tar[1741]: linux-arm64/LICENSE Feb 9 09:45:57.854865 tar[1741]: linux-arm64/README.md Feb 9 09:45:57.866950 systemd[1]: Finished prepare-helm.service. Feb 9 09:45:57.877831 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [instanceID=i-0745313eede253edb] Starting association polling Feb 9 09:45:57.958649 tar[1746]: ./static Feb 9 09:45:57.972951 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 09:45:58.032004 tar[1746]: ./firewall Feb 9 09:45:58.039559 systemd[1]: Finished prepare-critools.service. Feb 9 09:45:58.068233 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 09:45:58.103820 tar[1746]: ./macvlan Feb 9 09:45:58.163745 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 09:45:58.165282 tar[1746]: ./dummy Feb 9 09:45:58.225660 tar[1746]: ./bridge Feb 9 09:45:58.259429 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 09:45:58.291643 tar[1746]: ./ipvlan Feb 9 09:45:58.351602 tar[1746]: ./portmap Feb 9 09:45:58.355392 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 09:45:58.409265 tar[1746]: ./host-local Feb 9 09:45:58.451368 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 09:45:58.481944 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:45:58.547677 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 09:45:58.557733 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:45:58.644155 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 09:45:58.740797 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0745313eede253edb, requestId: 9bdcb919-4efc-4823-8478-0a1608a66a6a Feb 9 09:45:58.837569 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [OfflineService] Starting document processing engine... 
Feb 9 09:45:58.934724 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [OfflineService] [EngineProcessor] Starting Feb 9 09:45:59.032023 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 09:45:59.129388 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [OfflineService] Starting message polling Feb 9 09:45:59.227139 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [OfflineService] Starting send replies to MDS Feb 9 09:45:59.324919 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 09:45:59.422890 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 09:45:59.521213 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 09:45:59.620119 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 09:45:59.718696 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] listening reply. Feb 9 09:45:59.817598 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [StartupProcessor] Executing startup processor tasks Feb 9 09:45:59.916664 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 09:46:00.015839 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 09:46:00.115218 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 09:46:00.156671 sshd_keygen[1754]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:46:00.191805 systemd[1]: Finished sshd-keygen.service. Feb 9 09:46:00.196230 systemd[1]: Starting issuegen.service... Feb 9 09:46:00.199990 systemd[1]: Started sshd@0-172.31.20.254:22-139.178.89.65:52268.service. Feb 9 09:46:00.212903 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:46:00.213253 systemd[1]: Finished issuegen.service. Feb 9 09:46:00.215221 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0745313eede253edb?role=subscribe&stream=input Feb 9 09:46:00.218250 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:46:00.233556 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:46:00.237981 systemd[1]: Started getty@tty1.service. Feb 9 09:46:00.243155 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 09:46:00.245289 systemd[1]: Reached target getty.target. Feb 9 09:46:00.247156 systemd[1]: Reached target multi-user.target. Feb 9 09:46:00.251537 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:46:00.267796 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:46:00.268158 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:46:00.270295 systemd[1]: Startup finished in 1.145s (kernel) + 35.435s (initrd) + 11.243s (userspace) = 47.823s. 
Feb 9 09:46:00.314830 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0745313eede253edb?role=subscribe&stream=input Feb 9 09:46:00.404277 sshd[1930]: Accepted publickey for core from 139.178.89.65 port 52268 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:46:00.409691 sshd[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:00.414872 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 09:46:00.430693 systemd[1]: Created slice user-500.slice. Feb 9 09:46:00.432948 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:46:00.440460 systemd-logind[1732]: New session 1 of user core. Feb 9 09:46:00.452522 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:46:00.456705 systemd[1]: Starting user@500.service... Feb 9 09:46:00.463848 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:00.515037 amazon-ssm-agent[1789]: 2024-02-09 09:45:57 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 09:46:00.640405 systemd[1939]: Queued start job for default target default.target. Feb 9 09:46:00.641867 systemd[1939]: Reached target paths.target. Feb 9 09:46:00.642087 systemd[1939]: Reached target sockets.target. Feb 9 09:46:00.642239 systemd[1939]: Reached target timers.target. Feb 9 09:46:00.642601 systemd[1939]: Reached target basic.target. Feb 9 09:46:00.642851 systemd[1939]: Reached target default.target. Feb 9 09:46:00.642938 systemd[1]: Started user@500.service. Feb 9 09:46:00.643127 systemd[1939]: Startup finished in 168ms. Feb 9 09:46:00.647604 systemd[1]: Started session-1.scope. Feb 9 09:46:00.798804 systemd[1]: Started sshd@1-172.31.20.254:22-139.178.89.65:35610.service. Feb 9 09:46:00.961081 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 35610 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:46:00.963525 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:00.972326 systemd[1]: Started session-2.scope. Feb 9 09:46:00.973248 systemd-logind[1732]: New session 2 of user core. Feb 9 09:46:01.103937 sshd[1948]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:01.109473 systemd-logind[1732]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:46:01.110269 systemd[1]: sshd@1-172.31.20.254:22-139.178.89.65:35610.service: Deactivated successfully. Feb 9 09:46:01.111661 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:46:01.112911 systemd-logind[1732]: Removed session 2. Feb 9 09:46:01.130434 systemd[1]: Started sshd@2-172.31.20.254:22-139.178.89.65:35626.service. Feb 9 09:46:01.303031 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 35626 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:46:01.306018 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:01.313438 systemd-logind[1732]: New session 3 of user core. Feb 9 09:46:01.314541 systemd[1]: Started session-3.scope. Feb 9 09:46:01.439960 sshd[1954]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:01.445010 systemd-logind[1732]: Session 3 logged out. Waiting for processes to exit. 
Feb 9 09:46:01.445494 systemd[1]: sshd@2-172.31.20.254:22-139.178.89.65:35626.service: Deactivated successfully. Feb 9 09:46:01.446701 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:46:01.448572 systemd-logind[1732]: Removed session 3. Feb 9 09:46:01.469206 systemd[1]: Started sshd@3-172.31.20.254:22-139.178.89.65:35632.service. Feb 9 09:46:01.640840 sshd[1960]: Accepted publickey for core from 139.178.89.65 port 35632 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:46:01.643253 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:01.651438 systemd-logind[1732]: New session 4 of user core. Feb 9 09:46:01.651954 systemd[1]: Started session-4.scope. Feb 9 09:46:01.781237 sshd[1960]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:01.786210 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:46:01.787252 systemd[1]: sshd@3-172.31.20.254:22-139.178.89.65:35632.service: Deactivated successfully. Feb 9 09:46:01.788771 systemd-logind[1732]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:46:01.790287 systemd-logind[1732]: Removed session 4. Feb 9 09:46:01.808118 systemd[1]: Started sshd@4-172.31.20.254:22-139.178.89.65:35646.service. Feb 9 09:46:01.975201 sshd[1966]: Accepted publickey for core from 139.178.89.65 port 35646 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:46:01.978298 sshd[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:46:01.985620 systemd-logind[1732]: New session 5 of user core. Feb 9 09:46:01.986509 systemd[1]: Started session-5.scope. Feb 9 09:46:02.107968 sudo[1969]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:46:02.109102 sudo[1969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:46:02.779807 systemd[1]: Starting docker.service... Feb 9 09:46:02.859901 env[1984]: time="2024-02-09T09:46:02.859810775Z" level=info msg="Starting up" Feb 9 09:46:02.862195 env[1984]: time="2024-02-09T09:46:02.862150196Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:46:02.862407 env[1984]: time="2024-02-09T09:46:02.862358726Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:46:02.862533 env[1984]: time="2024-02-09T09:46:02.862501709Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:46:02.862636 env[1984]: time="2024-02-09T09:46:02.862610114Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:46:02.865894 env[1984]: time="2024-02-09T09:46:02.865830533Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:46:02.865894 env[1984]: time="2024-02-09T09:46:02.865875469Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:46:02.866203 env[1984]: time="2024-02-09T09:46:02.865920863Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:46:02.866203 env[1984]: time="2024-02-09T09:46:02.865943977Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:46:02.876596 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport973827879-merged.mount: Deactivated successfully. 
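The docker.service startup beginning here (grpc resolver chatter, overlayfs support probe) can be cross-checked once the daemon reports ready; a sketch assuming the stock /run/docker.sock socket path, with expected values taken from the daemon's own initialization lines that follow:

docker info --format '{{.Driver}} {{.ServerVersion}}'        # expect: overlay2 20.10.23
curl --unix-socket /run/docker.sock http://localhost/_ping   # expect: OK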
Feb 9 09:46:03.098269 env[1984]: time="2024-02-09T09:46:03.097527347Z" level=info msg="Loading containers: start." Feb 9 09:46:03.267787 kernel: Initializing XFRM netlink socket Feb 9 09:46:03.310966 env[1984]: time="2024-02-09T09:46:03.310921308Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:46:03.313079 (udev-worker)[1995]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:46:03.414318 systemd-networkd[1548]: docker0: Link UP Feb 9 09:46:03.447883 env[1984]: time="2024-02-09T09:46:03.447817754Z" level=info msg="Loading containers: done." Feb 9 09:46:03.466959 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1088496815-merged.mount: Deactivated successfully. Feb 9 09:46:03.479992 env[1984]: time="2024-02-09T09:46:03.479936661Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:46:03.480560 env[1984]: time="2024-02-09T09:46:03.480529658Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:46:03.480848 env[1984]: time="2024-02-09T09:46:03.480821428Z" level=info msg="Daemon has completed initialization" Feb 9 09:46:03.505386 systemd[1]: Started docker.service. Feb 9 09:46:03.519043 env[1984]: time="2024-02-09T09:46:03.518945157Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:46:03.553278 systemd[1]: Reloading. Feb 9 09:46:03.678482 /usr/lib/systemd/system-generators/torcx-generator[2120]: time="2024-02-09T09:46:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:03.686780 /usr/lib/systemd/system-generators/torcx-generator[2120]: time="2024-02-09T09:46:03Z" level=info msg="torcx already run" Feb 9 09:46:03.853244 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:03.853783 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:03.896718 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:04.090006 systemd[1]: Started kubelet.service. Feb 9 09:46:04.225800 kubelet[2175]: E0209 09:46:04.225700 2175 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:46:04.229852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:46:04.230151 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
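The kubelet crash loop that begins here is the expected pre-bootstrap state: the unit starts before /var/lib/kubelet/config.yaml exists, and that file is normally only written by kubeadm init/join (an assumption about the provisioning flow; kubeadm itself never appears in this log). A quick way to confirm the diagnosis:

test -f /var/lib/kubelet/config.yaml || echo "kubelet config not written yet"
journalctl -u kubelet -n 5 --no-pager    # reprints the run.go:74 config-load error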
Feb 9 09:46:04.776260 env[1749]: time="2024-02-09T09:46:04.776198515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 09:46:05.394853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290223636.mount: Deactivated successfully. Feb 9 09:46:08.633890 env[1749]: time="2024-02-09T09:46:08.633811152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:08.636950 env[1749]: time="2024-02-09T09:46:08.636889084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:08.640186 env[1749]: time="2024-02-09T09:46:08.640133937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:08.643212 env[1749]: time="2024-02-09T09:46:08.643165706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:08.644768 env[1749]: time="2024-02-09T09:46:08.644723279Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\"" Feb 9 09:46:08.660641 env[1749]: time="2024-02-09T09:46:08.660591131Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 09:46:11.783535 env[1749]: time="2024-02-09T09:46:11.783460802Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:11.787569 env[1749]: time="2024-02-09T09:46:11.787505938Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:11.792561 env[1749]: time="2024-02-09T09:46:11.792495316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:11.797019 env[1749]: time="2024-02-09T09:46:11.796958221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:11.798715 env[1749]: time="2024-02-09T09:46:11.798669346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\"" Feb 9 09:46:11.819203 env[1749]: time="2024-02-09T09:46:11.819154942Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 09:46:12.627279 amazon-ssm-agent[1789]: 2024-02-09 09:46:12 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 9 09:46:13.597194 env[1749]: time="2024-02-09T09:46:13.597125935Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.600088 env[1749]: time="2024-02-09T09:46:13.600028395Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.603362 env[1749]: time="2024-02-09T09:46:13.603291075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.606693 env[1749]: time="2024-02-09T09:46:13.606632125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:13.608418 env[1749]: time="2024-02-09T09:46:13.608324587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\"" Feb 9 09:46:13.625183 env[1749]: time="2024-02-09T09:46:13.625107148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 09:46:14.346103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:46:14.346405 systemd[1]: Stopped kubelet.service. Feb 9 09:46:14.350122 systemd[1]: Started kubelet.service. Feb 9 09:46:14.461864 kubelet[2209]: E0209 09:46:14.461755 2209 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:46:14.469947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:46:14.470256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:46:15.061069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586357489.mount: Deactivated successfully. 
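The "Scheduled restart job, restart counter is at N" lines, spaced almost exactly ten seconds apart (09:46:04, 09:46:14, 09:46:24), are consistent with the Restart=always / RestartSec=10 drop-in that kubeadm conventionally ships, though the unit file itself is not shown here. The live policy can be read back rather than guessed:

systemctl show kubelet -p Restart -p RestartUSec -p NRestarts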
Feb 9 09:46:15.802741 env[1749]: time="2024-02-09T09:46:15.802685039Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:15.806978 env[1749]: time="2024-02-09T09:46:15.806928314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:15.809169 env[1749]: time="2024-02-09T09:46:15.809125376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:15.812477 env[1749]: time="2024-02-09T09:46:15.812430957Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:15.813138 env[1749]: time="2024-02-09T09:46:15.813099579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\"" Feb 9 09:46:15.830016 env[1749]: time="2024-02-09T09:46:15.829958603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:46:16.378166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446144106.mount: Deactivated successfully. Feb 9 09:46:16.387961 env[1749]: time="2024-02-09T09:46:16.387908038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:16.391511 env[1749]: time="2024-02-09T09:46:16.391451372Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:16.395022 env[1749]: time="2024-02-09T09:46:16.394962592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:16.398135 env[1749]: time="2024-02-09T09:46:16.398085772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:16.398657 env[1749]: time="2024-02-09T09:46:16.398614122Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:46:16.415242 env[1749]: time="2024-02-09T09:46:16.415187426Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 09:46:17.481905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146471629.mount: Deactivated successfully. 
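The PullImage / ImageCreate events above are containerd's view of the control-plane image downloads. The same state can be listed from the CRI side, assuming crictl is present and pointed at the default containerd socket:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
ctr --namespace k8s.io images ls | grep registry.k8s.io/pause   # pause:3.9 pulled just above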
Feb 9 09:46:20.778611 env[1749]: time="2024-02-09T09:46:20.778330238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.790617 env[1749]: time="2024-02-09T09:46:20.790557760Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.794519 env[1749]: time="2024-02-09T09:46:20.794455009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.797393 env[1749]: time="2024-02-09T09:46:20.797313929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:20.798684 env[1749]: time="2024-02-09T09:46:20.798638111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\"" Feb 9 09:46:20.814928 env[1749]: time="2024-02-09T09:46:20.814868519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 09:46:21.343367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142133177.mount: Deactivated successfully. Feb 9 09:46:22.557833 env[1749]: time="2024-02-09T09:46:22.557762376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:22.562501 env[1749]: time="2024-02-09T09:46:22.562436845Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:22.566588 env[1749]: time="2024-02-09T09:46:22.566536329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:22.570600 env[1749]: time="2024-02-09T09:46:22.570534969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:22.571958 env[1749]: time="2024-02-09T09:46:22.571884373Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 9 09:46:24.626675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:46:24.627050 systemd[1]: Stopped kubelet.service. Feb 9 09:46:24.630123 systemd[1]: Started kubelet.service. 
Feb 9 09:46:24.736425 kubelet[2295]: E0209 09:46:24.736302 2295 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 09:46:24.740020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:46:24.740382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:46:26.766154 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 09:46:28.264732 systemd[1]: Stopped kubelet.service. Feb 9 09:46:28.295145 systemd[1]: Reloading. Feb 9 09:46:28.415775 /usr/lib/systemd/system-generators/torcx-generator[2327]: time="2024-02-09T09:46:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:28.415840 /usr/lib/systemd/system-generators/torcx-generator[2327]: time="2024-02-09T09:46:28Z" level=info msg="torcx already run" Feb 9 09:46:28.592462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:28.593154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:28.635945 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:28.851509 systemd[1]: Started kubelet.service. Feb 9 09:46:28.955907 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:28.955907 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 09:46:28.956522 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
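The deprecation warnings printed at kubelet startup below all point the same way: in v1.27 these flags are meant to move into the config file (containerRuntimeEndpoint, for instance, gained a KubeletConfiguration field in this release; that mapping is an upstream detail, not shown in the log). A sketch for listing which of these flags the running kubelet was actually handed:

ps -o args= -C kubelet | tr ' ' '\n' | grep -E -- '^--(config|container-runtime-endpoint|volume-plugin-dir)'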
Feb 9 09:46:28.956522 kubelet[2382]: I0209 09:46:28.956041 2382 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:46:29.945696 kubelet[2382]: I0209 09:46:29.945653 2382 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 09:46:29.945948 kubelet[2382]: I0209 09:46:29.945908 2382 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:46:29.946661 kubelet[2382]: I0209 09:46:29.946599 2382 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 09:46:29.955910 kubelet[2382]: E0209 09:46:29.955864 2382 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:29.956235 kubelet[2382]: I0209 09:46:29.956198 2382 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:46:29.960072 kubelet[2382]: W0209 09:46:29.959992 2382 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:46:29.961956 kubelet[2382]: I0209 09:46:29.961883 2382 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:46:29.962574 kubelet[2382]: I0209 09:46:29.962498 2382 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:46:29.962727 kubelet[2382]: I0209 09:46:29.962673 2382 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:46:29.962727 kubelet[2382]: I0209 09:46:29.962720 2382 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:46:29.962968 kubelet[2382]: I0209 09:46:29.962747 2382 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 09:46:29.962968 kubelet[2382]: I0209 09:46:29.962938 2382 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:29.969285 kubelet[2382]: I0209 09:46:29.969241 2382 kubelet.go:405] "Attempting to sync node with API server" Feb 9 
09:46:29.969285 kubelet[2382]: I0209 09:46:29.969291 2382 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:46:29.969540 kubelet[2382]: I0209 09:46:29.969359 2382 kubelet.go:309] "Adding apiserver pod source" Feb 9 09:46:29.969540 kubelet[2382]: I0209 09:46:29.969388 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:46:29.971611 kubelet[2382]: I0209 09:46:29.971551 2382 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:46:29.972135 kubelet[2382]: W0209 09:46:29.972081 2382 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:46:29.973234 kubelet[2382]: I0209 09:46:29.973161 2382 server.go:1168] "Started kubelet" Feb 9 09:46:29.973517 kubelet[2382]: W0209 09:46:29.973431 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.20.254:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:29.973650 kubelet[2382]: E0209 09:46:29.973532 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.254:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:29.973736 kubelet[2382]: W0209 09:46:29.973674 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.20.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-254&limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:29.973808 kubelet[2382]: E0209 09:46:29.973735 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-254&limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:29.979545 kubelet[2382]: I0209 09:46:29.979503 2382 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:46:29.980272 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 09:46:29.981158 kubelet[2382]: I0209 09:46:29.981125 2382 server.go:461] "Adding debug handlers to kubelet server" Feb 9 09:46:29.981598 kubelet[2382]: E0209 09:46:29.981433 2382 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-20-254.17b228ba7246ec8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-20-254", UID:"ip-172-31-20-254", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-20-254"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 46, 29, 973118091, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 46, 29, 973118091, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.20.254:6443/api/v1/namespaces/default/events": dial tcp 172.31.20.254:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:46:29.983911 kubelet[2382]: I0209 09:46:29.981259 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:46:29.987537 kubelet[2382]: E0209 09:46:29.987478 2382 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:46:29.987537 kubelet[2382]: E0209 09:46:29.987543 2382 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:46:29.990670 kubelet[2382]: I0209 09:46:29.990626 2382 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:46:29.991045 kubelet[2382]: I0209 09:46:29.991003 2382 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 09:46:29.991207 kubelet[2382]: I0209 09:46:29.991172 2382 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 09:46:29.991849 kubelet[2382]: W0209 09:46:29.991765 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.20.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:29.992003 kubelet[2382]: E0209 09:46:29.991852 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:29.992333 kubelet[2382]: E0209 09:46:29.992287 2382 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-254?timeout=10s\": dial tcp 172.31.20.254:6443: connect: connection refused" interval="200ms" Feb 9 09:46:30.043133 kubelet[2382]: I0209 09:46:30.043093 2382 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:46:30.049537 kubelet[2382]: I0209 09:46:30.049502 2382 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:46:30.049784 kubelet[2382]: I0209 09:46:30.049761 2382 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 09:46:30.049933 kubelet[2382]: I0209 09:46:30.049912 2382 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 09:46:30.050174 kubelet[2382]: E0209 09:46:30.050152 2382 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:46:30.052684 kubelet[2382]: W0209 09:46:30.052614 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.20.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:30.052928 kubelet[2382]: E0209 09:46:30.052905 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:30.059323 kubelet[2382]: I0209 09:46:30.059284 2382 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:46:30.059617 kubelet[2382]: I0209 09:46:30.059589 2382 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:46:30.059803 kubelet[2382]: I0209 09:46:30.059779 2382 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:30.062861 kubelet[2382]: I0209 09:46:30.062820 2382 policy_none.go:49] "None policy: Start" Feb 9 09:46:30.064688 kubelet[2382]: I0209 09:46:30.064652 2382 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:46:30.065206 
kubelet[2382]: I0209 09:46:30.065174 2382 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:46:30.075937 systemd[1]: Created slice kubepods.slice. Feb 9 09:46:30.087191 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:46:30.093095 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:46:30.096657 kubelet[2382]: I0209 09:46:30.096621 2382 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-254" Feb 9 09:46:30.098423 kubelet[2382]: E0209 09:46:30.098293 2382 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.254:6443/api/v1/nodes\": dial tcp 172.31.20.254:6443: connect: connection refused" node="ip-172-31-20-254" Feb 9 09:46:30.101921 kubelet[2382]: I0209 09:46:30.101871 2382 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:46:30.102319 kubelet[2382]: I0209 09:46:30.102286 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:46:30.107724 kubelet[2382]: E0209 09:46:30.107672 2382 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-254\" not found" Feb 9 09:46:30.150725 kubelet[2382]: I0209 09:46:30.150666 2382 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:30.152909 kubelet[2382]: I0209 09:46:30.152866 2382 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:30.156013 kubelet[2382]: I0209 09:46:30.155952 2382 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:30.166679 systemd[1]: Created slice kubepods-burstable-pod5fdab915e7f3f5411ab8c4b29ba5a471.slice. Feb 9 09:46:30.182074 systemd[1]: Created slice kubepods-burstable-podb209577a7f1bcb3f74c8952110fcc01b.slice. 
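The kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice units created above are the kubelet's QoS cgroup tree; the container-manager dump earlier shows CgroupDriver:systemd, so these are real systemd slices. While the kubelet is running they can be walked directly:

systemd-cgls --no-pager /kubepods.slice | head -n 20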
Feb 9 09:46:30.194109 kubelet[2382]: I0209 09:46:30.194029 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:30.194310 kubelet[2382]: I0209 09:46:30.194121 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:30.194310 kubelet[2382]: I0209 09:46:30.194177 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:30.194310 kubelet[2382]: I0209 09:46:30.194259 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e555a60c7ddf3d4ffd1c15bc9fc128da-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-254\" (UID: \"e555a60c7ddf3d4ffd1c15bc9fc128da\") " pod="kube-system/kube-scheduler-ip-172-31-20-254" Feb 9 09:46:30.194310 kubelet[2382]: I0209 09:46:30.194307 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fdab915e7f3f5411ab8c4b29ba5a471-ca-certs\") pod \"kube-apiserver-ip-172-31-20-254\" (UID: \"5fdab915e7f3f5411ab8c4b29ba5a471\") " pod="kube-system/kube-apiserver-ip-172-31-20-254" Feb 9 09:46:30.194622 kubelet[2382]: I0209 09:46:30.194405 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fdab915e7f3f5411ab8c4b29ba5a471-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-254\" (UID: \"5fdab915e7f3f5411ab8c4b29ba5a471\") " pod="kube-system/kube-apiserver-ip-172-31-20-254" Feb 9 09:46:30.194622 kubelet[2382]: I0209 09:46:30.194462 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fdab915e7f3f5411ab8c4b29ba5a471-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-254\" (UID: \"5fdab915e7f3f5411ab8c4b29ba5a471\") " pod="kube-system/kube-apiserver-ip-172-31-20-254" Feb 9 09:46:30.194622 kubelet[2382]: I0209 09:46:30.194506 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:30.194622 kubelet[2382]: I0209 09:46:30.194558 2382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:30.195065 kubelet[2382]: E0209 09:46:30.195019 2382 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-254?timeout=10s\": dial tcp 172.31.20.254:6443: connect: connection refused" interval="400ms" Feb 9 09:46:30.201081 systemd[1]: Created slice kubepods-burstable-pode555a60c7ddf3d4ffd1c15bc9fc128da.slice. Feb 9 09:46:30.301144 kubelet[2382]: I0209 09:46:30.301111 2382 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-254" Feb 9 09:46:30.301930 kubelet[2382]: E0209 09:46:30.301862 2382 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.254:6443/api/v1/nodes\": dial tcp 172.31.20.254:6443: connect: connection refused" node="ip-172-31-20-254" Feb 9 09:46:30.479448 env[1749]: time="2024-02-09T09:46:30.478760144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-254,Uid:5fdab915e7f3f5411ab8c4b29ba5a471,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:30.489430 env[1749]: time="2024-02-09T09:46:30.489187156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-254,Uid:b209577a7f1bcb3f74c8952110fcc01b,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:30.509684 env[1749]: time="2024-02-09T09:46:30.509625188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-254,Uid:e555a60c7ddf3d4ffd1c15bc9fc128da,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:30.596893 kubelet[2382]: E0209 09:46:30.596762 2382 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-254?timeout=10s\": dial tcp 172.31.20.254:6443: connect: connection refused" interval="800ms" Feb 9 09:46:30.705251 kubelet[2382]: I0209 09:46:30.704607 2382 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-254" Feb 9 09:46:30.705251 kubelet[2382]: E0209 09:46:30.705219 2382 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.254:6443/api/v1/nodes\": dial tcp 172.31.20.254:6443: connect: connection refused" node="ip-172-31-20-254" Feb 9 09:46:31.005475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628541102.mount: Deactivated successfully. 
Feb 9 09:46:31.010999 kubelet[2382]: W0209 09:46:31.010876 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.20.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:31.010999 kubelet[2382]: E0209 09:46:31.010946 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:31.017833 env[1749]: time="2024-02-09T09:46:31.017765257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.019854 env[1749]: time="2024-02-09T09:46:31.019797092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.024669 env[1749]: time="2024-02-09T09:46:31.024603749Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.027118 env[1749]: time="2024-02-09T09:46:31.027053384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.032608 env[1749]: time="2024-02-09T09:46:31.032552222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.036206 env[1749]: time="2024-02-09T09:46:31.036151592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.039647 env[1749]: time="2024-02-09T09:46:31.039548921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.042261 env[1749]: time="2024-02-09T09:46:31.042161552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.047707 env[1749]: time="2024-02-09T09:46:31.047648676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.050664 env[1749]: time="2024-02-09T09:46:31.050604331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.051941 kubelet[2382]: W0209 09:46:31.051793 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.20.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-254&limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 
09:46:31.051941 kubelet[2382]: E0209 09:46:31.051887 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-254&limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:31.070717 env[1749]: time="2024-02-09T09:46:31.070658735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.076013 env[1749]: time="2024-02-09T09:46:31.075955281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:31.081128 env[1749]: time="2024-02-09T09:46:31.080982595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:31.081307 env[1749]: time="2024-02-09T09:46:31.081116305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:31.081307 env[1749]: time="2024-02-09T09:46:31.081144199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:31.082034 env[1749]: time="2024-02-09T09:46:31.081606361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4763ef746f4c7bedfd17663bf0c3eef473a889d36ab448a25cea229f7e0d28 pid=2421 runtime=io.containerd.runc.v2 Feb 9 09:46:31.125788 systemd[1]: Started cri-containerd-da4763ef746f4c7bedfd17663bf0c3eef473a889d36ab448a25cea229f7e0d28.scope. Feb 9 09:46:31.164384 env[1749]: time="2024-02-09T09:46:31.164223190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:31.164384 env[1749]: time="2024-02-09T09:46:31.164304088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:31.164751 env[1749]: time="2024-02-09T09:46:31.164331826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:31.165240 env[1749]: time="2024-02-09T09:46:31.165133760Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17e31fd1b8f031027a508c8b45d79c9908a1e6721b36b278bcae6a49ac0534bb pid=2453 runtime=io.containerd.runc.v2 Feb 9 09:46:31.165730 env[1749]: time="2024-02-09T09:46:31.165603412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:31.165730 env[1749]: time="2024-02-09T09:46:31.165691704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:31.166061 env[1749]: time="2024-02-09T09:46:31.165968845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:31.166532 env[1749]: time="2024-02-09T09:46:31.166435005Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/592a7d597836b8462fcec938795671c92f0750ebf52ee75ec1096d4f9354bb84 pid=2466 runtime=io.containerd.runc.v2 Feb 9 09:46:31.206737 systemd[1]: Started cri-containerd-17e31fd1b8f031027a508c8b45d79c9908a1e6721b36b278bcae6a49ac0534bb.scope. Feb 9 09:46:31.224804 systemd[1]: Started cri-containerd-592a7d597836b8462fcec938795671c92f0750ebf52ee75ec1096d4f9354bb84.scope. Feb 9 09:46:31.258642 env[1749]: time="2024-02-09T09:46:31.258455498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-254,Uid:e555a60c7ddf3d4ffd1c15bc9fc128da,Namespace:kube-system,Attempt:0,} returns sandbox id \"da4763ef746f4c7bedfd17663bf0c3eef473a889d36ab448a25cea229f7e0d28\"" Feb 9 09:46:31.273721 env[1749]: time="2024-02-09T09:46:31.272941676Z" level=info msg="CreateContainer within sandbox \"da4763ef746f4c7bedfd17663bf0c3eef473a889d36ab448a25cea229f7e0d28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:46:31.312028 env[1749]: time="2024-02-09T09:46:31.311939951Z" level=info msg="CreateContainer within sandbox \"da4763ef746f4c7bedfd17663bf0c3eef473a889d36ab448a25cea229f7e0d28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c\"" Feb 9 09:46:31.313017 env[1749]: time="2024-02-09T09:46:31.312946350Z" level=info msg="StartContainer for \"477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c\"" Feb 9 09:46:31.350789 kubelet[2382]: W0209 09:46:31.347928 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.20.254:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:31.350789 kubelet[2382]: E0209 09:46:31.348035 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.254:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:31.353324 env[1749]: time="2024-02-09T09:46:31.353263896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-254,Uid:5fdab915e7f3f5411ab8c4b29ba5a471,Namespace:kube-system,Attempt:0,} returns sandbox id \"592a7d597836b8462fcec938795671c92f0750ebf52ee75ec1096d4f9354bb84\"" Feb 9 09:46:31.359685 env[1749]: time="2024-02-09T09:46:31.359623866Z" level=info msg="CreateContainer within sandbox \"592a7d597836b8462fcec938795671c92f0750ebf52ee75ec1096d4f9354bb84\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:46:31.377861 kubelet[2382]: W0209 09:46:31.377745 2382 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.20.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 09:46:31.377861 kubelet[2382]: E0209 09:46:31.377859 2382 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.254:6443: connect: connection refused Feb 9 
09:46:31.384577 systemd[1]: Started cri-containerd-477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c.scope. Feb 9 09:46:31.397733 kubelet[2382]: E0209 09:46:31.397681 2382 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-254?timeout=10s\": dial tcp 172.31.20.254:6443: connect: connection refused" interval="1.6s" Feb 9 09:46:31.403651 env[1749]: time="2024-02-09T09:46:31.403566156Z" level=info msg="CreateContainer within sandbox \"592a7d597836b8462fcec938795671c92f0750ebf52ee75ec1096d4f9354bb84\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a288cda1b88ee505b167ef5d530447be6f58189b81b251c810fce85f6675dc66\"" Feb 9 09:46:31.404982 env[1749]: time="2024-02-09T09:46:31.404912507Z" level=info msg="StartContainer for \"a288cda1b88ee505b167ef5d530447be6f58189b81b251c810fce85f6675dc66\"" Feb 9 09:46:31.429961 env[1749]: time="2024-02-09T09:46:31.429898012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-254,Uid:b209577a7f1bcb3f74c8952110fcc01b,Namespace:kube-system,Attempt:0,} returns sandbox id \"17e31fd1b8f031027a508c8b45d79c9908a1e6721b36b278bcae6a49ac0534bb\"" Feb 9 09:46:31.451515 env[1749]: time="2024-02-09T09:46:31.447611225Z" level=info msg="CreateContainer within sandbox \"17e31fd1b8f031027a508c8b45d79c9908a1e6721b36b278bcae6a49ac0534bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:46:31.478586 systemd[1]: Started cri-containerd-a288cda1b88ee505b167ef5d530447be6f58189b81b251c810fce85f6675dc66.scope. Feb 9 09:46:31.488430 env[1749]: time="2024-02-09T09:46:31.487235931Z" level=info msg="CreateContainer within sandbox \"17e31fd1b8f031027a508c8b45d79c9908a1e6721b36b278bcae6a49ac0534bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0\"" Feb 9 09:46:31.494754 env[1749]: time="2024-02-09T09:46:31.494656827Z" level=info msg="StartContainer for \"3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0\"" Feb 9 09:46:31.508111 kubelet[2382]: I0209 09:46:31.508055 2382 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-254" Feb 9 09:46:31.509018 kubelet[2382]: E0209 09:46:31.508870 2382 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.20.254:6443/api/v1/nodes\": dial tcp 172.31.20.254:6443: connect: connection refused" node="ip-172-31-20-254" Feb 9 09:46:31.561451 env[1749]: time="2024-02-09T09:46:31.558959962Z" level=info msg="StartContainer for \"477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c\" returns successfully" Feb 9 09:46:31.579923 systemd[1]: Started cri-containerd-3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0.scope. 
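The repeated "connection refused" failures in these entries come from the kubelet's client-go reflectors: each is an ordinary HTTPS list/watch request against https://172.31.20.254:6443, and none can succeed until the kube-apiserver static pod being started in these same lines is actually serving. A minimal sketch of the node list the reflector keeps retrying; the kubeconfig path and client wiring are illustrative assumptions, not taken from this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the kubelet authenticates the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same query as the failing request in the log:
	// /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-254&limit=500
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ip-172-31-20-254",
		Limit:         500,
	})
	if err != nil {
		// While the apiserver pod is still starting, this fails exactly
		// like the reflector entries above: connect: connection refused.
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("nodes returned:", len(nodes.Items))
}

The reflector simply backs off and reissues this call, which is why the same error repeats in the log until the apiserver container comes up.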
Feb 9 09:46:31.598599 env[1749]: time="2024-02-09T09:46:31.598507466Z" level=info msg="StartContainer for \"a288cda1b88ee505b167ef5d530447be6f58189b81b251c810fce85f6675dc66\" returns successfully" Feb 9 09:46:31.726262 env[1749]: time="2024-02-09T09:46:31.726195903Z" level=info msg="StartContainer for \"3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0\" returns successfully" Feb 9 09:46:33.112384 kubelet[2382]: I0209 09:46:33.112322 2382 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-254" Feb 9 09:46:36.165423 kubelet[2382]: E0209 09:46:36.165306 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-254\" not found" node="ip-172-31-20-254" Feb 9 09:46:36.200751 kubelet[2382]: I0209 09:46:36.200681 2382 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-20-254" Feb 9 09:46:36.981580 kubelet[2382]: I0209 09:46:36.981517 2382 apiserver.go:52] "Watching apiserver" Feb 9 09:46:36.991832 kubelet[2382]: I0209 09:46:36.991759 2382 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 09:46:37.053297 kubelet[2382]: I0209 09:46:37.053242 2382 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:46:39.053575 systemd[1]: Reloading. Feb 9 09:46:39.283722 /usr/lib/systemd/system-generators/torcx-generator[2671]: time="2024-02-09T09:46:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:46:39.283812 /usr/lib/systemd/system-generators/torcx-generator[2671]: time="2024-02-09T09:46:39Z" level=info msg="torcx already run" Feb 9 09:46:39.514450 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:46:39.514491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:46:39.568123 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:46:39.842044 systemd[1]: Stopping kubelet.service... Feb 9 09:46:39.854905 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:46:39.855730 systemd[1]: Stopped kubelet.service. Feb 9 09:46:39.855816 systemd[1]: kubelet.service: Consumed 1.701s CPU time. Feb 9 09:46:39.861280 systemd[1]: Started kubelet.service. Feb 9 09:46:40.001111 sudo[2737]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:46:40.001689 sudo[2737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:46:40.017326 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:40.018010 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
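The lifecycle above (RunPodSandbox returning a sandbox id, CreateContainer inside that sandbox, then StartContainer) is the standard CRI sequence the kubelet drives for each static pod. A sketch of the same three calls issued directly against containerd; the socket path is the usual default and the request payloads are elided, so this shows call order only, not a complete pod spec:

package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// 1. RunPodSandbox: returns the long sandbox id seen in the entries.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* pod metadata elided */ },
	})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within that sandbox: returns a container id.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{ /* image, command elided */ },
		SandboxConfig: &runtimeapi.PodSandboxConfig{ /* elided */ },
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer: the "StartContainer ... returns successfully" step.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: ctr.ContainerId,
	}); err != nil {
		panic(err)
	}
}

Each sandbox and container also gets its own systemd scope (the cri-containerd-<id>.scope units started around these entries), which is how the runc shims end up under systemd's cgroup supervision.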
Feb 9 09:46:40.018146 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:46:40.018506 kubelet[2727]: I0209 09:46:40.018418 2727 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:46:40.029967 kubelet[2727]: I0209 09:46:40.029923 2727 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 09:46:40.030167 kubelet[2727]: I0209 09:46:40.030145 2727 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:46:40.030697 kubelet[2727]: I0209 09:46:40.030669 2727 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 09:46:40.033751 kubelet[2727]: I0209 09:46:40.033712 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:46:40.038764 kubelet[2727]: W0209 09:46:40.038726 2727 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:46:40.039278 kubelet[2727]: I0209 09:46:40.039111 2727 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:46:40.041179 kubelet[2727]: I0209 09:46:40.041115 2727 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:46:40.041956 kubelet[2727]: I0209 09:46:40.041926 2727 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:46:40.042255 kubelet[2727]: I0209 09:46:40.042229 2727 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:46:40.042581 kubelet[2727]: I0209 09:46:40.042553 2727 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:46:40.042730 kubelet[2727]: I0209 09:46:40.042709 2727 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 09:46:40.042907 kubelet[2727]: I0209 09:46:40.042885 2727 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:40.050050 kubelet[2727]: I0209 09:46:40.050012 2727 kubelet.go:405] "Attempting 
to sync node with API server" Feb 9 09:46:40.050285 kubelet[2727]: I0209 09:46:40.050263 2727 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:46:40.050467 kubelet[2727]: I0209 09:46:40.050443 2727 kubelet.go:309] "Adding apiserver pod source" Feb 9 09:46:40.050612 kubelet[2727]: I0209 09:46:40.050590 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:46:40.073725 kubelet[2727]: I0209 09:46:40.073682 2727 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:46:40.075137 kubelet[2727]: I0209 09:46:40.075097 2727 server.go:1168] "Started kubelet" Feb 9 09:46:40.079126 kubelet[2727]: I0209 09:46:40.079075 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:46:40.088486 kubelet[2727]: I0209 09:46:40.079529 2727 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:46:40.093220 kubelet[2727]: I0209 09:46:40.093066 2727 server.go:461] "Adding debug handlers to kubelet server" Feb 9 09:46:40.095920 kubelet[2727]: I0209 09:46:40.095850 2727 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 09:46:40.144551 kubelet[2727]: I0209 09:46:40.096199 2727 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 09:46:40.144803 kubelet[2727]: I0209 09:46:40.079612 2727 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:46:40.145244 kubelet[2727]: E0209 09:46:40.108337 2727 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:46:40.145422 kubelet[2727]: E0209 09:46:40.145399 2727 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:46:40.253296 kubelet[2727]: I0209 09:46:40.253240 2727 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-20-254" Feb 9 09:46:40.321269 kubelet[2727]: I0209 09:46:40.319276 2727 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:46:40.323967 kubelet[2727]: I0209 09:46:40.323746 2727 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-20-254" Feb 9 09:46:40.324499 kubelet[2727]: I0209 09:46:40.324468 2727 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-20-254" Feb 9 09:46:40.328818 kubelet[2727]: I0209 09:46:40.328505 2727 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:46:40.328818 kubelet[2727]: I0209 09:46:40.328565 2727 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 09:46:40.328818 kubelet[2727]: I0209 09:46:40.328615 2727 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 09:46:40.328818 kubelet[2727]: E0209 09:46:40.328708 2727 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:46:40.428903 kubelet[2727]: E0209 09:46:40.428843 2727 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 09:46:40.551920 kubelet[2727]: I0209 09:46:40.551874 2727 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:46:40.552456 kubelet[2727]: I0209 09:46:40.552423 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:46:40.552753 kubelet[2727]: I0209 09:46:40.552723 2727 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:46:40.554848 kubelet[2727]: I0209 09:46:40.554806 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:46:40.555089 kubelet[2727]: I0209 09:46:40.555064 2727 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:46:40.555211 kubelet[2727]: I0209 09:46:40.555191 2727 policy_none.go:49] "None policy: Start" Feb 9 09:46:40.558792 kubelet[2727]: I0209 09:46:40.558746 2727 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:46:40.559176 kubelet[2727]: I0209 09:46:40.559144 2727 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:46:40.560994 kubelet[2727]: I0209 09:46:40.560947 2727 state_mem.go:75] "Updated machine memory state" Feb 9 09:46:40.577505 kubelet[2727]: I0209 09:46:40.577464 2727 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:46:40.587696 kubelet[2727]: I0209 09:46:40.587658 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:46:40.629037 kubelet[2727]: I0209 09:46:40.628987 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:40.629385 kubelet[2727]: I0209 09:46:40.629311 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:40.631416 kubelet[2727]: I0209 09:46:40.629755 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:40.649744 kubelet[2727]: E0209 09:46:40.649692 2727 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-254\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-254" Feb 9 09:46:40.650300 kubelet[2727]: I0209 09:46:40.650082 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:40.650613 kubelet[2727]: I0209 09:46:40.650579 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:40.650843 kubelet[2727]: I0209 09:46:40.650815 2727 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:40.651036 kubelet[2727]: I0209 09:46:40.651008 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fdab915e7f3f5411ab8c4b29ba5a471-ca-certs\") pod \"kube-apiserver-ip-172-31-20-254\" (UID: \"5fdab915e7f3f5411ab8c4b29ba5a471\") " pod="kube-system/kube-apiserver-ip-172-31-20-254" Feb 9 09:46:40.651216 kubelet[2727]: I0209 09:46:40.651191 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fdab915e7f3f5411ab8c4b29ba5a471-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-254\" (UID: \"5fdab915e7f3f5411ab8c4b29ba5a471\") " pod="kube-system/kube-apiserver-ip-172-31-20-254" Feb 9 09:46:40.651517 kubelet[2727]: I0209 09:46:40.651475 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fdab915e7f3f5411ab8c4b29ba5a471-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-254\" (UID: \"5fdab915e7f3f5411ab8c4b29ba5a471\") " pod="kube-system/kube-apiserver-ip-172-31-20-254" Feb 9 09:46:40.651774 kubelet[2727]: I0209 09:46:40.651744 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:40.651980 kubelet[2727]: I0209 09:46:40.651936 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e555a60c7ddf3d4ffd1c15bc9fc128da-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-254\" (UID: \"e555a60c7ddf3d4ffd1c15bc9fc128da\") " pod="kube-system/kube-scheduler-ip-172-31-20-254" Feb 9 09:46:40.652185 kubelet[2727]: I0209 09:46:40.652157 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b209577a7f1bcb3f74c8952110fcc01b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-254\" (UID: \"b209577a7f1bcb3f74c8952110fcc01b\") " pod="kube-system/kube-controller-manager-ip-172-31-20-254" Feb 9 09:46:41.057619 kubelet[2727]: I0209 09:46:41.057567 2727 apiserver.go:52] "Watching apiserver" Feb 9 09:46:41.145084 kubelet[2727]: I0209 09:46:41.145040 2727 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 09:46:41.156263 kubelet[2727]: I0209 09:46:41.156211 2727 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:46:41.222942 sudo[2737]: pam_unix(sudo:session): session closed for user root Feb 9 09:46:41.447471 update_engine[1735]: I0209 09:46:41.447405 1735 update_attempter.cc:509] Updating boot flags... 
Feb 9 09:46:41.501464 kubelet[2727]: I0209 09:46:41.501413 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-254" podStartSLOduration=1.501288674 podCreationTimestamp="2024-02-09 09:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:41.485603377 +0000 UTC m=+1.612590889" watchObservedRunningTime="2024-02-09 09:46:41.501288674 +0000 UTC m=+1.628276198" Feb 9 09:46:41.527166 kubelet[2727]: I0209 09:46:41.527051 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-254" podStartSLOduration=3.526970372 podCreationTimestamp="2024-02-09 09:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:41.526824661 +0000 UTC m=+1.653812173" watchObservedRunningTime="2024-02-09 09:46:41.526970372 +0000 UTC m=+1.653957872" Feb 9 09:46:41.527337 kubelet[2727]: I0209 09:46:41.527244 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-254" podStartSLOduration=1.5272119750000002 podCreationTimestamp="2024-02-09 09:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:41.502797018 +0000 UTC m=+1.629784518" watchObservedRunningTime="2024-02-09 09:46:41.527211975 +0000 UTC m=+1.654199487" Feb 9 09:46:42.648885 amazon-ssm-agent[1789]: 2024-02-09 09:46:42 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 09:46:43.281508 sudo[1969]: pam_unix(sudo:session): session closed for user root Feb 9 09:46:43.305660 sshd[1966]: pam_unix(sshd:session): session closed for user core Feb 9 09:46:43.311163 systemd-logind[1732]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:46:43.314276 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:46:43.314654 systemd[1]: session-5.scope: Consumed 8.540s CPU time. Feb 9 09:46:43.316039 systemd[1]: sshd@4-172.31.20.254:22-139.178.89.65:35646.service: Deactivated successfully. Feb 9 09:46:43.318896 systemd-logind[1732]: Removed session 5. Feb 9 09:46:53.775579 kubelet[2727]: I0209 09:46:53.775543 2727 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:46:53.777195 env[1749]: time="2024-02-09T09:46:53.777105622Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:46:53.777848 kubelet[2727]: I0209 09:46:53.777636 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:46:53.865943 kubelet[2727]: I0209 09:46:53.865884 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:53.877228 systemd[1]: Created slice kubepods-besteffort-podae8b1f73_c046_4383_8ac4_2e8d7a8f3861.slice. 
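"Updating runtime config through cri with podcidr" above is the kubelet propagating the node's newly assigned pod CIDR (192.168.0.0/24) down to the container runtime; the adjacent containerd entry then notes it will wait for another component (here, the Cilium deployment that follows) to drop the actual CNI config. A sketch of the equivalent CRI call, assuming the default containerd socket path:

package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The same CIDR the kubelet reports in the entry above.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		panic(err)
	}
}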
Feb 9 09:46:53.885928 kubelet[2727]: W0209 09:46:53.885871 2727 reflector.go:533] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-20-254" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-254' and this object Feb 9 09:46:53.885928 kubelet[2727]: E0209 09:46:53.885930 2727 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-20-254" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-254' and this object Feb 9 09:46:53.886441 kubelet[2727]: W0209 09:46:53.886402 2727 reflector.go:533] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-254" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-254' and this object Feb 9 09:46:53.886580 kubelet[2727]: E0209 09:46:53.886444 2727 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-254" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-254' and this object Feb 9 09:46:53.896140 kubelet[2727]: I0209 09:46:53.896054 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:53.911582 systemd[1]: Created slice kubepods-burstable-pod03aac178_ecdd_49a7_86ff_52f2ae1c5710.slice. 
Feb 9 09:46:53.932624 kubelet[2727]: I0209 09:46:53.932568 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-bpf-maps\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.932844 kubelet[2727]: I0209 09:46:53.932666 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-cgroup\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.932844 kubelet[2727]: I0209 09:46:53.932741 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-xtables-lock\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.932844 kubelet[2727]: I0209 09:46:53.932795 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cni-path\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933058 kubelet[2727]: I0209 09:46:53.932870 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-etc-cni-netd\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933058 kubelet[2727]: I0209 09:46:53.932939 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-lib-modules\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933058 kubelet[2727]: I0209 09:46:53.933038 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-net\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933336 kubelet[2727]: I0209 09:46:53.933113 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03aac178-ecdd-49a7-86ff-52f2ae1c5710-clustermesh-secrets\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933336 kubelet[2727]: I0209 09:46:53.933182 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-run\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933336 kubelet[2727]: I0209 09:46:53.933229 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hubble-tls\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933336 kubelet[2727]: I0209 09:46:53.933300 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae8b1f73-c046-4383-8ac4-2e8d7a8f3861-kube-proxy\") pod \"kube-proxy-b59kh\" (UID: \"ae8b1f73-c046-4383-8ac4-2e8d7a8f3861\") " pod="kube-system/kube-proxy-b59kh" Feb 9 09:46:53.933630 kubelet[2727]: I0209 09:46:53.933407 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hostproc\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933630 kubelet[2727]: I0209 09:46:53.933478 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-config-path\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933630 kubelet[2727]: I0209 09:46:53.933551 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6cx6\" (UniqueName: \"kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-kube-api-access-r6cx6\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:53.933630 kubelet[2727]: I0209 09:46:53.933601 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae8b1f73-c046-4383-8ac4-2e8d7a8f3861-lib-modules\") pod \"kube-proxy-b59kh\" (UID: \"ae8b1f73-c046-4383-8ac4-2e8d7a8f3861\") " pod="kube-system/kube-proxy-b59kh" Feb 9 09:46:53.933866 kubelet[2727]: I0209 09:46:53.933671 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl9n4\" (UniqueName: \"kubernetes.io/projected/ae8b1f73-c046-4383-8ac4-2e8d7a8f3861-kube-api-access-xl9n4\") pod \"kube-proxy-b59kh\" (UID: \"ae8b1f73-c046-4383-8ac4-2e8d7a8f3861\") " pod="kube-system/kube-proxy-b59kh" Feb 9 09:46:53.933866 kubelet[2727]: I0209 09:46:53.933745 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae8b1f73-c046-4383-8ac4-2e8d7a8f3861-xtables-lock\") pod \"kube-proxy-b59kh\" (UID: \"ae8b1f73-c046-4383-8ac4-2e8d7a8f3861\") " pod="kube-system/kube-proxy-b59kh" Feb 9 09:46:53.933866 kubelet[2727]: I0209 09:46:53.933821 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-kernel\") pod \"cilium-flrqq\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " pod="kube-system/cilium-flrqq" Feb 9 09:46:54.109817 kubelet[2727]: I0209 09:46:54.109654 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:46:54.125512 systemd[1]: Created slice kubepods-besteffort-podb282a02d_267b_49f6_844c_3a7c201ddc94.slice. 
Feb 9 09:46:54.135539 kubelet[2727]: I0209 09:46:54.135488 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvmjg\" (UniqueName: \"kubernetes.io/projected/b282a02d-267b-49f6-844c-3a7c201ddc94-kube-api-access-tvmjg\") pod \"cilium-operator-574c4bb98d-mwjkd\" (UID: \"b282a02d-267b-49f6-844c-3a7c201ddc94\") " pod="kube-system/cilium-operator-574c4bb98d-mwjkd" Feb 9 09:46:54.135746 kubelet[2727]: I0209 09:46:54.135601 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b282a02d-267b-49f6-844c-3a7c201ddc94-cilium-config-path\") pod \"cilium-operator-574c4bb98d-mwjkd\" (UID: \"b282a02d-267b-49f6-844c-3a7c201ddc94\") " pod="kube-system/cilium-operator-574c4bb98d-mwjkd" Feb 9 09:46:55.032018 env[1749]: time="2024-02-09T09:46:55.031321121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mwjkd,Uid:b282a02d-267b-49f6-844c-3a7c201ddc94,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:55.035612 kubelet[2727]: E0209 09:46:55.035441 2727 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:46:55.036140 kubelet[2727]: E0209 09:46:55.035806 2727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae8b1f73-c046-4383-8ac4-2e8d7a8f3861-kube-proxy podName:ae8b1f73-c046-4383-8ac4-2e8d7a8f3861 nodeName:}" failed. No retries permitted until 2024-02-09 09:46:55.535766533 +0000 UTC m=+15.662754021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ae8b1f73-c046-4383-8ac4-2e8d7a8f3861-kube-proxy") pod "kube-proxy-b59kh" (UID: "ae8b1f73-c046-4383-8ac4-2e8d7a8f3861") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:46:55.080627 env[1749]: time="2024-02-09T09:46:55.080480167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:55.080946 env[1749]: time="2024-02-09T09:46:55.080867686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:55.081149 env[1749]: time="2024-02-09T09:46:55.081084985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:55.081740 env[1749]: time="2024-02-09T09:46:55.081649193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd pid=2994 runtime=io.containerd.runc.v2 Feb 9 09:46:55.119461 systemd[1]: run-containerd-runc-k8s.io-bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd-runc.PfWvVS.mount: Deactivated successfully. Feb 9 09:46:55.120890 env[1749]: time="2024-02-09T09:46:55.120743829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flrqq,Uid:03aac178-ecdd-49a7-86ff-52f2ae1c5710,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:55.127692 systemd[1]: Started cri-containerd-bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd.scope. Feb 9 09:46:55.179279 env[1749]: time="2024-02-09T09:46:55.178910958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:55.179279 env[1749]: time="2024-02-09T09:46:55.178994796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:55.179279 env[1749]: time="2024-02-09T09:46:55.179021318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:55.179946 env[1749]: time="2024-02-09T09:46:55.179849208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7 pid=3027 runtime=io.containerd.runc.v2 Feb 9 09:46:55.206275 systemd[1]: Started cri-containerd-dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7.scope. Feb 9 09:46:55.245164 env[1749]: time="2024-02-09T09:46:55.245037465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mwjkd,Uid:b282a02d-267b-49f6-844c-3a7c201ddc94,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\"" Feb 9 09:46:55.255239 env[1749]: time="2024-02-09T09:46:55.255173099Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:46:55.287234 env[1749]: time="2024-02-09T09:46:55.285789837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flrqq,Uid:03aac178-ecdd-49a7-86ff-52f2ae1c5710,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\"" Feb 9 09:46:55.701975 env[1749]: time="2024-02-09T09:46:55.701880217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b59kh,Uid:ae8b1f73-c046-4383-8ac4-2e8d7a8f3861,Namespace:kube-system,Attempt:0,}" Feb 9 09:46:55.725892 env[1749]: time="2024-02-09T09:46:55.725748774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:46:55.725892 env[1749]: time="2024-02-09T09:46:55.725828064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:46:55.725892 env[1749]: time="2024-02-09T09:46:55.725854658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:46:55.726621 env[1749]: time="2024-02-09T09:46:55.726527365Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bbed67242ae78da5debf72f22a30f62bbfea703c400a30807e8c5796faaf907 pid=3076 runtime=io.containerd.runc.v2 Feb 9 09:46:55.750085 systemd[1]: Started cri-containerd-6bbed67242ae78da5debf72f22a30f62bbfea703c400a30807e8c5796faaf907.scope. 
Feb 9 09:46:55.801505 env[1749]: time="2024-02-09T09:46:55.801433579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b59kh,Uid:ae8b1f73-c046-4383-8ac4-2e8d7a8f3861,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bbed67242ae78da5debf72f22a30f62bbfea703c400a30807e8c5796faaf907\"" Feb 9 09:46:55.806638 env[1749]: time="2024-02-09T09:46:55.806488057Z" level=info msg="CreateContainer within sandbox \"6bbed67242ae78da5debf72f22a30f62bbfea703c400a30807e8c5796faaf907\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:46:55.837661 env[1749]: time="2024-02-09T09:46:55.837587073Z" level=info msg="CreateContainer within sandbox \"6bbed67242ae78da5debf72f22a30f62bbfea703c400a30807e8c5796faaf907\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29aa9931b1431898b0dc01bf700da234f2ee33f610bad425124808bfca4e344b\"" Feb 9 09:46:55.838969 env[1749]: time="2024-02-09T09:46:55.838885924Z" level=info msg="StartContainer for \"29aa9931b1431898b0dc01bf700da234f2ee33f610bad425124808bfca4e344b\"" Feb 9 09:46:55.875214 systemd[1]: Started cri-containerd-29aa9931b1431898b0dc01bf700da234f2ee33f610bad425124808bfca4e344b.scope. Feb 9 09:46:55.948396 env[1749]: time="2024-02-09T09:46:55.948308849Z" level=info msg="StartContainer for \"29aa9931b1431898b0dc01bf700da234f2ee33f610bad425124808bfca4e344b\" returns successfully" Feb 9 09:46:56.948665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440979371.mount: Deactivated successfully. Feb 9 09:46:57.854986 env[1749]: time="2024-02-09T09:46:57.854917415Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:57.858360 env[1749]: time="2024-02-09T09:46:57.858254292Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:57.863512 env[1749]: time="2024-02-09T09:46:57.863456631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:46:57.866105 env[1749]: time="2024-02-09T09:46:57.866040856Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:46:57.871015 env[1749]: time="2024-02-09T09:46:57.870950495Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:46:57.874326 env[1749]: time="2024-02-09T09:46:57.874197787Z" level=info msg="CreateContainer within sandbox \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:46:57.902563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98767775.mount: Deactivated successfully. Feb 9 09:46:57.915055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695080734.mount: Deactivated successfully. 
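The pull above resolves a tag@digest reference to a content-addressed image ID ("returns image reference sha256:5935..."), which is what the surrounding ImageCreate/ImageUpdate events carry. The same pull issued directly over CRI, with the socket path assumed as in the earlier sketches:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	resp, err := runtimeapi.NewImageServiceClient(conn).PullImage(ctx,
		&runtimeapi.PullImageRequest{
			Image: &runtimeapi.ImageSpec{
				// Tag pinned to a digest, exactly as in the log entry.
				Image: "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e",
			},
		})
	if err != nil {
		panic(err)
	}
	// The returned ref is the local sha256 image ID, not the pull spec.
	fmt.Println("pulled:", resp.ImageRef)
}

Pinning the tag to a digest is why the later ImageUpdate events can refer to the image purely by sha256 reference.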
Feb 9 09:46:57.919874 env[1749]: time="2024-02-09T09:46:57.919814018Z" level=info msg="CreateContainer within sandbox \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\"" Feb 9 09:46:57.921122 env[1749]: time="2024-02-09T09:46:57.921067884Z" level=info msg="StartContainer for \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\"" Feb 9 09:46:57.962766 systemd[1]: Started cri-containerd-723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785.scope. Feb 9 09:46:58.022657 env[1749]: time="2024-02-09T09:46:58.022594702Z" level=info msg="StartContainer for \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\" returns successfully" Feb 9 09:46:58.530956 kubelet[2727]: I0209 09:46:58.530906 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b59kh" podStartSLOduration=5.530815181 podCreationTimestamp="2024-02-09 09:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:46:56.505731799 +0000 UTC m=+16.632719323" watchObservedRunningTime="2024-02-09 09:46:58.530815181 +0000 UTC m=+18.657802705" Feb 9 09:47:05.161321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487288161.mount: Deactivated successfully. Feb 9 09:47:09.184556 env[1749]: time="2024-02-09T09:47:09.184474048Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:09.187491 env[1749]: time="2024-02-09T09:47:09.187432207Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:09.190785 env[1749]: time="2024-02-09T09:47:09.190723946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:09.192127 env[1749]: time="2024-02-09T09:47:09.192063879Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:47:09.197522 env[1749]: time="2024-02-09T09:47:09.197463850Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:47:09.213868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917790459.mount: Deactivated successfully. 
Feb 9 09:47:09.224283 env[1749]: time="2024-02-09T09:47:09.224205922Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\"" Feb 9 09:47:09.228470 env[1749]: time="2024-02-09T09:47:09.225372064Z" level=info msg="StartContainer for \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\"" Feb 9 09:47:09.273792 systemd[1]: Started cri-containerd-e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5.scope. Feb 9 09:47:09.346265 env[1749]: time="2024-02-09T09:47:09.346185529Z" level=info msg="StartContainer for \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\" returns successfully" Feb 9 09:47:09.363132 systemd[1]: cri-containerd-e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5.scope: Deactivated successfully. Feb 9 09:47:09.555284 kubelet[2727]: I0209 09:47:09.555125 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-mwjkd" podStartSLOduration=12.936320379 podCreationTimestamp="2024-02-09 09:46:54 +0000 UTC" firstStartedPulling="2024-02-09 09:46:55.248532566 +0000 UTC m=+15.375520066" lastFinishedPulling="2024-02-09 09:46:57.867253727 +0000 UTC m=+17.994241239" observedRunningTime="2024-02-09 09:46:58.538443521 +0000 UTC m=+18.665431045" watchObservedRunningTime="2024-02-09 09:47:09.555041552 +0000 UTC m=+29.682029076" Feb 9 09:47:10.209973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5-rootfs.mount: Deactivated successfully. Feb 9 09:47:10.346319 env[1749]: time="2024-02-09T09:47:10.346220868Z" level=info msg="shim disconnected" id=e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5 Feb 9 09:47:10.346932 env[1749]: time="2024-02-09T09:47:10.346321529Z" level=warning msg="cleaning up after shim disconnected" id=e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5 namespace=k8s.io Feb 9 09:47:10.346932 env[1749]: time="2024-02-09T09:47:10.346380787Z" level=info msg="cleaning up dead shim" Feb 9 09:47:10.360282 env[1749]: time="2024-02-09T09:47:10.360203810Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3354 runtime=io.containerd.runc.v2\n" Feb 9 09:47:10.534877 env[1749]: time="2024-02-09T09:47:10.533825872Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:47:10.575258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062640129.mount: Deactivated successfully. Feb 9 09:47:10.576452 env[1749]: time="2024-02-09T09:47:10.576395616Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\"" Feb 9 09:47:10.577577 env[1749]: time="2024-02-09T09:47:10.577501394Z" level=info msg="StartContainer for \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\"" Feb 9 09:47:10.619367 systemd[1]: Started cri-containerd-a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261.scope. 
Feb 9 09:47:10.681237 env[1749]: time="2024-02-09T09:47:10.681175735Z" level=info msg="StartContainer for \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\" returns successfully" Feb 9 09:47:10.711850 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:47:10.712330 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:47:10.713722 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:47:10.719737 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:47:10.732208 systemd[1]: cri-containerd-a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261.scope: Deactivated successfully. Feb 9 09:47:10.745715 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:47:10.802956 env[1749]: time="2024-02-09T09:47:10.801799743Z" level=info msg="shim disconnected" id=a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261 Feb 9 09:47:10.803274 env[1749]: time="2024-02-09T09:47:10.803232199Z" level=warning msg="cleaning up after shim disconnected" id=a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261 namespace=k8s.io Feb 9 09:47:10.803418 env[1749]: time="2024-02-09T09:47:10.803390066Z" level=info msg="cleaning up dead shim" Feb 9 09:47:10.816766 env[1749]: time="2024-02-09T09:47:10.816710374Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3422 runtime=io.containerd.runc.v2\n" Feb 9 09:47:11.209507 systemd[1]: run-containerd-runc-k8s.io-a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261-runc.3XcV1g.mount: Deactivated successfully. Feb 9 09:47:11.209694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261-rootfs.mount: Deactivated successfully. Feb 9 09:47:11.536950 env[1749]: time="2024-02-09T09:47:11.536767816Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:47:11.572644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262136179.mount: Deactivated successfully. Feb 9 09:47:11.584110 env[1749]: time="2024-02-09T09:47:11.584036912Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\"" Feb 9 09:47:11.584960 env[1749]: time="2024-02-09T09:47:11.584875917Z" level=info msg="StartContainer for \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\"" Feb 9 09:47:11.627920 systemd[1]: Started cri-containerd-216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66.scope. Feb 9 09:47:11.694961 env[1749]: time="2024-02-09T09:47:11.694881527Z" level=info msg="StartContainer for \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\" returns successfully" Feb 9 09:47:11.700630 systemd[1]: cri-containerd-216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66.scope: Deactivated successfully. 
Feb 9 09:47:11.750550 env[1749]: time="2024-02-09T09:47:11.750481736Z" level=info msg="shim disconnected" id=216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66 Feb 9 09:47:11.750835 env[1749]: time="2024-02-09T09:47:11.750551663Z" level=warning msg="cleaning up after shim disconnected" id=216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66 namespace=k8s.io Feb 9 09:47:11.750835 env[1749]: time="2024-02-09T09:47:11.750574320Z" level=info msg="cleaning up dead shim" Feb 9 09:47:11.767024 env[1749]: time="2024-02-09T09:47:11.766928612Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3478 runtime=io.containerd.runc.v2\n" Feb 9 09:47:12.209482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66-rootfs.mount: Deactivated successfully. Feb 9 09:47:12.543936 env[1749]: time="2024-02-09T09:47:12.541203073Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:47:12.576190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777436699.mount: Deactivated successfully. Feb 9 09:47:12.594190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517147125.mount: Deactivated successfully. Feb 9 09:47:12.604149 env[1749]: time="2024-02-09T09:47:12.603970491Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\"" Feb 9 09:47:12.607095 env[1749]: time="2024-02-09T09:47:12.605487596Z" level=info msg="StartContainer for \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\"" Feb 9 09:47:12.634887 systemd[1]: Started cri-containerd-da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b.scope. Feb 9 09:47:12.699044 systemd[1]: cri-containerd-da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b.scope: Deactivated successfully. 
Feb 9 09:47:12.702274 env[1749]: time="2024-02-09T09:47:12.702113036Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03aac178_ecdd_49a7_86ff_52f2ae1c5710.slice/cri-containerd-da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b.scope/memory.events\": no such file or directory" Feb 9 09:47:12.707758 env[1749]: time="2024-02-09T09:47:12.707700499Z" level=info msg="StartContainer for \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\" returns successfully" Feb 9 09:47:12.757398 env[1749]: time="2024-02-09T09:47:12.757302849Z" level=info msg="shim disconnected" id=da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b Feb 9 09:47:12.757767 env[1749]: time="2024-02-09T09:47:12.757731807Z" level=warning msg="cleaning up after shim disconnected" id=da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b namespace=k8s.io Feb 9 09:47:12.757890 env[1749]: time="2024-02-09T09:47:12.757861989Z" level=info msg="cleaning up dead shim" Feb 9 09:47:12.773633 env[1749]: time="2024-02-09T09:47:12.773576302Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3535 runtime=io.containerd.runc.v2\n" Feb 9 09:47:13.553466 env[1749]: time="2024-02-09T09:47:13.553399535Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:47:13.593001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937402904.mount: Deactivated successfully. Feb 9 09:47:13.607630 env[1749]: time="2024-02-09T09:47:13.607524941Z" level=info msg="CreateContainer within sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\"" Feb 9 09:47:13.608828 env[1749]: time="2024-02-09T09:47:13.608775453Z" level=info msg="StartContainer for \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\"" Feb 9 09:47:13.646006 systemd[1]: Started cri-containerd-9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7.scope. Feb 9 09:47:13.718934 env[1749]: time="2024-02-09T09:47:13.718855272Z" level=info msg="StartContainer for \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\" returns successfully" Feb 9 09:47:13.912390 kubelet[2727]: I0209 09:47:13.911399 2727 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:47:13.913387 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:47:13.982807 kubelet[2727]: I0209 09:47:13.982749 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:47:13.995670 systemd[1]: Created slice kubepods-burstable-podc3be1522_5ac8_4172_90e6_63e949a39ca8.slice. Feb 9 09:47:14.002232 kubelet[2727]: I0209 09:47:14.002173 2727 topology_manager.go:212] "Topology Admit Handler" Feb 9 09:47:14.013685 systemd[1]: Created slice kubepods-burstable-pod03bfcd62_4271_4a55_aed4_98061339bd10.slice. 
Feb 9 09:47:14.039509 kubelet[2727]: W0209 09:47:14.039455 2727 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-20-254" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-254' and this object Feb 9 09:47:14.039509 kubelet[2727]: E0209 09:47:14.039513 2727 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-20-254" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-254' and this object Feb 9 09:47:14.082239 kubelet[2727]: I0209 09:47:14.082174 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7fm\" (UniqueName: \"kubernetes.io/projected/c3be1522-5ac8-4172-90e6-63e949a39ca8-kube-api-access-bg7fm\") pod \"coredns-5d78c9869d-prfdn\" (UID: \"c3be1522-5ac8-4172-90e6-63e949a39ca8\") " pod="kube-system/coredns-5d78c9869d-prfdn" Feb 9 09:47:14.082474 kubelet[2727]: I0209 09:47:14.082253 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03bfcd62-4271-4a55-aed4-98061339bd10-config-volume\") pod \"coredns-5d78c9869d-7jbtg\" (UID: \"03bfcd62-4271-4a55-aed4-98061339bd10\") " pod="kube-system/coredns-5d78c9869d-7jbtg" Feb 9 09:47:14.082474 kubelet[2727]: I0209 09:47:14.082301 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3be1522-5ac8-4172-90e6-63e949a39ca8-config-volume\") pod \"coredns-5d78c9869d-prfdn\" (UID: \"c3be1522-5ac8-4172-90e6-63e949a39ca8\") " pod="kube-system/coredns-5d78c9869d-prfdn" Feb 9 09:47:14.082474 kubelet[2727]: I0209 09:47:14.082375 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmc8\" (UniqueName: \"kubernetes.io/projected/03bfcd62-4271-4a55-aed4-98061339bd10-kube-api-access-nmmc8\") pod \"coredns-5d78c9869d-7jbtg\" (UID: \"03bfcd62-4271-4a55-aed4-98061339bd10\") " pod="kube-system/coredns-5d78c9869d-7jbtg" Feb 9 09:47:14.627412 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:47:15.183901 kubelet[2727]: E0209 09:47:15.183840 2727 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:47:15.184639 kubelet[2727]: E0209 09:47:15.183948 2727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3be1522-5ac8-4172-90e6-63e949a39ca8-config-volume podName:c3be1522-5ac8-4172-90e6-63e949a39ca8 nodeName:}" failed. No retries permitted until 2024-02-09 09:47:15.683921236 +0000 UTC m=+35.810908736 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c3be1522-5ac8-4172-90e6-63e949a39ca8-config-volume") pod "coredns-5d78c9869d-prfdn" (UID: "c3be1522-5ac8-4172-90e6-63e949a39ca8") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:47:15.184639 kubelet[2727]: E0209 09:47:15.184258 2727 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:47:15.184639 kubelet[2727]: E0209 09:47:15.184317 2727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03bfcd62-4271-4a55-aed4-98061339bd10-config-volume podName:03bfcd62-4271-4a55-aed4-98061339bd10 nodeName:}" failed. No retries permitted until 2024-02-09 09:47:15.6842981 +0000 UTC m=+35.811285600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03bfcd62-4271-4a55-aed4-98061339bd10-config-volume") pod "coredns-5d78c9869d-7jbtg" (UID: "03bfcd62-4271-4a55-aed4-98061339bd10") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:47:15.805309 env[1749]: time="2024-02-09T09:47:15.805213151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-prfdn,Uid:c3be1522-5ac8-4172-90e6-63e949a39ca8,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:15.821404 env[1749]: time="2024-02-09T09:47:15.821292901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-7jbtg,Uid:03bfcd62-4271-4a55-aed4-98061339bd10,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:16.434706 (udev-worker)[3638]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:16.435197 (udev-worker)[3701]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:16.438645 systemd-networkd[1548]: cilium_host: Link UP Feb 9 09:47:16.448678 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:47:16.448808 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:47:16.448992 systemd-networkd[1548]: cilium_net: Link UP Feb 9 09:47:16.449439 systemd-networkd[1548]: cilium_net: Gained carrier Feb 9 09:47:16.449759 systemd-networkd[1548]: cilium_host: Gained carrier Feb 9 09:47:16.476812 systemd-networkd[1548]: cilium_host: Gained IPv6LL Feb 9 09:47:16.635088 (udev-worker)[3706]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:16.641239 systemd-networkd[1548]: cilium_vxlan: Link UP Feb 9 09:47:16.641253 systemd-networkd[1548]: cilium_vxlan: Gained carrier Feb 9 09:47:17.110451 kernel: NET: Registered PF_ALG protocol family Feb 9 09:47:17.341619 systemd-networkd[1548]: cilium_net: Gained IPv6LL Feb 9 09:47:18.237510 systemd-networkd[1548]: cilium_vxlan: Gained IPv6LL Feb 9 09:47:18.478598 (udev-worker)[3705]: Network interface NamePolicy= disabled on kernel command line. 
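The 500ms in "durationBeforeRetry 500ms" above is the first step of the volume manager's exponential backoff: each failed MountVolume.SetUp doubles the wait before the next attempt, which is why the retry is stamped exactly 500ms after the failure (m=+35.81). A stand-alone sketch of that policy (the doubling is kubelet's behavior; the cap here is illustrative, not the exact constant):

package main

import (
    "errors"
    "fmt"
    "time"
)

// retryWithBackoff keeps retrying op, doubling the delay after every
// failure, mirroring nestedpendingoperations' durationBeforeRetry.
func retryWithBackoff(op func() error) {
    delay := 500 * time.Millisecond // first retry, as in the log
    const maxDelay = 2 * time.Minute // illustrative cap
    for {
        err := op()
        if err == nil {
            return
        }
        fmt.Printf("failed: %v; no retries permitted for %s\n", err, delay)
        time.Sleep(delay)
        delay *= 2
        if delay > maxDelay {
            delay = maxDelay
        }
    }
}

func main() {
    attempts := 0
    retryWithBackoff(func() error {
        attempts++
        if attempts < 3 {
            // the same condition the kubelet hit above
            return errors.New("failed to sync configmap cache: timed out waiting for the condition")
        }
        return nil
    })
    fmt.Println("mounted after", attempts, "attempts")
}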
Feb 9 09:47:18.503772 systemd-networkd[1548]: lxc_health: Link UP Feb 9 09:47:18.518388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:47:18.519093 systemd-networkd[1548]: lxc_health: Gained carrier Feb 9 09:47:18.907589 systemd-networkd[1548]: lxcfed6e581c51f: Link UP Feb 9 09:47:18.916392 kernel: eth0: renamed from tmpcd576 Feb 9 09:47:18.928122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfed6e581c51f: link becomes ready Feb 9 09:47:18.927534 systemd-networkd[1548]: lxcfed6e581c51f: Gained carrier Feb 9 09:47:18.941904 systemd-networkd[1548]: lxc0cc6592a4f0c: Link UP Feb 9 09:47:18.955394 kernel: eth0: renamed from tmp2150b Feb 9 09:47:18.973954 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0cc6592a4f0c: link becomes ready Feb 9 09:47:18.973488 systemd-networkd[1548]: lxc0cc6592a4f0c: Gained carrier Feb 9 09:47:19.153769 kubelet[2727]: I0209 09:47:19.153724 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-flrqq" podStartSLOduration=12.250594141 podCreationTimestamp="2024-02-09 09:46:53 +0000 UTC" firstStartedPulling="2024-02-09 09:46:55.289430208 +0000 UTC m=+15.416417708" lastFinishedPulling="2024-02-09 09:47:09.192480598 +0000 UTC m=+29.319468086" observedRunningTime="2024-02-09 09:47:14.58176204 +0000 UTC m=+34.708749540" watchObservedRunningTime="2024-02-09 09:47:19.153644519 +0000 UTC m=+39.280632019" Feb 9 09:47:19.645543 systemd-networkd[1548]: lxc_health: Gained IPv6LL Feb 9 09:47:20.093517 systemd-networkd[1548]: lxc0cc6592a4f0c: Gained IPv6LL Feb 9 09:47:20.541552 systemd-networkd[1548]: lxcfed6e581c51f: Gained IPv6LL Feb 9 09:47:27.193298 systemd[1]: Started sshd@5-172.31.20.254:22-139.178.89.65:45456.service. Feb 9 09:47:27.373625 sshd[4072]: Accepted publickey for core from 139.178.89.65 port 45456 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:27.377456 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:27.386454 systemd-logind[1732]: New session 6 of user core. Feb 9 09:47:27.387839 systemd[1]: Started session-6.scope. Feb 9 09:47:27.623566 env[1749]: time="2024-02-09T09:47:27.623336132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:27.624239 env[1749]: time="2024-02-09T09:47:27.624146364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:27.626573 env[1749]: time="2024-02-09T09:47:27.626456120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:27.627120 env[1749]: time="2024-02-09T09:47:27.627059728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2150be54627ccd2d102cf91ae3ebc1c8b13dadc14b3441645b3d51a58284f2f8 pid=4096 runtime=io.containerd.runc.v2 Feb 9 09:47:27.645979 env[1749]: time="2024-02-09T09:47:27.644079950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:27.645979 env[1749]: time="2024-02-09T09:47:27.644177837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:27.645979 env[1749]: time="2024-02-09T09:47:27.644205690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:27.645979 env[1749]: time="2024-02-09T09:47:27.644504632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd5763bc36b943c738f9a4cc59e5d3873b11c0ee7ee01acf490993461278e71b pid=4103 runtime=io.containerd.runc.v2 Feb 9 09:47:27.720500 systemd[1]: run-containerd-runc-k8s.io-cd5763bc36b943c738f9a4cc59e5d3873b11c0ee7ee01acf490993461278e71b-runc.FFQz5a.mount: Deactivated successfully. Feb 9 09:47:27.729752 systemd[1]: run-containerd-runc-k8s.io-2150be54627ccd2d102cf91ae3ebc1c8b13dadc14b3441645b3d51a58284f2f8-runc.XuvONa.mount: Deactivated successfully. Feb 9 09:47:27.740554 systemd[1]: Started cri-containerd-2150be54627ccd2d102cf91ae3ebc1c8b13dadc14b3441645b3d51a58284f2f8.scope. Feb 9 09:47:27.748016 systemd[1]: Started cri-containerd-cd5763bc36b943c738f9a4cc59e5d3873b11c0ee7ee01acf490993461278e71b.scope. Feb 9 09:47:27.802499 sshd[4072]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:27.808518 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:47:27.810051 systemd[1]: sshd@5-172.31.20.254:22-139.178.89.65:45456.service: Deactivated successfully. Feb 9 09:47:27.811567 systemd-logind[1732]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:47:27.813950 systemd-logind[1732]: Removed session 6. Feb 9 09:47:27.897187 env[1749]: time="2024-02-09T09:47:27.897124887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-prfdn,Uid:c3be1522-5ac8-4172-90e6-63e949a39ca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd5763bc36b943c738f9a4cc59e5d3873b11c0ee7ee01acf490993461278e71b\"" Feb 9 09:47:27.904578 env[1749]: time="2024-02-09T09:47:27.904511765Z" level=info msg="CreateContainer within sandbox \"cd5763bc36b943c738f9a4cc59e5d3873b11c0ee7ee01acf490993461278e71b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:47:27.945300 env[1749]: time="2024-02-09T09:47:27.945231997Z" level=info msg="CreateContainer within sandbox \"cd5763bc36b943c738f9a4cc59e5d3873b11c0ee7ee01acf490993461278e71b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"998ad6124fb342b307c5c857331fa3d0e820e5275d7b48b65b0df06ce031cec8\"" Feb 9 09:47:27.947698 env[1749]: time="2024-02-09T09:47:27.947611499Z" level=info msg="StartContainer for \"998ad6124fb342b307c5c857331fa3d0e820e5275d7b48b65b0df06ce031cec8\"" Feb 9 09:47:27.960393 env[1749]: time="2024-02-09T09:47:27.960196188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-7jbtg,Uid:03bfcd62-4271-4a55-aed4-98061339bd10,Namespace:kube-system,Attempt:0,} returns sandbox id \"2150be54627ccd2d102cf91ae3ebc1c8b13dadc14b3441645b3d51a58284f2f8\"" Feb 9 09:47:27.971146 env[1749]: time="2024-02-09T09:47:27.968302958Z" level=info msg="CreateContainer within sandbox \"2150be54627ccd2d102cf91ae3ebc1c8b13dadc14b3441645b3d51a58284f2f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:47:27.996910 env[1749]: time="2024-02-09T09:47:27.996824567Z" level=info msg="CreateContainer within sandbox \"2150be54627ccd2d102cf91ae3ebc1c8b13dadc14b3441645b3d51a58284f2f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef31e1777c23b2394aadae230bffffea575144aa62aea4c0fefda3dafde952aa\"" Feb 9 09:47:27.997258 systemd[1]: Started cri-containerd-998ad6124fb342b307c5c857331fa3d0e820e5275d7b48b65b0df06ce031cec8.scope. 
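The cilium_host/cilium_net pair above is a single veth device pair, and each lxc* interface is the host end of a per-pod veth pair whose peer becomes the sandbox's eth0 (hence the "eth0: renamed from tmpcd576" lines). Roughly what the agent does, sketched with the vishvananda/netlink package (interface names copied from the log; needs CAP_NET_ADMIN):

package main

import (
    "log"

    "github.com/vishvananda/netlink"
)

func main() {
    // One LinkAdd creates both ends of the pair.
    veth := &netlink.Veth{
        LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
        PeerName:  "cilium_net",
    }
    if err := netlink.LinkAdd(veth); err != nil {
        log.Fatal(err)
    }
    // Setting the links up is what triggers the
    // "ADDRCONF(NETDEV_CHANGE): ... link becomes ready" kernel lines above.
    for _, name := range []string{"cilium_host", "cilium_net"} {
        link, err := netlink.LinkByName(name)
        if err != nil {
            log.Fatal(err)
        }
        if err := netlink.LinkSetUp(link); err != nil {
            log.Fatal(err)
        }
    }
}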
Feb 9 09:47:28.002420 env[1749]: time="2024-02-09T09:47:28.001340934Z" level=info msg="StartContainer for \"ef31e1777c23b2394aadae230bffffea575144aa62aea4c0fefda3dafde952aa\"" Feb 9 09:47:28.055172 systemd[1]: Started cri-containerd-ef31e1777c23b2394aadae230bffffea575144aa62aea4c0fefda3dafde952aa.scope. Feb 9 09:47:28.163513 env[1749]: time="2024-02-09T09:47:28.162700422Z" level=info msg="StartContainer for \"998ad6124fb342b307c5c857331fa3d0e820e5275d7b48b65b0df06ce031cec8\" returns successfully" Feb 9 09:47:28.187653 env[1749]: time="2024-02-09T09:47:28.187574917Z" level=info msg="StartContainer for \"ef31e1777c23b2394aadae230bffffea575144aa62aea4c0fefda3dafde952aa\" returns successfully" Feb 9 09:47:28.611790 kubelet[2727]: I0209 09:47:28.611638 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-7jbtg" podStartSLOduration=34.611561419 podCreationTimestamp="2024-02-09 09:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:28.606876376 +0000 UTC m=+48.733863900" watchObservedRunningTime="2024-02-09 09:47:28.611561419 +0000 UTC m=+48.738548931" Feb 9 09:47:28.631495 kubelet[2727]: I0209 09:47:28.631430 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-prfdn" podStartSLOduration=34.631316156 podCreationTimestamp="2024-02-09 09:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:28.629564648 +0000 UTC m=+48.756552160" watchObservedRunningTime="2024-02-09 09:47:28.631316156 +0000 UTC m=+48.758303668" Feb 9 09:47:32.832981 systemd[1]: Started sshd@6-172.31.20.254:22-139.178.89.65:48072.service. Feb 9 09:47:33.005919 sshd[4251]: Accepted publickey for core from 139.178.89.65 port 48072 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:33.009013 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:33.016439 systemd-logind[1732]: New session 7 of user core. Feb 9 09:47:33.017747 systemd[1]: Started session-7.scope. Feb 9 09:47:33.268144 sshd[4251]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:33.272778 systemd-logind[1732]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:47:33.273396 systemd[1]: sshd@6-172.31.20.254:22-139.178.89.65:48072.service: Deactivated successfully. Feb 9 09:47:33.275151 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:47:33.277238 systemd-logind[1732]: Removed session 7. Feb 9 09:47:38.301094 systemd[1]: Started sshd@7-172.31.20.254:22-139.178.89.65:60362.service. Feb 9 09:47:38.475014 sshd[4265]: Accepted publickey for core from 139.178.89.65 port 60362 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:38.478181 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:38.487177 systemd-logind[1732]: New session 8 of user core. Feb 9 09:47:38.487377 systemd[1]: Started session-8.scope. Feb 9 09:47:38.746492 sshd[4265]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:38.751571 systemd-logind[1732]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:47:38.752734 systemd[1]: sshd@7-172.31.20.254:22-139.178.89.65:60362.service: Deactivated successfully. Feb 9 09:47:38.754110 systemd[1]: session-8.scope: Deactivated successfully. 
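Behind the RunPodSandbox/CreateContainer/StartContainer sequence that completed above, each "starting signal loop" line is a containerd-shim-runc-v2 process coming up for one sandbox or container. Outside the CRI plugin, the same lifecycle is a handful of client calls; a sketch against containerd's Go client, using the k8s.io namespace and socket path seen in the log (the image reference and IDs are placeholders):

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // The CRI plugin keeps everything under namespace=k8s.io, as above.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    container, err := client.NewContainer(ctx, "example",
        containerd.WithNewSnapshot("example-snap", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)))
    if err != nil {
        log.Fatal(err)
    }
    defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    // NewTask spawns the runc.v2 shim; its "starting signal loop"
    // message is what shows up in the log above.
    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    defer task.Delete(ctx)

    exitCh, err := task.Wait(ctx)
    if err != nil {
        log.Fatal(err)
    }
    if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
        log.Fatal(err)
    }
    log.Println("exit status:", (<-exitCh).ExitCode())
}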
Feb 9 09:47:38.755915 systemd-logind[1732]: Removed session 8. Feb 9 09:47:43.774484 systemd[1]: Started sshd@8-172.31.20.254:22-139.178.89.65:60368.service. Feb 9 09:47:43.940553 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 60368 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:43.943636 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:43.952056 systemd[1]: Started session-9.scope. Feb 9 09:47:43.954084 systemd-logind[1732]: New session 9 of user core. Feb 9 09:47:44.203066 sshd[4282]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:44.207723 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:47:44.209122 systemd-logind[1732]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:47:44.209561 systemd[1]: sshd@8-172.31.20.254:22-139.178.89.65:60368.service: Deactivated successfully. Feb 9 09:47:44.211986 systemd-logind[1732]: Removed session 9. Feb 9 09:47:49.231657 systemd[1]: Started sshd@9-172.31.20.254:22-139.178.89.65:52996.service. Feb 9 09:47:49.400176 sshd[4295]: Accepted publickey for core from 139.178.89.65 port 52996 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:49.402689 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:49.410449 systemd-logind[1732]: New session 10 of user core. Feb 9 09:47:49.411551 systemd[1]: Started session-10.scope. Feb 9 09:47:49.662750 sshd[4295]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:49.666915 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:47:49.668200 systemd-logind[1732]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:47:49.668664 systemd[1]: sshd@9-172.31.20.254:22-139.178.89.65:52996.service: Deactivated successfully. Feb 9 09:47:49.671260 systemd-logind[1732]: Removed session 10. Feb 9 09:47:49.692258 systemd[1]: Started sshd@10-172.31.20.254:22-139.178.89.65:53008.service. Feb 9 09:47:49.865292 sshd[4307]: Accepted publickey for core from 139.178.89.65 port 53008 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:49.868321 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:49.876028 systemd-logind[1732]: New session 11 of user core. Feb 9 09:47:49.876958 systemd[1]: Started session-11.scope. Feb 9 09:47:51.406254 sshd[4307]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:51.412749 systemd-logind[1732]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:47:51.413123 systemd[1]: sshd@10-172.31.20.254:22-139.178.89.65:53008.service: Deactivated successfully. Feb 9 09:47:51.414453 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:47:51.417957 systemd-logind[1732]: Removed session 11. Feb 9 09:47:51.436184 systemd[1]: Started sshd@11-172.31.20.254:22-139.178.89.65:53016.service. Feb 9 09:47:51.614424 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 53016 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:51.615632 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:51.623530 systemd-logind[1732]: New session 12 of user core. Feb 9 09:47:51.624845 systemd[1]: Started session-12.scope. Feb 9 09:47:51.882272 sshd[4317]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:51.888289 systemd[1]: sshd@11-172.31.20.254:22-139.178.89.65:53016.service: Deactivated successfully. 
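Each SSH login above follows the same unit choreography: systemd spawns a per-connection sshd@N-<local>:22-<remote>:<port>.service instance, pam_unix opens the session, and systemd-logind wraps it in a transient session-N.scope that is deactivated again when the session closes. Those scopes can be inspected over D-Bus; a small sketch with the go-systemd package (requires permission to talk to the system bus):

package main

import (
    "context"
    "fmt"
    "log"
    "strings"

    "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
    ctx := context.Background()
    conn, err := dbus.NewWithContext(ctx)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    units, err := conn.ListUnitsContext(ctx)
    if err != nil {
        log.Fatal(err)
    }
    for _, u := range units {
        // Prints e.g. "session-9.scope active running" while a login is open.
        if strings.HasPrefix(u.Name, "session-") && strings.HasSuffix(u.Name, ".scope") {
            fmt.Println(u.Name, u.ActiveState, u.SubState)
        }
    }
}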
Feb 9 09:47:51.889610 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:47:51.890028 systemd-logind[1732]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:47:51.893167 systemd-logind[1732]: Removed session 12. Feb 9 09:47:56.910418 systemd[1]: Started sshd@12-172.31.20.254:22-139.178.89.65:53018.service. Feb 9 09:47:57.082872 sshd[4331]: Accepted publickey for core from 139.178.89.65 port 53018 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:57.085940 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:57.095114 systemd[1]: Started session-13.scope. Feb 9 09:47:57.095446 systemd-logind[1732]: New session 13 of user core. Feb 9 09:47:57.349732 sshd[4331]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:57.354501 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:47:57.355946 systemd-logind[1732]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:47:57.356280 systemd[1]: sshd@12-172.31.20.254:22-139.178.89.65:53018.service: Deactivated successfully. Feb 9 09:47:57.358917 systemd-logind[1732]: Removed session 13. Feb 9 09:48:02.378050 systemd[1]: Started sshd@13-172.31.20.254:22-139.178.89.65:39014.service. Feb 9 09:48:02.549176 sshd[4344]: Accepted publickey for core from 139.178.89.65 port 39014 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:02.552308 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:02.560951 systemd[1]: Started session-14.scope. Feb 9 09:48:02.561960 systemd-logind[1732]: New session 14 of user core. Feb 9 09:48:02.816732 sshd[4344]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:02.821571 systemd[1]: sshd@13-172.31.20.254:22-139.178.89.65:39014.service: Deactivated successfully. Feb 9 09:48:02.823013 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:48:02.824502 systemd-logind[1732]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:48:02.826830 systemd-logind[1732]: Removed session 14. Feb 9 09:48:07.844713 systemd[1]: Started sshd@14-172.31.20.254:22-139.178.89.65:39028.service. Feb 9 09:48:08.013142 sshd[4356]: Accepted publickey for core from 139.178.89.65 port 39028 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:08.016137 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:08.023969 systemd-logind[1732]: New session 15 of user core. Feb 9 09:48:08.025079 systemd[1]: Started session-15.scope. Feb 9 09:48:08.285490 sshd[4356]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:08.291180 systemd[1]: sshd@14-172.31.20.254:22-139.178.89.65:39028.service: Deactivated successfully. Feb 9 09:48:08.292617 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:48:08.293820 systemd-logind[1732]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:48:08.296202 systemd-logind[1732]: Removed session 15. Feb 9 09:48:08.316068 systemd[1]: Started sshd@15-172.31.20.254:22-139.178.89.65:38948.service. Feb 9 09:48:08.501081 sshd[4368]: Accepted publickey for core from 139.178.89.65 port 38948 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:08.504143 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:08.512500 systemd-logind[1732]: New session 16 of user core. Feb 9 09:48:08.513002 systemd[1]: Started session-16.scope. 
Feb 9 09:48:08.826641 sshd[4368]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:08.831269 systemd-logind[1732]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:48:08.831892 systemd[1]: sshd@15-172.31.20.254:22-139.178.89.65:38948.service: Deactivated successfully. Feb 9 09:48:08.833261 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:48:08.835290 systemd-logind[1732]: Removed session 16. Feb 9 09:48:08.855525 systemd[1]: Started sshd@16-172.31.20.254:22-139.178.89.65:38956.service. Feb 9 09:48:09.029463 sshd[4377]: Accepted publickey for core from 139.178.89.65 port 38956 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:09.032461 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:09.040056 systemd-logind[1732]: New session 17 of user core. Feb 9 09:48:09.041129 systemd[1]: Started session-17.scope. Feb 9 09:48:10.508961 sshd[4377]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:10.515924 systemd[1]: sshd@16-172.31.20.254:22-139.178.89.65:38956.service: Deactivated successfully. Feb 9 09:48:10.517891 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:48:10.520047 systemd-logind[1732]: Session 17 logged out. Waiting for processes to exit. Feb 9 09:48:10.522157 systemd-logind[1732]: Removed session 17. Feb 9 09:48:10.552791 systemd[1]: Started sshd@17-172.31.20.254:22-139.178.89.65:38962.service. Feb 9 09:48:10.724078 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 38962 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:10.725318 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:10.734590 systemd[1]: Started session-18.scope. Feb 9 09:48:10.735240 systemd-logind[1732]: New session 18 of user core. Feb 9 09:48:11.421460 sshd[4393]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:11.426592 systemd-logind[1732]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:48:11.427005 systemd[1]: sshd@17-172.31.20.254:22-139.178.89.65:38962.service: Deactivated successfully. Feb 9 09:48:11.428392 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:48:11.430461 systemd-logind[1732]: Removed session 18. Feb 9 09:48:11.452237 systemd[1]: Started sshd@18-172.31.20.254:22-139.178.89.65:38966.service. Feb 9 09:48:11.625669 sshd[4404]: Accepted publickey for core from 139.178.89.65 port 38966 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:11.628182 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:11.637143 systemd[1]: Started session-19.scope. Feb 9 09:48:11.637934 systemd-logind[1732]: New session 19 of user core. Feb 9 09:48:11.883865 sshd[4404]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:11.889019 systemd[1]: sshd@18-172.31.20.254:22-139.178.89.65:38966.service: Deactivated successfully. Feb 9 09:48:11.890820 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:48:11.892336 systemd-logind[1732]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:48:11.894625 systemd-logind[1732]: Removed session 19. Feb 9 09:48:16.913897 systemd[1]: Started sshd@19-172.31.20.254:22-139.178.89.65:38968.service. 
Feb 9 09:48:17.085150 sshd[4416]: Accepted publickey for core from 139.178.89.65 port 38968 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:17.088398 sshd[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:17.097241 systemd[1]: Started session-20.scope. Feb 9 09:48:17.098482 systemd-logind[1732]: New session 20 of user core. Feb 9 09:48:17.348012 sshd[4416]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:17.353104 systemd[1]: sshd@19-172.31.20.254:22-139.178.89.65:38968.service: Deactivated successfully. Feb 9 09:48:17.354529 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:48:17.355831 systemd-logind[1732]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:48:17.357272 systemd-logind[1732]: Removed session 20. Feb 9 09:48:22.378393 systemd[1]: Started sshd@20-172.31.20.254:22-139.178.89.65:48080.service. Feb 9 09:48:22.549027 sshd[4430]: Accepted publickey for core from 139.178.89.65 port 48080 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:22.552157 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:22.561177 systemd[1]: Started session-21.scope. Feb 9 09:48:22.562420 systemd-logind[1732]: New session 21 of user core. Feb 9 09:48:22.809116 sshd[4430]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:22.814090 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:48:22.815294 systemd[1]: sshd@20-172.31.20.254:22-139.178.89.65:48080.service: Deactivated successfully. Feb 9 09:48:22.817251 systemd-logind[1732]: Session 21 logged out. Waiting for processes to exit. Feb 9 09:48:22.819073 systemd-logind[1732]: Removed session 21. Feb 9 09:48:27.839077 systemd[1]: Started sshd@21-172.31.20.254:22-139.178.89.65:48084.service. Feb 9 09:48:28.015030 sshd[4444]: Accepted publickey for core from 139.178.89.65 port 48084 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:28.018142 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:28.026860 systemd[1]: Started session-22.scope. Feb 9 09:48:28.027674 systemd-logind[1732]: New session 22 of user core. Feb 9 09:48:28.272894 sshd[4444]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:28.277817 systemd-logind[1732]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:48:28.278238 systemd[1]: sshd@21-172.31.20.254:22-139.178.89.65:48084.service: Deactivated successfully. Feb 9 09:48:28.279669 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:48:28.281832 systemd-logind[1732]: Removed session 22. Feb 9 09:48:33.305869 systemd[1]: Started sshd@22-172.31.20.254:22-139.178.89.65:39780.service. Feb 9 09:48:33.485996 sshd[4456]: Accepted publickey for core from 139.178.89.65 port 39780 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:33.489070 sshd[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:33.497500 systemd-logind[1732]: New session 23 of user core. Feb 9 09:48:33.497676 systemd[1]: Started session-23.scope. Feb 9 09:48:33.746788 sshd[4456]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:33.751648 systemd-logind[1732]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:48:33.752277 systemd[1]: sshd@22-172.31.20.254:22-139.178.89.65:39780.service: Deactivated successfully. 
Feb 9 09:48:33.753669 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:48:33.755600 systemd-logind[1732]: Removed session 23. Feb 9 09:48:33.776942 systemd[1]: Started sshd@23-172.31.20.254:22-139.178.89.65:39784.service. Feb 9 09:48:33.947055 sshd[4467]: Accepted publickey for core from 139.178.89.65 port 39784 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:48:33.950100 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:33.958613 systemd[1]: Started session-24.scope. Feb 9 09:48:33.960911 systemd-logind[1732]: New session 24 of user core. Feb 9 09:48:37.647819 env[1749]: time="2024-02-09T09:48:37.647304996Z" level=info msg="StopContainer for \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\" with timeout 30 (s)" Feb 9 09:48:37.648843 env[1749]: time="2024-02-09T09:48:37.648795382Z" level=info msg="Stop container \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\" with signal terminated" Feb 9 09:48:37.685834 systemd[1]: cri-containerd-723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785.scope: Deactivated successfully. Feb 9 09:48:37.689026 env[1749]: time="2024-02-09T09:48:37.688886438Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:48:37.700208 env[1749]: time="2024-02-09T09:48:37.700156453Z" level=info msg="StopContainer for \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\" with timeout 1 (s)" Feb 9 09:48:37.701087 env[1749]: time="2024-02-09T09:48:37.701040356Z" level=info msg="Stop container \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\" with signal terminated" Feb 9 09:48:37.718316 systemd-networkd[1548]: lxc_health: Link DOWN Feb 9 09:48:37.718329 systemd-networkd[1548]: lxc_health: Lost carrier Feb 9 09:48:37.748892 systemd[1]: cri-containerd-9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7.scope: Deactivated successfully. Feb 9 09:48:37.749462 systemd[1]: cri-containerd-9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7.scope: Consumed 14.439s CPU time. Feb 9 09:48:37.761875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785-rootfs.mount: Deactivated successfully. Feb 9 09:48:37.786450 env[1749]: time="2024-02-09T09:48:37.786319294Z" level=info msg="shim disconnected" id=723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785 Feb 9 09:48:37.786450 env[1749]: time="2024-02-09T09:48:37.786435361Z" level=warning msg="cleaning up after shim disconnected" id=723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785 namespace=k8s.io Feb 9 09:48:37.786818 env[1749]: time="2024-02-09T09:48:37.786458341Z" level=info msg="cleaning up dead shim" Feb 9 09:48:37.803652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7-rootfs.mount: Deactivated successfully. 
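"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" above is the usual two-phase stop: deliver SIGTERM, give the task its grace period, then escalate to SIGKILL if it has not exited; the scope's CPU accounting ("Consumed 14.439s CPU time") is reported when the unit finally goes down. A sketch of that escalation with the containerd task API (library-style snippet; the task would come from a lookup like the client example earlier):

package stopper

import (
    "context"
    "syscall"
    "time"

    "github.com/containerd/containerd"
)

// stopTask mirrors the CRI StopContainer flow seen above: SIGTERM first,
// SIGKILL once the grace period (30s in the log) runs out.
func stopTask(ctx context.Context, task containerd.Task, grace time.Duration) error {
    exitCh, err := task.Wait(ctx)
    if err != nil {
        return err
    }
    if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
        return err
    }
    select {
    case <-exitCh:
        return nil // exited cleanly within the grace period
    case <-time.After(grace):
        return task.Kill(ctx, syscall.SIGKILL) // hard stop
    }
}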
Feb 9 09:48:37.812894 env[1749]: time="2024-02-09T09:48:37.812814300Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4535 runtime=io.containerd.runc.v2\n" Feb 9 09:48:37.817956 env[1749]: time="2024-02-09T09:48:37.817894733Z" level=info msg="StopContainer for \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\" returns successfully" Feb 9 09:48:37.819094 env[1749]: time="2024-02-09T09:48:37.819011358Z" level=info msg="StopPodSandbox for \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\"" Feb 9 09:48:37.819584 env[1749]: time="2024-02-09T09:48:37.819523721Z" level=info msg="Container to stop \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:37.822989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd-shm.mount: Deactivated successfully. Feb 9 09:48:37.826112 env[1749]: time="2024-02-09T09:48:37.826050258Z" level=info msg="shim disconnected" id=9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7 Feb 9 09:48:37.826471 env[1749]: time="2024-02-09T09:48:37.826425903Z" level=warning msg="cleaning up after shim disconnected" id=9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7 namespace=k8s.io Feb 9 09:48:37.826672 env[1749]: time="2024-02-09T09:48:37.826621027Z" level=info msg="cleaning up dead shim" Feb 9 09:48:37.841860 systemd[1]: cri-containerd-bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd.scope: Deactivated successfully. Feb 9 09:48:37.852495 env[1749]: time="2024-02-09T09:48:37.852436542Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4552 runtime=io.containerd.runc.v2\n" Feb 9 09:48:37.856046 env[1749]: time="2024-02-09T09:48:37.855984925Z" level=info msg="StopContainer for \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\" returns successfully" Feb 9 09:48:37.857158 env[1749]: time="2024-02-09T09:48:37.857090737Z" level=info msg="StopPodSandbox for \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\"" Feb 9 09:48:37.857320 env[1749]: time="2024-02-09T09:48:37.857199148Z" level=info msg="Container to stop \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:37.857320 env[1749]: time="2024-02-09T09:48:37.857231344Z" level=info msg="Container to stop \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:37.857320 env[1749]: time="2024-02-09T09:48:37.857260637Z" level=info msg="Container to stop \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:37.857320 env[1749]: time="2024-02-09T09:48:37.857288838Z" level=info msg="Container to stop \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:37.857739 env[1749]: time="2024-02-09T09:48:37.857316294Z" level=info msg="Container to stop \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:37.873123 
systemd[1]: cri-containerd-dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7.scope: Deactivated successfully. Feb 9 09:48:37.912367 env[1749]: time="2024-02-09T09:48:37.910397244Z" level=info msg="shim disconnected" id=bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd Feb 9 09:48:37.913634 env[1749]: time="2024-02-09T09:48:37.913586422Z" level=warning msg="cleaning up after shim disconnected" id=bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd namespace=k8s.io Feb 9 09:48:37.913809 env[1749]: time="2024-02-09T09:48:37.913780527Z" level=info msg="cleaning up dead shim" Feb 9 09:48:37.932216 env[1749]: time="2024-02-09T09:48:37.931866345Z" level=info msg="shim disconnected" id=dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7 Feb 9 09:48:37.932788 env[1749]: time="2024-02-09T09:48:37.932744873Z" level=warning msg="cleaning up after shim disconnected" id=dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7 namespace=k8s.io Feb 9 09:48:37.933965 env[1749]: time="2024-02-09T09:48:37.933674546Z" level=info msg="cleaning up dead shim" Feb 9 09:48:37.944161 env[1749]: time="2024-02-09T09:48:37.944102490Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4605 runtime=io.containerd.runc.v2\n" Feb 9 09:48:37.944973 env[1749]: time="2024-02-09T09:48:37.944929008Z" level=info msg="TearDown network for sandbox \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" successfully" Feb 9 09:48:37.945192 env[1749]: time="2024-02-09T09:48:37.945158345Z" level=info msg="StopPodSandbox for \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" returns successfully" Feb 9 09:48:37.966394 env[1749]: time="2024-02-09T09:48:37.965540151Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4613 runtime=io.containerd.runc.v2\n" Feb 9 09:48:37.966394 env[1749]: time="2024-02-09T09:48:37.966097851Z" level=info msg="TearDown network for sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" successfully" Feb 9 09:48:37.966394 env[1749]: time="2024-02-09T09:48:37.966139240Z" level=info msg="StopPodSandbox for \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" returns successfully" Feb 9 09:48:38.049951 kubelet[2727]: I0209 09:48:38.049915 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-xtables-lock\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.050671 kubelet[2727]: I0209 09:48:38.050616 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-etc-cni-netd\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.050755 kubelet[2727]: I0209 09:48:38.050699 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-lib-modules\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.050824 kubelet[2727]: I0209 09:48:38.050770 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-cgroup\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.050892 kubelet[2727]: I0209 09:48:38.050852 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvmjg\" (UniqueName: \"kubernetes.io/projected/b282a02d-267b-49f6-844c-3a7c201ddc94-kube-api-access-tvmjg\") pod \"b282a02d-267b-49f6-844c-3a7c201ddc94\" (UID: \"b282a02d-267b-49f6-844c-3a7c201ddc94\") " Feb 9 09:48:38.050983 kubelet[2727]: I0209 09:48:38.050924 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-kernel\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.051058 kubelet[2727]: I0209 09:48:38.050996 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hubble-tls\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.051058 kubelet[2727]: I0209 09:48:38.051045 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-config-path\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.051181 kubelet[2727]: I0209 09:48:38.051112 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-net\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.051306 kubelet[2727]: I0209 09:48:38.051267 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-run\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.051421 kubelet[2727]: I0209 09:48:38.050411 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.051512 kubelet[2727]: I0209 09:48:38.051421 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.051512 kubelet[2727]: I0209 09:48:38.051480 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.051638 kubelet[2727]: I0209 09:48:38.051519 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.051840 kubelet[2727]: W0209 09:48:38.051788 2727 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/03aac178-ecdd-49a7-86ff-52f2ae1c5710/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:48:38.055450 kubelet[2727]: I0209 09:48:38.055417 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-bpf-maps\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.055661 kubelet[2727]: I0209 09:48:38.055638 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cni-path\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.055827 kubelet[2727]: I0209 09:48:38.055805 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03aac178-ecdd-49a7-86ff-52f2ae1c5710-clustermesh-secrets\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.055972 kubelet[2727]: I0209 09:48:38.055949 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hostproc\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.056141 kubelet[2727]: I0209 09:48:38.056118 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6cx6\" (UniqueName: \"kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-kube-api-access-r6cx6\") pod \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\" (UID: \"03aac178-ecdd-49a7-86ff-52f2ae1c5710\") " Feb 9 09:48:38.056302 kubelet[2727]: I0209 09:48:38.056281 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b282a02d-267b-49f6-844c-3a7c201ddc94-cilium-config-path\") pod \"b282a02d-267b-49f6-844c-3a7c201ddc94\" (UID: \"b282a02d-267b-49f6-844c-3a7c201ddc94\") " Feb 9 09:48:38.056573 kubelet[2727]: I0209 09:48:38.056537 2727 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-xtables-lock\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.056711 kubelet[2727]: I0209 09:48:38.056689 2727 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-etc-cni-netd\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.056844 kubelet[2727]: I0209 09:48:38.056824 2727 reconciler_common.go:300] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-lib-modules\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.057006 kubelet[2727]: I0209 09:48:38.056959 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-cgroup\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.057131 kubelet[2727]: I0209 09:48:38.056610 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:48:38.057268 kubelet[2727]: I0209 09:48:38.056653 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.057804 kubelet[2727]: I0209 09:48:38.057420 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.059373 kubelet[2727]: I0209 09:48:38.058263 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.059578 kubelet[2727]: I0209 09:48:38.058304 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.059740 kubelet[2727]: I0209 09:48:38.058331 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cni-path" (OuterVolumeSpecName: "cni-path") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.059740 kubelet[2727]: I0209 09:48:38.058710 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hostproc" (OuterVolumeSpecName: "hostproc") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:38.059881 kubelet[2727]: W0209 09:48:38.059204 2727 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b282a02d-267b-49f6-844c-3a7c201ddc94/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:48:38.065491 kubelet[2727]: I0209 09:48:38.065236 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b282a02d-267b-49f6-844c-3a7c201ddc94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b282a02d-267b-49f6-844c-3a7c201ddc94" (UID: "b282a02d-267b-49f6-844c-3a7c201ddc94"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:48:38.065858 kubelet[2727]: I0209 09:48:38.065807 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b282a02d-267b-49f6-844c-3a7c201ddc94-kube-api-access-tvmjg" (OuterVolumeSpecName: "kube-api-access-tvmjg") pod "b282a02d-267b-49f6-844c-3a7c201ddc94" (UID: "b282a02d-267b-49f6-844c-3a7c201ddc94"). InnerVolumeSpecName "kube-api-access-tvmjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:38.070131 kubelet[2727]: I0209 09:48:38.069960 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:38.071502 kubelet[2727]: I0209 09:48:38.071436 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03aac178-ecdd-49a7-86ff-52f2ae1c5710-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:38.076332 kubelet[2727]: I0209 09:48:38.076282 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-kube-api-access-r6cx6" (OuterVolumeSpecName: "kube-api-access-r6cx6") pod "03aac178-ecdd-49a7-86ff-52f2ae1c5710" (UID: "03aac178-ecdd-49a7-86ff-52f2ae1c5710"). InnerVolumeSpecName "kube-api-access-r6cx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:38.157909 kubelet[2727]: I0209 09:48:38.157871 2727 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tvmjg\" (UniqueName: \"kubernetes.io/projected/b282a02d-267b-49f6-844c-3a7c201ddc94-kube-api-access-tvmjg\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.158188 kubelet[2727]: I0209 09:48:38.158164 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-kernel\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.158330 kubelet[2727]: I0209 09:48:38.158308 2727 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hubble-tls\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.158508 kubelet[2727]: I0209 09:48:38.158486 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-config-path\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.158633 kubelet[2727]: I0209 09:48:38.158612 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-host-proc-sys-net\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.158776 kubelet[2727]: I0209 09:48:38.158756 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cilium-run\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.158907 kubelet[2727]: I0209 09:48:38.158887 2727 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03aac178-ecdd-49a7-86ff-52f2ae1c5710-clustermesh-secrets\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.159044 kubelet[2727]: I0209 09:48:38.159023 2727 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-hostproc\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.159169 kubelet[2727]: I0209 09:48:38.159149 2727 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r6cx6\" (UniqueName: \"kubernetes.io/projected/03aac178-ecdd-49a7-86ff-52f2ae1c5710-kube-api-access-r6cx6\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.159294 kubelet[2727]: I0209 09:48:38.159273 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b282a02d-267b-49f6-844c-3a7c201ddc94-cilium-config-path\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.159436 kubelet[2727]: I0209 09:48:38.159417 2727 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-bpf-maps\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.159567 kubelet[2727]: I0209 09:48:38.159548 2727 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03aac178-ecdd-49a7-86ff-52f2ae1c5710-cni-path\") on node \"ip-172-31-20-254\" DevicePath \"\"" Feb 9 09:48:38.343159 systemd[1]: Removed slice kubepods-burstable-pod03aac178_ecdd_49a7_86ff_52f2ae1c5710.slice. 
Feb 9 09:48:38.343472 systemd[1]: kubepods-burstable-pod03aac178_ecdd_49a7_86ff_52f2ae1c5710.slice: Consumed 14.662s CPU time.
Feb 9 09:48:38.351303 systemd[1]: Removed slice kubepods-besteffort-podb282a02d_267b_49f6_844c_3a7c201ddc94.slice.
Feb 9 09:48:38.627682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7-rootfs.mount: Deactivated successfully.
Feb 9 09:48:38.627849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7-shm.mount: Deactivated successfully.
Feb 9 09:48:38.627987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd-rootfs.mount: Deactivated successfully.
Feb 9 09:48:38.628148 systemd[1]: var-lib-kubelet-pods-b282a02d\x2d267b\x2d49f6\x2d844c\x2d3a7c201ddc94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvmjg.mount: Deactivated successfully.
Feb 9 09:48:38.628297 systemd[1]: var-lib-kubelet-pods-03aac178\x2decdd\x2d49a7\x2d86ff\x2d52f2ae1c5710-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr6cx6.mount: Deactivated successfully.
Feb 9 09:48:38.628526 systemd[1]: var-lib-kubelet-pods-03aac178\x2decdd\x2d49a7\x2d86ff\x2d52f2ae1c5710-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 09:48:38.628754 systemd[1]: var-lib-kubelet-pods-03aac178\x2decdd\x2d49a7\x2d86ff\x2d52f2ae1c5710-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 09:48:38.802950 kubelet[2727]: I0209 09:48:38.802915 2727 scope.go:115] "RemoveContainer" containerID="9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7"
Feb 9 09:48:38.815459 env[1749]: time="2024-02-09T09:48:38.814525234Z" level=info msg="RemoveContainer for \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\""
Feb 9 09:48:38.830385 env[1749]: time="2024-02-09T09:48:38.829106088Z" level=info msg="RemoveContainer for \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\" returns successfully"
Feb 9 09:48:38.831784 kubelet[2727]: I0209 09:48:38.831745 2727 scope.go:115] "RemoveContainer" containerID="da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b"
Feb 9 09:48:38.834696 env[1749]: time="2024-02-09T09:48:38.834156809Z" level=info msg="RemoveContainer for \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\""
Feb 9 09:48:38.841582 env[1749]: time="2024-02-09T09:48:38.841510741Z" level=info msg="RemoveContainer for \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\" returns successfully"
Feb 9 09:48:38.842060 kubelet[2727]: I0209 09:48:38.842031 2727 scope.go:115] "RemoveContainer" containerID="216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66"
Feb 9 09:48:38.845274 env[1749]: time="2024-02-09T09:48:38.845218356Z" level=info msg="RemoveContainer for \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\""
Feb 9 09:48:38.850677 env[1749]: time="2024-02-09T09:48:38.850538087Z" level=info msg="RemoveContainer for \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\" returns successfully"
Feb 9 09:48:38.852410 kubelet[2727]: I0209 09:48:38.852324 2727 scope.go:115] "RemoveContainer" containerID="a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261"
Feb 9 09:48:38.856628 env[1749]: time="2024-02-09T09:48:38.856544557Z" level=info msg="RemoveContainer for \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\""
Feb 9 09:48:38.861547 env[1749]: time="2024-02-09T09:48:38.861476164Z" level=info msg="RemoveContainer for \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\" returns successfully"
Feb 9 09:48:38.861964 kubelet[2727]: I0209 09:48:38.861921 2727 scope.go:115] "RemoveContainer" containerID="e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5"
Feb 9 09:48:38.864763 env[1749]: time="2024-02-09T09:48:38.864695020Z" level=info msg="RemoveContainer for \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\""
Feb 9 09:48:38.869881 env[1749]: time="2024-02-09T09:48:38.869803018Z" level=info msg="RemoveContainer for \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\" returns successfully"
Feb 9 09:48:38.870276 kubelet[2727]: I0209 09:48:38.870219 2727 scope.go:115] "RemoveContainer" containerID="9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7"
Feb 9 09:48:38.870768 env[1749]: time="2024-02-09T09:48:38.870651053Z" level=error msg="ContainerStatus for \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\": not found"
Feb 9 09:48:38.871462 kubelet[2727]: E0209 09:48:38.871331 2727 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\": not found" containerID="9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7"
Feb 9 09:48:38.871613 kubelet[2727]: I0209 09:48:38.871497 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7} err="failed to get container status \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a3d5185836437b4a305246f2e322bf9e72ab3dbbecf916a470149763c4929d7\": not found"
Feb 9 09:48:38.871613 kubelet[2727]: I0209 09:48:38.871551 2727 scope.go:115] "RemoveContainer" containerID="da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b"
Feb 9 09:48:38.872058 env[1749]: time="2024-02-09T09:48:38.871950418Z" level=error msg="ContainerStatus for \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\": not found"
Feb 9 09:48:38.872367 kubelet[2727]: E0209 09:48:38.872306 2727 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\": not found" containerID="da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b"
Feb 9 09:48:38.872467 kubelet[2727]: I0209 09:48:38.872436 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b} err="failed to get container status \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\": rpc error: code = NotFound desc = an error occurred when try to find container \"da02cd9f681f5a5b23f9f3cfc9b9c407be6522bb43d4ad21e50598ac4752878b\": not found"
Feb 9 09:48:38.872467 kubelet[2727]: I0209 09:48:38.872461 2727 scope.go:115] "RemoveContainer" containerID="216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66"
Feb 9 09:48:38.872895 env[1749]: time="2024-02-09T09:48:38.872811341Z" level=error msg="ContainerStatus for \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\": not found"
Feb 9 09:48:38.873168 kubelet[2727]: E0209 09:48:38.873142 2727 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\": not found" containerID="216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66"
Feb 9 09:48:38.873314 kubelet[2727]: I0209 09:48:38.873293 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66} err="failed to get container status \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\": rpc error: code = NotFound desc = an error occurred when try to find container \"216bfd7e3e2116656e9f4e3e511144d3a3e1fd6c648cb8a5ada0ddf62a224c66\": not found"
Feb 9 09:48:38.873471 kubelet[2727]: I0209 09:48:38.873449 2727 scope.go:115] "RemoveContainer" containerID="a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261"
Feb 9 09:48:38.873939 env[1749]: time="2024-02-09T09:48:38.873854861Z" level=error msg="ContainerStatus for \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\": not found"
Feb 9 09:48:38.874188 kubelet[2727]: E0209 09:48:38.874157 2727 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\": not found" containerID="a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261"
Feb 9 09:48:38.874287 kubelet[2727]: I0209 09:48:38.874233 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261} err="failed to get container status \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2b6334678e8720ba76860e6cb79b0965dc9ffe5e66a69417cf0aae53559f261\": not found"
Feb 9 09:48:38.874287 kubelet[2727]: I0209 09:48:38.874257 2727 scope.go:115] "RemoveContainer" containerID="e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5"
Feb 9 09:48:38.874707 env[1749]: time="2024-02-09T09:48:38.874628242Z" level=error msg="ContainerStatus for \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\": not found"
Feb 9 09:48:38.874987 kubelet[2727]: E0209 09:48:38.874962 2727 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\": not found" containerID="e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5"
Feb 9 09:48:38.875141 kubelet[2727]: I0209 09:48:38.875121 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5} err="failed to get container status \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6f69a7cec7bac62e2c3385873307bd356701fbbe4283f6ab04cbc03ede01ce5\": not found"
Feb 9 09:48:38.875273 kubelet[2727]: I0209 09:48:38.875252 2727 scope.go:115] "RemoveContainer" containerID="723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785"
Feb 9 09:48:38.877568 env[1749]: time="2024-02-09T09:48:38.877489550Z" level=info msg="RemoveContainer for \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\""
Feb 9 09:48:38.884592 env[1749]: time="2024-02-09T09:48:38.882148954Z" level=info msg="RemoveContainer for \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\" returns successfully"
Feb 9 09:48:38.885289 kubelet[2727]: I0209 09:48:38.885254 2727 scope.go:115] "RemoveContainer" containerID="723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785"
Feb 9 09:48:38.886225 env[1749]: time="2024-02-09T09:48:38.886116543Z" level=error msg="ContainerStatus for \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\": not found"
Feb 9 09:48:38.886657 kubelet[2727]: E0209 09:48:38.886628 2727 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\": not found" containerID="723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785"
Feb 9 09:48:38.886947 kubelet[2727]: I0209 09:48:38.886927 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785} err="failed to get container status \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\": rpc error: code = NotFound desc = an error occurred when try to find container \"723291948f967ff47064d7ddbf369407aa68e39863e1d1a91af9adf041bdf785\": not found"
Feb 9 09:48:39.567741 sshd[4467]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:39.573588 systemd-logind[1732]: Session 24 logged out. Waiting for processes to exit.
Feb 9 09:48:39.575890 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 09:48:39.576261 systemd[1]: session-24.scope: Consumed 2.877s CPU time.
Feb 9 09:48:39.577941 systemd[1]: sshd@23-172.31.20.254:22-139.178.89.65:39784.service: Deactivated successfully.
Feb 9 09:48:39.580536 systemd-logind[1732]: Removed session 24.
Feb 9 09:48:39.598928 systemd[1]: Started sshd@24-172.31.20.254:22-139.178.89.65:50650.service.
Feb 9 09:48:39.771870 sshd[4637]: Accepted publickey for core from 139.178.89.65 port 50650 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:48:39.774969 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:39.784043 systemd[1]: Started session-25.scope.
Feb 9 09:48:39.785466 systemd-logind[1732]: New session 25 of user core.
Feb 9 09:48:40.272364 env[1749]: time="2024-02-09T09:48:40.272290661Z" level=info msg="StopPodSandbox for \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\""
Feb 9 09:48:40.273145 env[1749]: time="2024-02-09T09:48:40.273049031Z" level=info msg="TearDown network for sandbox \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" successfully"
Feb 9 09:48:40.273316 env[1749]: time="2024-02-09T09:48:40.273283012Z" level=info msg="StopPodSandbox for \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" returns successfully"
Feb 9 09:48:40.274252 env[1749]: time="2024-02-09T09:48:40.274194756Z" level=info msg="RemovePodSandbox for \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\""
Feb 9 09:48:40.274391 env[1749]: time="2024-02-09T09:48:40.274249706Z" level=info msg="Forcibly stopping sandbox \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\""
Feb 9 09:48:40.274460 env[1749]: time="2024-02-09T09:48:40.274397717Z" level=info msg="TearDown network for sandbox \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" successfully"
Feb 9 09:48:40.279226 env[1749]: time="2024-02-09T09:48:40.279154648Z" level=info msg="RemovePodSandbox \"bf087b54697253ce72a881c9bd7abc5ea55e5f99f775d973921283d0b43663cd\" returns successfully"
Feb 9 09:48:40.280248 env[1749]: time="2024-02-09T09:48:40.280195684Z" level=info msg="StopPodSandbox for \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\""
Feb 9 09:48:40.280427 env[1749]: time="2024-02-09T09:48:40.280332367Z" level=info msg="TearDown network for sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" successfully"
Feb 9 09:48:40.280535 env[1749]: time="2024-02-09T09:48:40.280422537Z" level=info msg="StopPodSandbox for \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" returns successfully"
Feb 9 09:48:40.281191 env[1749]: time="2024-02-09T09:48:40.281140237Z" level=info msg="RemovePodSandbox for \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\""
Feb 9 09:48:40.281514 env[1749]: time="2024-02-09T09:48:40.281435276Z" level=info msg="Forcibly stopping sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\""
Feb 9 09:48:40.281806 env[1749]: time="2024-02-09T09:48:40.281772471Z" level=info msg="TearDown network for sandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" successfully"
Feb 9 09:48:40.287439 env[1749]: time="2024-02-09T09:48:40.287381458Z" level=info msg="RemovePodSandbox \"dd29770a517d6ee69be0ccbed8bff5eecc115c9a46dbe5265c427ec5f4ca66e7\" returns successfully"
Feb 9 09:48:40.334779 kubelet[2727]: I0209 09:48:40.334732 2727 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=03aac178-ecdd-49a7-86ff-52f2ae1c5710 path="/var/lib/kubelet/pods/03aac178-ecdd-49a7-86ff-52f2ae1c5710/volumes"
Feb 9 09:48:40.336310 kubelet[2727]: I0209 09:48:40.336266 2727 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b282a02d-267b-49f6-844c-3a7c201ddc94 path="/var/lib/kubelet/pods/b282a02d-267b-49f6-844c-3a7c201ddc94/volumes"
Feb 9 09:48:40.627794 kubelet[2727]: E0209 09:48:40.627753 2727 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 09:48:41.440289 sshd[4637]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:41.447009 systemd-logind[1732]: Session 25 logged out. Waiting for processes to exit.
Feb 9 09:48:41.447465 systemd[1]: sshd@24-172.31.20.254:22-139.178.89.65:50650.service: Deactivated successfully.
Feb 9 09:48:41.448772 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 09:48:41.449106 systemd[1]: session-25.scope: Consumed 1.440s CPU time.
Feb 9 09:48:41.450619 systemd-logind[1732]: Removed session 25.
Feb 9 09:48:41.462057 kubelet[2727]: I0209 09:48:41.462013 2727 topology_manager.go:212] "Topology Admit Handler"
Feb 9 09:48:41.462793 kubelet[2727]: E0209 09:48:41.462764 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03aac178-ecdd-49a7-86ff-52f2ae1c5710" containerName="mount-cgroup"
Feb 9 09:48:41.464009 kubelet[2727]: E0209 09:48:41.463479 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03aac178-ecdd-49a7-86ff-52f2ae1c5710" containerName="mount-bpf-fs"
Feb 9 09:48:41.464009 kubelet[2727]: E0209 09:48:41.463540 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03aac178-ecdd-49a7-86ff-52f2ae1c5710" containerName="clean-cilium-state"
Feb 9 09:48:41.464009 kubelet[2727]: E0209 09:48:41.463570 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b282a02d-267b-49f6-844c-3a7c201ddc94" containerName="cilium-operator"
Feb 9 09:48:41.464009 kubelet[2727]: E0209 09:48:41.463589 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03aac178-ecdd-49a7-86ff-52f2ae1c5710" containerName="apply-sysctl-overwrites"
Feb 9 09:48:41.464009 kubelet[2727]: E0209 09:48:41.463630 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03aac178-ecdd-49a7-86ff-52f2ae1c5710" containerName="cilium-agent"
Feb 9 09:48:41.464009 kubelet[2727]: I0209 09:48:41.463716 2727 memory_manager.go:346] "RemoveStaleState removing state" podUID="03aac178-ecdd-49a7-86ff-52f2ae1c5710" containerName="cilium-agent"
Feb 9 09:48:41.464009 kubelet[2727]: I0209 09:48:41.463738 2727 memory_manager.go:346] "RemoveStaleState removing state" podUID="b282a02d-267b-49f6-844c-3a7c201ddc94" containerName="cilium-operator"
Feb 9 09:48:41.470497 systemd[1]: Started sshd@25-172.31.20.254:22-139.178.89.65:50654.service.
Feb 9 09:48:41.518430 systemd[1]: Created slice kubepods-burstable-pod86a057ca_b5bb_46dd_9e8a_455708881fc0.slice.
Feb 9 09:48:41.597432 kubelet[2727]: I0209 09:48:41.597378 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-xtables-lock\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.597720 kubelet[2727]: I0209 09:48:41.597692 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-net\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.597938 kubelet[2727]: I0209 09:48:41.597916 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-kernel\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.598209 kubelet[2727]: I0209 09:48:41.598183 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwh29\" (UniqueName: \"kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-kube-api-access-mwh29\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.598487 kubelet[2727]: I0209 09:48:41.598464 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-cgroup\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.598684 kubelet[2727]: I0209 09:48:41.598662 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-hostproc\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.598885 kubelet[2727]: I0209 09:48:41.598863 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-config-path\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.599144 kubelet[2727]: I0209 09:48:41.599120 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-run\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.599372 kubelet[2727]: I0209 09:48:41.599331 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cni-path\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.599631 kubelet[2727]: I0209 09:48:41.599609 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-etc-cni-netd\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.599838 kubelet[2727]: I0209 09:48:41.599817 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-clustermesh-secrets\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.600030 kubelet[2727]: I0209 09:48:41.600011 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-ipsec-secrets\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.600225 kubelet[2727]: I0209 09:48:41.600205 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-bpf-maps\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.600445 kubelet[2727]: I0209 09:48:41.600425 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-lib-modules\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.600652 kubelet[2727]: I0209 09:48:41.600633 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-hubble-tls\") pod \"cilium-vn4dp\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") " pod="kube-system/cilium-vn4dp"
Feb 9 09:48:41.669162 sshd[4650]: Accepted publickey for core from 139.178.89.65 port 50654 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:48:41.671739 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:41.680459 systemd-logind[1732]: New session 26 of user core.
Feb 9 09:48:41.680995 systemd[1]: Started session-26.scope.
Feb 9 09:48:41.825864 env[1749]: time="2024-02-09T09:48:41.825250430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vn4dp,Uid:86a057ca-b5bb-46dd-9e8a-455708881fc0,Namespace:kube-system,Attempt:0,}"
Feb 9 09:48:41.855822 env[1749]: time="2024-02-09T09:48:41.855682520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:41.856001 env[1749]: time="2024-02-09T09:48:41.855766618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:41.856001 env[1749]: time="2024-02-09T09:48:41.855825840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:41.856446 env[1749]: time="2024-02-09T09:48:41.856387872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d pid=4671 runtime=io.containerd.runc.v2
Feb 9 09:48:41.887962 systemd[1]: Started cri-containerd-ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d.scope.
Feb 9 09:48:41.960016 env[1749]: time="2024-02-09T09:48:41.959955464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vn4dp,Uid:86a057ca-b5bb-46dd-9e8a-455708881fc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d\""
Feb 9 09:48:41.967748 env[1749]: time="2024-02-09T09:48:41.967683184Z" level=info msg="CreateContainer within sandbox \"ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:48:41.997910 env[1749]: time="2024-02-09T09:48:41.997846564Z" level=info msg="CreateContainer within sandbox \"ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\""
Feb 9 09:48:41.999288 env[1749]: time="2024-02-09T09:48:41.999228539Z" level=info msg="StartContainer for \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\""
Feb 9 09:48:42.018538 sshd[4650]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:42.032144 systemd[1]: sshd@25-172.31.20.254:22-139.178.89.65:50654.service: Deactivated successfully.
Feb 9 09:48:42.033489 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 09:48:42.034612 systemd-logind[1732]: Session 26 logged out. Waiting for processes to exit.
Feb 9 09:48:42.036588 systemd-logind[1732]: Removed session 26.
Feb 9 09:48:42.049523 systemd[1]: Started sshd@26-172.31.20.254:22-139.178.89.65:50656.service.
Feb 9 09:48:42.087089 systemd[1]: Started cri-containerd-99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf.scope.
Feb 9 09:48:42.117552 systemd[1]: cri-containerd-99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf.scope: Deactivated successfully.
Feb 9 09:48:42.145147 env[1749]: time="2024-02-09T09:48:42.144801033Z" level=info msg="shim disconnected" id=99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf
Feb 9 09:48:42.145147 env[1749]: time="2024-02-09T09:48:42.144874019Z" level=warning msg="cleaning up after shim disconnected" id=99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf namespace=k8s.io
Feb 9 09:48:42.145147 env[1749]: time="2024-02-09T09:48:42.144894671Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:42.159569 env[1749]: time="2024-02-09T09:48:42.159466663Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4734 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:48:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 09:48:42.160507 env[1749]: time="2024-02-09T09:48:42.160270909Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed"
Feb 9 09:48:42.160870 env[1749]: time="2024-02-09T09:48:42.160737804Z" level=error msg="Failed to pipe stdout of container \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\"" error="reading from a closed fifo"
Feb 9 09:48:42.161028 env[1749]: time="2024-02-09T09:48:42.160878795Z" level=error msg="Failed to pipe stderr of container \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\"" error="reading from a closed fifo"
Feb 9 09:48:42.163189 env[1749]: time="2024-02-09T09:48:42.163084494Z" level=error msg="StartContainer for \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 09:48:42.163953 kubelet[2727]: E0209 09:48:42.163681 2727 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf"
Feb 9 09:48:42.163953 kubelet[2727]: E0209 09:48:42.163842 2727 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 09:48:42.163953 kubelet[2727]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 09:48:42.163953 kubelet[2727]: rm /hostbin/cilium-mount
Feb 9 09:48:42.166797 kubelet[2727]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwh29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vn4dp_kube-system(86a057ca-b5bb-46dd-9e8a-455708881fc0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 09:48:42.167594 kubelet[2727]: E0209 09:48:42.163905 2727 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vn4dp" podUID=86a057ca-b5bb-46dd-9e8a-455708881fc0
Feb 9 09:48:42.239340 kubelet[2727]: I0209 09:48:42.239107 2727 setters.go:548] "Node became not ready" node="ip-172-31-20-254" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:48:42.239037547 +0000 UTC m=+122.366025047 LastTransitionTime:2024-02-09 09:48:42.239037547 +0000 UTC m=+122.366025047 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 09:48:42.251303 sshd[4721]: Accepted publickey for core from 139.178.89.65 port 50656 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M
Feb 9 09:48:42.254217 sshd[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:48:42.265218 systemd-logind[1732]: New session 27 of user core.
Feb 9 09:48:42.267395 systemd[1]: Started session-27.scope.
Feb 9 09:48:42.832104 env[1749]: time="2024-02-09T09:48:42.830791576Z" level=info msg="StopPodSandbox for \"ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d\""
Feb 9 09:48:42.832104 env[1749]: time="2024-02-09T09:48:42.830888731Z" level=info msg="Container to stop \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:42.835759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d-shm.mount: Deactivated successfully.
Feb 9 09:48:42.857411 systemd[1]: cri-containerd-ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d.scope: Deactivated successfully.
Feb 9 09:48:42.904239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d-rootfs.mount: Deactivated successfully.
Feb 9 09:48:42.927484 env[1749]: time="2024-02-09T09:48:42.927334987Z" level=info msg="shim disconnected" id=ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d
Feb 9 09:48:42.927762 env[1749]: time="2024-02-09T09:48:42.927487366Z" level=warning msg="cleaning up after shim disconnected" id=ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d namespace=k8s.io
Feb 9 09:48:42.927762 env[1749]: time="2024-02-09T09:48:42.927510707Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:42.951398 env[1749]: time="2024-02-09T09:48:42.951308172Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4772 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:42.951997 env[1749]: time="2024-02-09T09:48:42.951934443Z" level=info msg="TearDown network for sandbox \"ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d\" successfully"
Feb 9 09:48:42.952106 env[1749]: time="2024-02-09T09:48:42.951988036Z" level=info msg="StopPodSandbox for \"ff4646f7d09686c92e598b0ef0988e00790a4378bea14dad772a4522f7cc626d\" returns successfully"
Feb 9 09:48:43.112159 kubelet[2727]: I0209 09:48:43.111991 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-config-path\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113146 kubelet[2727]: I0209 09:48:43.113100 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-run\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113265 kubelet[2727]: I0209 09:48:43.113165 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-bpf-maps\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113265 kubelet[2727]: I0209 09:48:43.113222 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwh29\" (UniqueName: \"kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-kube-api-access-mwh29\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113265 kubelet[2727]: I0209 09:48:43.113261 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cni-path\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113483 kubelet[2727]: I0209 09:48:43.113311 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-clustermesh-secrets\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113483 kubelet[2727]: I0209 09:48:43.113380 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-ipsec-secrets\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113483 kubelet[2727]: I0209 09:48:43.113425 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-net\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113483 kubelet[2727]: I0209 09:48:43.113464 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-hostproc\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113725 kubelet[2727]: I0209 09:48:43.113505 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-kernel\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113725 kubelet[2727]: I0209 09:48:43.113552 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-hubble-tls\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113725 kubelet[2727]: I0209 09:48:43.113591 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-cgroup\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113725 kubelet[2727]: I0209 09:48:43.113628 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-etc-cni-netd\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113725 kubelet[2727]: I0209 09:48:43.113670 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-lib-modules\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.113725 kubelet[2727]: I0209 09:48:43.113710 2727 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-xtables-lock\") pod \"86a057ca-b5bb-46dd-9e8a-455708881fc0\" (UID: \"86a057ca-b5bb-46dd-9e8a-455708881fc0\") "
Feb 9 09:48:43.114074 kubelet[2727]: I0209 09:48:43.113783 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.114074 kubelet[2727]: I0209 09:48:43.113835 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.114074 kubelet[2727]: I0209 09:48:43.113873 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.114655 kubelet[2727]: I0209 09:48:43.114606 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cni-path" (OuterVolumeSpecName: "cni-path") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.115447 kubelet[2727]: W0209 09:48:43.115388 2727 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/86a057ca-b5bb-46dd-9e8a-455708881fc0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:48:43.124975 systemd[1]: var-lib-kubelet-pods-86a057ca\x2db5bb\x2d46dd\x2d9e8a\x2d455708881fc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwh29.mount: Deactivated successfully.
Feb 9 09:48:43.129729 systemd[1]: var-lib-kubelet-pods-86a057ca\x2db5bb\x2d46dd\x2d9e8a\x2d455708881fc0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 09:48:43.131269 kubelet[2727]: I0209 09:48:43.131101 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:48:43.131446 kubelet[2727]: I0209 09:48:43.131268 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-kube-api-access-mwh29" (OuterVolumeSpecName: "kube-api-access-mwh29") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "kube-api-access-mwh29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:48:43.131669 kubelet[2727]: I0209 09:48:43.131613 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.132007 kubelet[2727]: I0209 09:48:43.131880 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.132192 kubelet[2727]: I0209 09:48:43.131921 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.132534 kubelet[2727]: I0209 09:48:43.132501 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.132712 kubelet[2727]: I0209 09:48:43.132686 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.132896 kubelet[2727]: I0209 09:48:43.132858 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-hostproc" (OuterVolumeSpecName: "hostproc") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:43.133951 kubelet[2727]: I0209 09:48:43.133908 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 09:48:43.138055 kubelet[2727]: I0209 09:48:43.137990 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:48:43.139171 kubelet[2727]: I0209 09:48:43.139118 2727 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86a057ca-b5bb-46dd-9e8a-455708881fc0" (UID: "86a057ca-b5bb-46dd-9e8a-455708881fc0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 09:48:43.214504 kubelet[2727]: I0209 09:48:43.214467 2727 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mwh29\" (UniqueName: \"kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-kube-api-access-mwh29\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.214746 kubelet[2727]: I0209 09:48:43.214724 2727 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cni-path\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.214876 kubelet[2727]: I0209 09:48:43.214855 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-ipsec-secrets\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215014 kubelet[2727]: I0209 09:48:43.214994 2727 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86a057ca-b5bb-46dd-9e8a-455708881fc0-clustermesh-secrets\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215140 kubelet[2727]: I0209 09:48:43.215120 2727 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-hostproc\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215267 kubelet[2727]: I0209 09:48:43.215248 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-net\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215421 kubelet[2727]: I0209 09:48:43.215401 2727 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-host-proc-sys-kernel\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215552 kubelet[2727]: I0209 09:48:43.215532 2727 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86a057ca-b5bb-46dd-9e8a-455708881fc0-hubble-tls\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215674 kubelet[2727]: I0209 09:48:43.215655 2727 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-etc-cni-netd\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215788 kubelet[2727]: I0209 09:48:43.215769 2727 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-lib-modules\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.215912 kubelet[2727]: I0209 09:48:43.215892 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-cgroup\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.216043 kubelet[2727]: I0209 09:48:43.216024 2727 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-xtables-lock\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.216165 kubelet[2727]: I0209 09:48:43.216146 2727 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-bpf-maps\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.216303 kubelet[2727]: I0209 09:48:43.216283 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-config-path\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.216453 kubelet[2727]: I0209 09:48:43.216433 2727 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86a057ca-b5bb-46dd-9e8a-455708881fc0-cilium-run\") on node \"ip-172-31-20-254\" DevicePath \"\""
Feb 9 09:48:43.720868 systemd[1]: var-lib-kubelet-pods-86a057ca\x2db5bb\x2d46dd\x2d9e8a\x2d455708881fc0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 09:48:43.721047 systemd[1]: var-lib-kubelet-pods-86a057ca\x2db5bb\x2d46dd\x2d9e8a\x2d455708881fc0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 09:48:43.835237 kubelet[2727]: I0209 09:48:43.835180 2727 scope.go:115] "RemoveContainer" containerID="99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf"
Feb 9 09:48:43.838332 env[1749]: time="2024-02-09T09:48:43.838248337Z" level=info msg="RemoveContainer for \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\""
Feb 9 09:48:43.843662 env[1749]: time="2024-02-09T09:48:43.843590607Z" level=info msg="RemoveContainer for \"99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf\" returns successfully"
Feb 9 09:48:43.849190 systemd[1]: Removed slice kubepods-burstable-pod86a057ca_b5bb_46dd_9e8a_455708881fc0.slice.
Feb 9 09:48:43.908073 kubelet[2727]: I0209 09:48:43.908002 2727 topology_manager.go:212] "Topology Admit Handler"
Feb 9 09:48:43.908251 kubelet[2727]: E0209 09:48:43.908094 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86a057ca-b5bb-46dd-9e8a-455708881fc0" containerName="mount-cgroup"
Feb 9 09:48:43.908251 kubelet[2727]: I0209 09:48:43.908142 2727 memory_manager.go:346] "RemoveStaleState removing state" podUID="86a057ca-b5bb-46dd-9e8a-455708881fc0" containerName="mount-cgroup"
Feb 9 09:48:43.918290 systemd[1]: Created slice kubepods-burstable-podec8cf998_d6e0_4695_9e84_e75cf69ed16c.slice.
Feb 9 09:48:44.023566 kubelet[2727]: I0209 09:48:44.023428 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-etc-cni-netd\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023566 kubelet[2727]: I0209 09:48:44.023503 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-cni-path\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023566 kubelet[2727]: I0209 09:48:44.023549 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-lib-modules\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023854 kubelet[2727]: I0209 09:48:44.023593 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-hostproc\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023854 kubelet[2727]: I0209 09:48:44.023643 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-cilium-cgroup\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023854 kubelet[2727]: I0209 09:48:44.023689 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-hubble-tls\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023854 kubelet[2727]: I0209 09:48:44.023733 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-cilium-run\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023854 kubelet[2727]: I0209 09:48:44.023784 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-host-proc-sys-net\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.023854 kubelet[2727]: I0209 09:48:44.023830 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-host-proc-sys-kernel\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.024215 kubelet[2727]: I0209 09:48:44.023878 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qncmj\" (UniqueName: \"kubernetes.io/projected/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-kube-api-access-qncmj\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.024215 kubelet[2727]: I0209 09:48:44.023923 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-clustermesh-secrets\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.024215 kubelet[2727]: I0209 09:48:44.023967 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-cilium-config-path\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.024215 kubelet[2727]: I0209 09:48:44.024009 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-bpf-maps\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.024215 kubelet[2727]: I0209 09:48:44.024050 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-xtables-lock\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.024582 kubelet[2727]: I0209 09:48:44.024094 2727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec8cf998-d6e0-4695-9e84-e75cf69ed16c-cilium-ipsec-secrets\") pod \"cilium-gc85w\" (UID: \"ec8cf998-d6e0-4695-9e84-e75cf69ed16c\") " pod="kube-system/cilium-gc85w"
Feb 9 09:48:44.223843 env[1749]: time="2024-02-09T09:48:44.223769471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gc85w,Uid:ec8cf998-d6e0-4695-9e84-e75cf69ed16c,Namespace:kube-system,Attempt:0,}"
Feb 9 09:48:44.246949 env[1749]: time="2024-02-09T09:48:44.246799620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:44.247132 env[1749]: time="2024-02-09T09:48:44.246927507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:44.247132 env[1749]: time="2024-02-09T09:48:44.246955371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:44.247610 env[1749]: time="2024-02-09T09:48:44.247513768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048 pid=4801 runtime=io.containerd.runc.v2
Feb 9 09:48:44.270162 systemd[1]: Started cri-containerd-58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048.scope.
Feb 9 09:48:44.318495 env[1749]: time="2024-02-09T09:48:44.318263684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gc85w,Uid:ec8cf998-d6e0-4695-9e84-e75cf69ed16c,Namespace:kube-system,Attempt:0,} returns sandbox id \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\""
Feb 9 09:48:44.326237 env[1749]: time="2024-02-09T09:48:44.326173906Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:48:44.334000 kubelet[2727]: I0209 09:48:44.333947 2727 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=86a057ca-b5bb-46dd-9e8a-455708881fc0 path="/var/lib/kubelet/pods/86a057ca-b5bb-46dd-9e8a-455708881fc0/volumes"
Feb 9 09:48:44.351602 env[1749]: time="2024-02-09T09:48:44.351517551Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373\""
Feb 9 09:48:44.355608 env[1749]: time="2024-02-09T09:48:44.354726293Z" level=info msg="StartContainer for \"0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373\""
Feb 9 09:48:44.389810 systemd[1]: Started cri-containerd-0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373.scope.
Feb 9 09:48:44.459427 env[1749]: time="2024-02-09T09:48:44.459330318Z" level=info msg="StartContainer for \"0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373\" returns successfully"
Feb 9 09:48:44.476636 systemd[1]: cri-containerd-0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373.scope: Deactivated successfully.
Feb 9 09:48:44.538515 env[1749]: time="2024-02-09T09:48:44.533249495Z" level=info msg="shim disconnected" id=0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373
Feb 9 09:48:44.538515 env[1749]: time="2024-02-09T09:48:44.533323272Z" level=warning msg="cleaning up after shim disconnected" id=0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373 namespace=k8s.io
Feb 9 09:48:44.538515 env[1749]: time="2024-02-09T09:48:44.533378654Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:44.565623 env[1749]: time="2024-02-09T09:48:44.565563440Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4884 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:44.845591 env[1749]: time="2024-02-09T09:48:44.845530238Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 09:48:44.879719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163799811.mount: Deactivated successfully.
Feb 9 09:48:44.895999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1936017251.mount: Deactivated successfully.
Feb 9 09:48:44.907750 env[1749]: time="2024-02-09T09:48:44.907515837Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0\""
Feb 9 09:48:44.910537 env[1749]: time="2024-02-09T09:48:44.909035288Z" level=info msg="StartContainer for \"bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0\""
Feb 9 09:48:44.950622 systemd[1]: Started cri-containerd-bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0.scope.
Feb 9 09:48:45.007694 env[1749]: time="2024-02-09T09:48:45.007629803Z" level=info msg="StartContainer for \"bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0\" returns successfully"
Feb 9 09:48:45.021318 systemd[1]: cri-containerd-bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0.scope: Deactivated successfully.
Feb 9 09:48:45.069274 env[1749]: time="2024-02-09T09:48:45.069210818Z" level=info msg="shim disconnected" id=bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0
Feb 9 09:48:45.069710 env[1749]: time="2024-02-09T09:48:45.069675937Z" level=warning msg="cleaning up after shim disconnected" id=bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0 namespace=k8s.io
Feb 9 09:48:45.069833 env[1749]: time="2024-02-09T09:48:45.069805900Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:45.084642 env[1749]: time="2024-02-09T09:48:45.084583448Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4945 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:45.253174 kubelet[2727]: W0209 09:48:45.253090 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a057ca_b5bb_46dd_9e8a_455708881fc0.slice/cri-containerd-99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf.scope WatchSource:0}: container "99ce8356ae7c8fa301d21e33e54a1fe6106b7471d4572327e73e62fd203ed7bf" in namespace "k8s.io": not found
Feb 9 09:48:45.629411 kubelet[2727]: E0209 09:48:45.629360 2727 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 09:48:45.849664 env[1749]: time="2024-02-09T09:48:45.849606439Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 09:48:45.876173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312797120.mount: Deactivated successfully.
Feb 9 09:48:45.894131 env[1749]: time="2024-02-09T09:48:45.893967225Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8\""
Feb 9 09:48:45.895507 env[1749]: time="2024-02-09T09:48:45.895434307Z" level=info msg="StartContainer for \"76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8\""
Feb 9 09:48:45.934451 systemd[1]: Started cri-containerd-76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8.scope.
Feb 9 09:48:46.024794 systemd[1]: cri-containerd-76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8.scope: Deactivated successfully.
Feb 9 09:48:46.028773 env[1749]: time="2024-02-09T09:48:46.028677939Z" level=info msg="StartContainer for \"76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8\" returns successfully"
Feb 9 09:48:46.075205 env[1749]: time="2024-02-09T09:48:46.075142257Z" level=info msg="shim disconnected" id=76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8
Feb 9 09:48:46.075718 env[1749]: time="2024-02-09T09:48:46.075680278Z" level=warning msg="cleaning up after shim disconnected" id=76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8 namespace=k8s.io
Feb 9 09:48:46.075847 env[1749]: time="2024-02-09T09:48:46.075819613Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:46.092165 env[1749]: time="2024-02-09T09:48:46.092108874Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5003 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:46.721270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8-rootfs.mount: Deactivated successfully.
Feb 9 09:48:46.863595 env[1749]: time="2024-02-09T09:48:46.863490312Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 09:48:46.889405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791460052.mount: Deactivated successfully.
Feb 9 09:48:46.900624 env[1749]: time="2024-02-09T09:48:46.900559901Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c\""
Feb 9 09:48:46.902102 env[1749]: time="2024-02-09T09:48:46.902049220Z" level=info msg="StartContainer for \"be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c\""
Feb 9 09:48:46.937829 systemd[1]: Started cri-containerd-be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c.scope.
Feb 9 09:48:47.019282 systemd[1]: cri-containerd-be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c.scope: Deactivated successfully.
Feb 9 09:48:47.023508 env[1749]: time="2024-02-09T09:48:47.023334574Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8cf998_d6e0_4695_9e84_e75cf69ed16c.slice/cri-containerd-be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c.scope/memory.events\": no such file or directory"
Feb 9 09:48:47.025682 env[1749]: time="2024-02-09T09:48:47.025621575Z" level=info msg="StartContainer for \"be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c\" returns successfully"
Feb 9 09:48:47.072946 env[1749]: time="2024-02-09T09:48:47.072875720Z" level=info msg="shim disconnected" id=be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c
Feb 9 09:48:47.073259 env[1749]: time="2024-02-09T09:48:47.072944902Z" level=warning msg="cleaning up after shim disconnected" id=be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c namespace=k8s.io
Feb 9 09:48:47.073259 env[1749]: time="2024-02-09T09:48:47.072968530Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:47.089229 env[1749]: time="2024-02-09T09:48:47.089163206Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5060 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:47.721329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c-rootfs.mount: Deactivated successfully.
Feb 9 09:48:47.863121 env[1749]: time="2024-02-09T09:48:47.863056546Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:48:47.903715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750941060.mount: Deactivated successfully.
Feb 9 09:48:47.907526 env[1749]: time="2024-02-09T09:48:47.907462532Z" level=info msg="CreateContainer within sandbox \"58faec47c994ad5bb3b7e6ebfd15b20d7d454286c7efb8b74a063e93100d3048\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36\""
Feb 9 09:48:47.910537 env[1749]: time="2024-02-09T09:48:47.909070606Z" level=info msg="StartContainer for \"14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36\""
Feb 9 09:48:47.947857 systemd[1]: Started cri-containerd-14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36.scope.
Feb 9 09:48:48.035643 env[1749]: time="2024-02-09T09:48:48.035500935Z" level=info msg="StartContainer for \"14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36\" returns successfully"
Feb 9 09:48:48.368852 kubelet[2727]: W0209 09:48:48.368640 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8cf998_d6e0_4695_9e84_e75cf69ed16c.slice/cri-containerd-0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373.scope WatchSource:0}: task 0a5d9b3661ac4cf32cda88c194fab7710bd3c97e2ed59863c7b6354cb47ab373 not found: not found
Feb 9 09:48:48.890642 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 09:48:48.914131 kubelet[2727]: I0209 09:48:48.914085 2727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gc85w" podStartSLOduration=5.914029165 podCreationTimestamp="2024-02-09 09:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:48.911015942 +0000 UTC m=+129.038003454" watchObservedRunningTime="2024-02-09 09:48:48.914029165 +0000 UTC m=+129.041016665"
Feb 9 09:48:50.827831 systemd[1]: run-containerd-runc-k8s.io-14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36-runc.YJ6sGg.mount: Deactivated successfully.
Feb 9 09:48:51.476830 kubelet[2727]: W0209 09:48:51.476766 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8cf998_d6e0_4695_9e84_e75cf69ed16c.slice/cri-containerd-bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0.scope WatchSource:0}: task bf1e9726cacc3cb6494cfd262157b9066990e99d3364c63461804753c90b3aa0 not found: not found
Feb 9 09:48:52.783131 (udev-worker)[5594]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:52.784560 (udev-worker)[5595]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:52.823743 systemd-networkd[1548]: lxc_health: Link UP
Feb 9 09:48:52.835405 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:48:52.835286 systemd-networkd[1548]: lxc_health: Gained carrier
Feb 9 09:48:53.140510 systemd[1]: run-containerd-runc-k8s.io-14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36-runc.KsLEw6.mount: Deactivated successfully.
Feb 9 09:48:53.982174 systemd-networkd[1548]: lxc_health: Gained IPv6LL
Feb 9 09:48:54.589089 kubelet[2727]: W0209 09:48:54.589036 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8cf998_d6e0_4695_9e84_e75cf69ed16c.slice/cri-containerd-76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8.scope WatchSource:0}: task 76732e74c624645ba0c2ced3bd185594cb3e2ee97f9d9c516156e47103cb26f8 not found: not found
Feb 9 09:48:55.445125 systemd[1]: run-containerd-runc-k8s.io-14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36-runc.2zEsqY.mount: Deactivated successfully.
Feb 9 09:48:57.711162 kubelet[2727]: W0209 09:48:57.711112 2727 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8cf998_d6e0_4695_9e84_e75cf69ed16c.slice/cri-containerd-be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c.scope WatchSource:0}: task be4a04b299a85bbf5fca8e368829f4daf7e8328ebf840566a2713291b686153c not found: not found
Feb 9 09:48:57.793690 systemd[1]: run-containerd-runc-k8s.io-14dbb842ef69bf7681ee8ea3ba393ff86c139a5b7c0a822e21a8abc4a2b92b36-runc.OUUNiw.mount: Deactivated successfully.
Feb 9 09:48:58.000026 sshd[4721]: pam_unix(sshd:session): session closed for user core
Feb 9 09:48:58.005142 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 09:48:58.006741 systemd-logind[1732]: Session 27 logged out. Waiting for processes to exit.
Feb 9 09:48:58.007002 systemd[1]: sshd@26-172.31.20.254:22-139.178.89.65:50656.service: Deactivated successfully.
Feb 9 09:48:58.009767 systemd-logind[1732]: Removed session 27.
Feb 9 09:49:13.029815 systemd[1]: cri-containerd-3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0.scope: Deactivated successfully.
Feb 9 09:49:13.030425 systemd[1]: cri-containerd-3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0.scope: Consumed 5.328s CPU time.
Feb 9 09:49:13.067420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0-rootfs.mount: Deactivated successfully.
Feb 9 09:49:13.088696 env[1749]: time="2024-02-09T09:49:13.088630950Z" level=info msg="shim disconnected" id=3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0
Feb 9 09:49:13.089453 env[1749]: time="2024-02-09T09:49:13.089410369Z" level=warning msg="cleaning up after shim disconnected" id=3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0 namespace=k8s.io
Feb 9 09:49:13.089537 env[1749]: time="2024-02-09T09:49:13.089450018Z" level=info msg="cleaning up dead shim"
Feb 9 09:49:13.103264 env[1749]: time="2024-02-09T09:49:13.103183817Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:49:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5713 runtime=io.containerd.runc.v2\n"
Feb 9 09:49:13.650146 kubelet[2727]: E0209 09:49:13.650006 2727 request.go:1092] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Feb 9 09:49:13.650146 kubelet[2727]: E0209 09:49:13.650096 2727 controller.go:193] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Feb 9 09:49:13.933553 kubelet[2727]: I0209 09:49:13.933059 2727 scope.go:115] "RemoveContainer" containerID="3f4c62c8112ea3d4ce362c371c41d03db2d7aea6d9b09f44af762110a77527e0"
Feb 9 09:49:13.937501 env[1749]: time="2024-02-09T09:49:13.937432756Z" level=info msg="CreateContainer within sandbox \"17e31fd1b8f031027a508c8b45d79c9908a1e6721b36b278bcae6a49ac0534bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 09:49:13.966084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552910881.mount: Deactivated successfully.
Feb 9 09:49:13.973825 env[1749]: time="2024-02-09T09:49:13.973690185Z" level=info msg="CreateContainer within sandbox \"17e31fd1b8f031027a508c8b45d79c9908a1e6721b36b278bcae6a49ac0534bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"89159b7814ded916220be20a2c9cf86383da45db5a7dc5938a90595aa4afd740\""
Feb 9 09:49:13.974832 env[1749]: time="2024-02-09T09:49:13.974787156Z" level=info msg="StartContainer for \"89159b7814ded916220be20a2c9cf86383da45db5a7dc5938a90595aa4afd740\""
Feb 9 09:49:14.013989 systemd[1]: Started cri-containerd-89159b7814ded916220be20a2c9cf86383da45db5a7dc5938a90595aa4afd740.scope.
Feb 9 09:49:14.099753 env[1749]: time="2024-02-09T09:49:14.099683885Z" level=info msg="StartContainer for \"89159b7814ded916220be20a2c9cf86383da45db5a7dc5938a90595aa4afd740\" returns successfully"
Feb 9 09:49:17.509987 systemd[1]: cri-containerd-477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c.scope: Deactivated successfully.
Feb 9 09:49:17.510585 systemd[1]: cri-containerd-477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c.scope: Consumed 5.554s CPU time.
Feb 9 09:49:17.549493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c-rootfs.mount: Deactivated successfully.
Feb 9 09:49:17.565831 env[1749]: time="2024-02-09T09:49:17.565676653Z" level=info msg="shim disconnected" id=477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c
Feb 9 09:49:17.566530 env[1749]: time="2024-02-09T09:49:17.565835537Z" level=warning msg="cleaning up after shim disconnected" id=477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c namespace=k8s.io
Feb 9 09:49:17.566530 env[1749]: time="2024-02-09T09:49:17.565859922Z" level=info msg="cleaning up dead shim"
Feb 9 09:49:17.580807 env[1749]: time="2024-02-09T09:49:17.580745630Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:49:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5771 runtime=io.containerd.runc.v2\n"
Feb 9 09:49:17.948825 kubelet[2727]: I0209 09:49:17.948791 2727 scope.go:115] "RemoveContainer" containerID="477ea3cc44fd7fb00dd029606abeb140ebcb4ae30d94e31e2e75ec1ab328192c"
Feb 9 09:49:17.952487 env[1749]: time="2024-02-09T09:49:17.952427442Z" level=info msg="CreateContainer within sandbox \"da4763ef746f4c7bedfd17663bf0c3eef473a889d36ab448a25cea229f7e0d28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 09:49:17.972899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23181447.mount: Deactivated successfully.
Feb 9 09:49:17.989527 env[1749]: time="2024-02-09T09:49:17.989462891Z" level=info msg="CreateContainer within sandbox \"da4763ef746f4c7bedfd17663bf0c3eef473a889d36ab448a25cea229f7e0d28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5d92aedcef8405fd5a6d5fbda5caf25567099da67566f5cec34865285f7567f8\""
Feb 9 09:49:17.990684 env[1749]: time="2024-02-09T09:49:17.990621676Z" level=info msg="StartContainer for \"5d92aedcef8405fd5a6d5fbda5caf25567099da67566f5cec34865285f7567f8\""
Feb 9 09:49:18.025274 systemd[1]: Started cri-containerd-5d92aedcef8405fd5a6d5fbda5caf25567099da67566f5cec34865285f7567f8.scope.
Feb 9 09:49:18.111149 env[1749]: time="2024-02-09T09:49:18.110999097Z" level=info msg="StartContainer for \"5d92aedcef8405fd5a6d5fbda5caf25567099da67566f5cec34865285f7567f8\" returns successfully"