Feb 9 19:16:29.950362 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 19:16:29.950399 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 19:16:29.950421 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:16:29.950437 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 19:16:29.950450 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:16:29.950464 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 19:16:29.950479 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 19:16:29.950493 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 19:16:29.950507 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 19:16:29.950520 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 19:16:29.950539 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 19:16:29.950574 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 19:16:29.950590 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 19:16:29.950604 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 19:16:29.950621 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 19:16:29.950642 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 19:16:29.950657 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 19:16:29.950672 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 19:16:29.950686 kernel: printk: bootconsole [uart0] enabled
Feb 9 19:16:29.950700 kernel: NUMA: Failed to initialise from firmware
Feb 9 19:16:29.950715 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:16:29.950730 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 19:16:29.950744 kernel: Zone ranges:
Feb 9 19:16:29.950759 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 19:16:29.950773 kernel: DMA32 empty
Feb 9 19:16:29.950787 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 19:16:29.950806 kernel: Movable zone start for each node
Feb 9 19:16:29.950844 kernel: Early memory node ranges
Feb 9 19:16:29.950861 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 19:16:29.950875 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 19:16:29.950889 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 19:16:29.950904 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 19:16:29.950918 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 19:16:29.950933 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 19:16:29.950948 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:16:29.950962 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 19:16:29.950977 kernel: psci: probing for conduit method from ACPI.
Feb 9 19:16:29.950991 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 19:16:29.951011 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 19:16:29.951026 kernel: psci: Trusted OS migration not required
Feb 9 19:16:29.951047 kernel: psci: SMC Calling Convention v1.1
Feb 9 19:16:29.951063 kernel: ACPI: SRAT not present
Feb 9 19:16:29.951078 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 19:16:29.951098 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 19:16:29.951113 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 19:16:29.951128 kernel: Detected PIPT I-cache on CPU0
Feb 9 19:16:29.951144 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 19:16:29.951158 kernel: CPU features: detected: Spectre-v2
Feb 9 19:16:29.951174 kernel: CPU features: detected: Spectre-v3a
Feb 9 19:16:29.951189 kernel: CPU features: detected: Spectre-BHB
Feb 9 19:16:29.951204 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 19:16:29.951219 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 19:16:29.951234 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 19:16:29.951249 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 19:16:29.951268 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 19:16:29.951284 kernel: Policy zone: Normal
Feb 9 19:16:29.951301 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:16:29.951318 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:16:29.951333 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:16:29.951349 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:16:29.951364 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:16:29.951379 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 19:16:29.951396 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 19:16:29.951412 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:16:29.951431 kernel: trace event string verifier disabled
Feb 9 19:16:29.951446 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 19:16:29.951462 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:16:29.951478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:16:29.951494 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 19:16:29.951509 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:16:29.951525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:16:29.951540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:16:29.951556 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 19:16:29.951571 kernel: GICv3: 96 SPIs implemented
Feb 9 19:16:29.951586 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 19:16:29.951601 kernel: GICv3: Distributor has no Range Selector support
Feb 9 19:16:29.951621 kernel: Root IRQ handler: gic_handle_irq
Feb 9 19:16:29.951636 kernel: GICv3: 16 PPIs implemented
Feb 9 19:16:29.951651 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 19:16:29.951666 kernel: ACPI: SRAT not present
Feb 9 19:16:29.951681 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 19:16:29.951696 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 19:16:29.951712 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 19:16:29.951727 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 19:16:29.951742 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 19:16:29.951757 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 19:16:29.951772 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 19:16:29.951792 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 19:16:29.951808 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 19:16:29.951885 kernel: Console: colour dummy device 80x25
Feb 9 19:16:29.951903 kernel: printk: console [tty1] enabled
Feb 9 19:16:29.951919 kernel: ACPI: Core revision 20210730
Feb 9 19:16:29.951935 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 19:16:29.951951 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:16:29.951967 kernel: LSM: Security Framework initializing
Feb 9 19:16:29.951983 kernel: SELinux: Initializing.
Feb 9 19:16:29.952000 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:16:29.952023 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:16:29.952039 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:16:29.952054 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 19:16:29.952070 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 19:16:29.952085 kernel: Remapping and enabling EFI services.
Feb 9 19:16:29.952101 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:16:29.952116 kernel: Detected PIPT I-cache on CPU1
Feb 9 19:16:29.952132 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 19:16:29.952148 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 19:16:29.952168 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 19:16:29.952183 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:16:29.952199 kernel: SMP: Total of 2 processors activated.
Feb 9 19:16:29.952214 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 19:16:29.952230 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 19:16:29.952245 kernel: CPU features: detected: CRC32 instructions
Feb 9 19:16:29.952261 kernel: CPU: All CPU(s) started at EL1
Feb 9 19:16:29.952276 kernel: alternatives: patching kernel code
Feb 9 19:16:29.952292 kernel: devtmpfs: initialized
Feb 9 19:16:29.952311 kernel: KASLR disabled due to lack of seed
Feb 9 19:16:29.952328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:16:29.952344 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:16:29.952372 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:16:29.952391 kernel: SMBIOS 3.0.0 present.
Feb 9 19:16:29.952407 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 19:16:29.952424 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:16:29.952440 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 19:16:29.952456 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 19:16:29.952473 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 19:16:29.952489 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:16:29.952506 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1
Feb 9 19:16:29.952526 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:16:29.952542 kernel: cpuidle: using governor menu
Feb 9 19:16:29.952559 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 19:16:29.952575 kernel: ASID allocator initialised with 32768 entries
Feb 9 19:16:29.952591 kernel: ACPI: bus type PCI registered
Feb 9 19:16:29.952611 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:16:29.952628 kernel: Serial: AMBA PL011 UART driver
Feb 9 19:16:29.952644 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:16:29.952660 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 19:16:29.952676 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:16:29.952693 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 19:16:29.952709 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:16:29.952725 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 19:16:29.952741 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:16:29.952761 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:16:29.952777 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:16:29.952794 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:16:29.952810 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:16:29.954916 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:16:29.954946 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:16:29.954964 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:16:29.954981 kernel: ACPI: Interpreter enabled
Feb 9 19:16:29.954997 kernel: ACPI: Using GIC for interrupt routing
Feb 9 19:16:29.955023 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 19:16:29.955040 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 19:16:29.955375 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:16:29.955579 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 19:16:29.955776 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 19:16:29.956035 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 19:16:29.956236 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 19:16:29.956266 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 19:16:29.956283 kernel: acpiphp: Slot [1] registered
Feb 9 19:16:29.956300 kernel: acpiphp: Slot [2] registered
Feb 9 19:16:29.956316 kernel: acpiphp: Slot [3] registered
Feb 9 19:16:29.956333 kernel: acpiphp: Slot [4] registered
Feb 9 19:16:29.956349 kernel: acpiphp: Slot [5] registered
Feb 9 19:16:29.956365 kernel: acpiphp: Slot [6] registered
Feb 9 19:16:29.956381 kernel: acpiphp: Slot [7] registered
Feb 9 19:16:29.956397 kernel: acpiphp: Slot [8] registered
Feb 9 19:16:29.956417 kernel: acpiphp: Slot [9] registered
Feb 9 19:16:29.956434 kernel: acpiphp: Slot [10] registered
Feb 9 19:16:29.956450 kernel: acpiphp: Slot [11] registered
Feb 9 19:16:29.956466 kernel: acpiphp: Slot [12] registered
Feb 9 19:16:29.956482 kernel: acpiphp: Slot [13] registered
Feb 9 19:16:29.956498 kernel: acpiphp: Slot [14] registered
Feb 9 19:16:29.956515 kernel: acpiphp: Slot [15] registered
Feb 9 19:16:29.956532 kernel: acpiphp: Slot [16] registered
Feb 9 19:16:29.956548 kernel: acpiphp: Slot [17] registered
Feb 9 19:16:29.956564 kernel: acpiphp: Slot [18] registered
Feb 9 19:16:29.956584 kernel: acpiphp: Slot [19] registered
Feb 9 19:16:29.956601 kernel: acpiphp: Slot [20] registered
Feb 9 19:16:29.956617 kernel: acpiphp: Slot [21] registered
Feb 9 19:16:29.956633 kernel: acpiphp: Slot [22] registered
Feb 9 19:16:29.956650 kernel: acpiphp: Slot [23] registered
Feb 9 19:16:29.956666 kernel: acpiphp: Slot [24] registered
Feb 9 19:16:29.956684 kernel: acpiphp: Slot [25] registered
Feb 9 19:16:29.956700 kernel: acpiphp: Slot [26] registered
Feb 9 19:16:29.956717 kernel: acpiphp: Slot [27] registered
Feb 9 19:16:29.956738 kernel: acpiphp: Slot [28] registered
Feb 9 19:16:29.956754 kernel: acpiphp: Slot [29] registered
Feb 9 19:16:29.956770 kernel: acpiphp: Slot [30] registered
Feb 9 19:16:29.956786 kernel: acpiphp: Slot [31] registered
Feb 9 19:16:29.956802 kernel: PCI host bridge to bus 0000:00
Feb 9 19:16:29.958919 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 19:16:29.959115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 19:16:29.959293 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:16:29.959477 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 19:16:29.959704 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 19:16:29.959943 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 19:16:29.960148 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 19:16:29.960359 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 19:16:29.960557 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 19:16:29.960761 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:16:29.960996 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 19:16:29.961202 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 19:16:29.961406 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 19:16:29.961607 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 19:16:29.961813 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:16:29.966130 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 19:16:29.966347 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 19:16:29.966573 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 19:16:29.966788 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 19:16:29.967033 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 19:16:29.967219 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 19:16:29.967404 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 19:16:29.967588 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:16:29.967617 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 19:16:29.967635 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 19:16:29.967652 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 19:16:29.967668 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 19:16:29.967685 kernel: iommu: Default domain type: Translated
Feb 9 19:16:29.967701 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 19:16:29.967717 kernel: vgaarb: loaded
Feb 9 19:16:29.967734 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:16:29.967750 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 19:16:29.967770 kernel: PTP clock support registered
Feb 9 19:16:29.967787 kernel: Registered efivars operations
Feb 9 19:16:29.967803 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 19:16:29.967837 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:16:29.967856 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:16:29.967873 kernel: pnp: PnP ACPI init
Feb 9 19:16:29.968096 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 19:16:29.968121 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 19:16:29.968138 kernel: NET: Registered PF_INET protocol family
Feb 9 19:16:29.968160 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:16:29.968177 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:16:29.968194 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:16:29.968210 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:16:29.968227 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:16:29.968244 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:16:29.968260 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:16:29.968276 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:16:29.968293 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:16:29.968313 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:16:29.968330 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 19:16:29.968347 kernel: kvm [1]: HYP mode not available
Feb 9 19:16:29.968363 kernel: Initialise system trusted keyrings
Feb 9 19:16:29.968380 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:16:29.968396 kernel: Key type asymmetric registered
Feb 9 19:16:29.968412 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:16:29.968428 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:16:29.968444 kernel: io scheduler mq-deadline registered
Feb 9 19:16:29.968465 kernel: io scheduler kyber registered
Feb 9 19:16:29.968481 kernel: io scheduler bfq registered
Feb 9 19:16:29.968689 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 19:16:29.968715 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 19:16:29.968732 kernel: ACPI: button: Power Button [PWRB]
Feb 9 19:16:29.968748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:16:29.968765 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 19:16:29.968988 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 19:16:29.969017 kernel: printk: console [ttyS0] disabled
Feb 9 19:16:29.969035 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 19:16:29.969051 kernel: printk: console [ttyS0] enabled
Feb 9 19:16:29.969067 kernel: printk: bootconsole [uart0] disabled
Feb 9 19:16:29.969083 kernel: thunder_xcv, ver 1.0
Feb 9 19:16:29.969099 kernel: thunder_bgx, ver 1.0
Feb 9 19:16:29.969115 kernel: nicpf, ver 1.0
Feb 9 19:16:29.969131 kernel: nicvf, ver 1.0
Feb 9 19:16:29.969339 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 19:16:29.969537 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T19:16:29 UTC (1707506189)
Feb 9 19:16:29.969560 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:16:29.969577 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:16:29.969593 kernel: Segment Routing with IPv6
Feb 9 19:16:29.969610 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:16:29.969626 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:16:29.969642 kernel: Key type dns_resolver registered
Feb 9 19:16:29.969658 kernel: registered taskstats version 1
Feb 9 19:16:29.969679 kernel: Loading compiled-in X.509 certificates
Feb 9 19:16:29.969696 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 19:16:29.969712 kernel: Key type .fscrypt registered
Feb 9 19:16:29.969728 kernel: Key type fscrypt-provisioning registered
Feb 9 19:16:29.969744 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:16:29.969760 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:16:29.969776 kernel: ima: No architecture policies found
Feb 9 19:16:29.969793 kernel: Freeing unused kernel memory: 34688K
Feb 9 19:16:29.969809 kernel: Run /init as init process
Feb 9 19:16:29.969848 kernel: with arguments:
Feb 9 19:16:29.969866 kernel: /init
Feb 9 19:16:29.969882 kernel: with environment:
Feb 9 19:16:29.969898 kernel: HOME=/
Feb 9 19:16:29.969914 kernel: TERM=linux
Feb 9 19:16:29.969930 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:16:29.969951 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:16:29.969971 systemd[1]: Detected virtualization amazon.
Feb 9 19:16:29.969994 systemd[1]: Detected architecture arm64.
Feb 9 19:16:29.970011 systemd[1]: Running in initrd.
Feb 9 19:16:29.970029 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:16:29.970046 systemd[1]: Hostname set to <localhost>.
Feb 9 19:16:29.970064 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:16:29.970082 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:16:29.970099 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:16:29.970116 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:16:29.970138 systemd[1]: Reached target paths.target.
Feb 9 19:16:29.970155 systemd[1]: Reached target slices.target.
Feb 9 19:16:29.970173 systemd[1]: Reached target swap.target.
Feb 9 19:16:29.970190 systemd[1]: Reached target timers.target.
Feb 9 19:16:29.970208 systemd[1]: Listening on iscsid.socket.
Feb 9 19:16:29.970226 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:16:29.970243 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:16:29.970261 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:16:29.970283 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:16:29.970301 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:16:29.970318 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:16:29.970336 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:16:29.970353 systemd[1]: Reached target sockets.target.
Feb 9 19:16:29.970371 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:16:29.970388 systemd[1]: Finished network-cleanup.service.
Feb 9 19:16:29.970406 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:16:29.970423 systemd[1]: Starting systemd-journald.service...
Feb 9 19:16:29.970445 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:16:29.970463 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:16:29.970480 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:16:29.970498 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:16:29.970520 systemd-journald[308]: Journal started
Feb 9 19:16:29.981940 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2c947528b8a15f1af886c17e9503f1) is 8.0M, max 75.4M, 67.4M free.
Feb 9 19:16:29.982020 kernel: audit: type=1130 audit(1707506189.968:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:29.982057 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:16:29.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:29.953344 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 19:16:29.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.008837 kernel: audit: type=1130 audit(1707506189.993:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.008884 systemd[1]: Started systemd-journald.service.
Feb 9 19:16:30.008912 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:16:30.012895 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 19:16:30.016859 kernel: Bridge firewalling registered
Feb 9 19:16:30.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.022631 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:16:30.036932 kernel: audit: type=1130 audit(1707506190.020:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.051854 kernel: audit: type=1130 audit(1707506190.035:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.051920 kernel: SCSI subsystem initialized
Feb 9 19:16:30.053143 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:16:30.067632 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:16:30.091658 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:16:30.091732 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:16:30.094428 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 19:16:30.094456 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:16:30.094516 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:16:30.101842 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:16:30.103808 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:16:30.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.123285 kernel: audit: type=1130 audit(1707506190.110:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.119367 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:16:30.138081 kernel: audit: type=1130 audit(1707506190.126:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.123156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:16:30.147707 dracut-cmdline[326]: dracut-dracut-053
Feb 9 19:16:30.151791 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 19:16:30.156660 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:16:30.154043 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:16:30.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.191681 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:16:30.205410 kernel: audit: type=1130 audit(1707506190.189:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.223876 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:16:30.239950 kernel: audit: type=1130 audit(1707506190.226:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.296851 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:16:30.310921 kernel: iscsi: registered transport (tcp)
Feb 9 19:16:30.334871 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:16:30.334941 kernel: QLogic iSCSI HBA Driver
Feb 9 19:16:30.554679 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 19:16:30.558633 kernel: random: crng init done
Feb 9 19:16:30.560172 systemd[1]: Started systemd-resolved.service.
Feb 9 19:16:30.573412 kernel: audit: type=1130 audit(1707506190.561:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.562407 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:16:30.585455 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:16:30.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:30.589978 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:16:30.657857 kernel: raid6: neonx8 gen() 6224 MB/s
Feb 9 19:16:30.673842 kernel: raid6: neonx8 xor() 4586 MB/s
Feb 9 19:16:30.691847 kernel: raid6: neonx4 gen() 6540 MB/s
Feb 9 19:16:30.709847 kernel: raid6: neonx4 xor() 4679 MB/s
Feb 9 19:16:30.727846 kernel: raid6: neonx2 gen() 5786 MB/s
Feb 9 19:16:30.745846 kernel: raid6: neonx2 xor() 4371 MB/s
Feb 9 19:16:30.763848 kernel: raid6: neonx1 gen() 4470 MB/s
Feb 9 19:16:30.781847 kernel: raid6: neonx1 xor() 3587 MB/s
Feb 9 19:16:30.799846 kernel: raid6: int64x8 gen() 3440 MB/s
Feb 9 19:16:30.817846 kernel: raid6: int64x8 xor() 2055 MB/s
Feb 9 19:16:30.835846 kernel: raid6: int64x4 gen() 3842 MB/s
Feb 9 19:16:30.853846 kernel: raid6: int64x4 xor() 2170 MB/s
Feb 9 19:16:30.871846 kernel: raid6: int64x2 gen() 3610 MB/s
Feb 9 19:16:30.889846 kernel: raid6: int64x2 xor() 1929 MB/s
Feb 9 19:16:30.907846 kernel: raid6: int64x1 gen() 2759 MB/s
Feb 9 19:16:30.927310 kernel: raid6: int64x1 xor() 1406 MB/s
Feb 9 19:16:30.927340 kernel: raid6: using algorithm neonx4 gen() 6540 MB/s
Feb 9 19:16:30.927364 kernel: raid6: .... xor() 4679 MB/s, rmw enabled
Feb 9 19:16:30.929102 kernel: raid6: using neon recovery algorithm
Feb 9 19:16:30.947859 kernel: xor: measuring software checksum speed
Feb 9 19:16:30.949847 kernel: 8regs : 9333 MB/sec
Feb 9 19:16:30.952847 kernel: 32regs : 11107 MB/sec
Feb 9 19:16:30.956620 kernel: arm64_neon : 9614 MB/sec
Feb 9 19:16:30.956652 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 9 19:16:31.046869 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 19:16:31.063535 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:16:31.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:31.065000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:16:31.065000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:16:31.067600 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:16:31.095613 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 9 19:16:31.106504 systemd[1]: Started systemd-udevd.service.
Feb 9 19:16:31.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:31.116415 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:16:31.146791 dracut-pre-trigger[522]: rd.md=0: removing MD RAID activation
Feb 9 19:16:31.206164 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:16:31.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:31.212698 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:16:31.319489 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:16:31.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:31.439510 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 19:16:31.439574 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 19:16:31.457170 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 19:16:31.457247 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 19:16:31.462338 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 19:16:31.462655 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 19:16:31.472110 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1a:c1:ba:99:21
Feb 9 19:16:31.475854 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 19:16:31.481968 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 19:16:31.482012 kernel: GPT:9289727 != 16777215
Feb 9 19:16:31.482036 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 19:16:31.484106 kernel: GPT:9289727 != 16777215
Feb 9 19:16:31.485371 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 19:16:31.487266 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:16:31.492395 (udev-worker)[554]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:16:31.579855 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (565)
Feb 9 19:16:31.595952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:16:31.646656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:16:31.698810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:16:31.703995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:16:31.726650 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:16:31.742857 systemd[1]: Starting disk-uuid.service...
Feb 9 19:16:31.754658 disk-uuid[667]: Primary Header is updated.
Feb 9 19:16:31.754658 disk-uuid[667]: Secondary Entries is updated.
Feb 9 19:16:31.754658 disk-uuid[667]: Secondary Header is updated.
Feb 9 19:16:31.770736 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:16:31.775859 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:16:31.794866 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:16:32.787764 disk-uuid[668]: The operation has completed successfully.
Feb 9 19:16:32.790848 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:16:32.944563 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:16:32.944763 systemd[1]: Finished disk-uuid.service.
Feb 9 19:16:32.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:32.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:32.971097 systemd[1]: Starting verity-setup.service...
Feb 9 19:16:33.009914 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 19:16:33.091543 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:16:33.097092 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:16:33.101569 systemd[1]: Finished verity-setup.service.
Feb 9 19:16:33.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.186128 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:16:33.184694 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:16:33.186536 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:16:33.191985 systemd[1]: Starting ignition-setup.service...
Feb 9 19:16:33.199469 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:16:33.221211 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:16:33.221277 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:16:33.223913 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:16:33.234878 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:16:33.253170 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:16:33.279027 systemd[1]: Finished ignition-setup.service.
Feb 9 19:16:33.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.282146 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:16:33.349783 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:16:33.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.352000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:16:33.355431 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:16:33.402937 systemd-networkd[1191]: lo: Link UP
Feb 9 19:16:33.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.402950 systemd-networkd[1191]: lo: Gained carrier
Feb 9 19:16:33.403884 systemd-networkd[1191]: Enumeration completed
Feb 9 19:16:33.404324 systemd-networkd[1191]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:16:33.404970 systemd[1]: Started systemd-networkd.service.
Feb 9 19:16:33.406897 systemd[1]: Reached target network.target.
Feb 9 19:16:33.410744 systemd[1]: Starting iscsiuio.service...
Feb 9 19:16:33.417547 systemd-networkd[1191]: eth0: Link UP
Feb 9 19:16:33.417555 systemd-networkd[1191]: eth0: Gained carrier
Feb 9 19:16:33.446939 systemd[1]: Started iscsiuio.service.
Feb 9 19:16:33.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.450157 systemd[1]: Starting iscsid.service...
Feb 9 19:16:33.451810 systemd-networkd[1191]: eth0: DHCPv4 address 172.31.24.80/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 19:16:33.463344 iscsid[1196]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:16:33.463344 iscsid[1196]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:16:33.463344 iscsid[1196]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:16:33.463344 iscsid[1196]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:16:33.463344 iscsid[1196]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:16:33.483412 iscsid[1196]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:16:33.488163 systemd[1]: Started iscsid.service.
Feb 9 19:16:33.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.491133 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:16:33.516231 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:16:33.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.516706 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:16:33.517377 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:16:33.517699 systemd[1]: Reached target remote-fs.target.
Feb 9 19:16:33.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.520318 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:16:33.541694 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:16:33.832757 ignition[1135]: Ignition 2.14.0
Feb 9 19:16:33.832785 ignition[1135]: Stage: fetch-offline
Feb 9 19:16:33.833111 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:33.833171 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:16:33.850346 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:16:33.853070 ignition[1135]: Ignition finished successfully
Feb 9 19:16:33.856319 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:16:33.870645 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:16:33.870685 kernel: audit: type=1130 audit(1707506193.854:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:33.870614 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:16:33.885644 ignition[1215]: Ignition 2.14.0
Feb 9 19:16:33.885670 ignition[1215]: Stage: fetch
Feb 9 19:16:33.885985 ignition[1215]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:33.886049 ignition[1215]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:16:33.899403 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:16:33.901687 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:16:33.919273 ignition[1215]: INFO : PUT result: OK
Feb 9 19:16:33.923191 ignition[1215]: DEBUG : parsed url from cmdline: ""
Feb 9 19:16:33.924982 ignition[1215]: INFO : no config URL provided
Feb 9 19:16:33.924982 ignition[1215]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:16:33.924982 ignition[1215]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 19:16:33.931127 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:16:33.931127 ignition[1215]: INFO : PUT result: OK
Feb 9 19:16:33.931127 ignition[1215]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 19:16:33.937361 ignition[1215]: INFO : GET result: OK
Feb 9 19:16:33.939020 ignition[1215]: DEBUG : parsing config with SHA512: 13a01c1600ba2aefa9be02f4bb431869b2ba16506d069277d2999cb734d3effc97ac3eb080b394ce25b92507741daf26fce2d3bf99b1b3bea55cefd9be534b76
Feb 9 19:16:34.006720 unknown[1215]: fetched base config from "system"
Feb 9 19:16:34.006750 unknown[1215]: fetched base config from "system"
Feb 9 19:16:34.006766 unknown[1215]: fetched user config from "aws"
Feb 9 19:16:34.012846 ignition[1215]: fetch: fetch complete
Feb 9 19:16:34.012873 ignition[1215]: fetch: fetch passed
Feb 9 19:16:34.012967 ignition[1215]: Ignition finished successfully
Feb 9 19:16:34.019564 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:16:34.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.024148 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:16:34.033829 kernel: audit: type=1130 audit(1707506194.020:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.046995 ignition[1221]: Ignition 2.14.0
Feb 9 19:16:34.048599 ignition[1221]: Stage: kargs
Feb 9 19:16:34.050192 ignition[1221]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:34.052467 ignition[1221]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:16:34.062769 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:16:34.065090 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:16:34.067503 ignition[1221]: INFO : PUT result: OK
Feb 9 19:16:34.073263 ignition[1221]: kargs: kargs passed
Feb 9 19:16:34.073368 ignition[1221]: Ignition finished successfully
Feb 9 19:16:34.077725 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:16:34.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.088103 systemd[1]: Starting ignition-disks.service...
Feb 9 19:16:34.092878 kernel: audit: type=1130 audit(1707506194.076:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.102011 ignition[1227]: Ignition 2.14.0
Feb 9 19:16:34.102040 ignition[1227]: Stage: disks
Feb 9 19:16:34.102331 ignition[1227]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:34.102384 ignition[1227]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:16:34.126922 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:16:34.129418 ignition[1227]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:16:34.132844 ignition[1227]: INFO : PUT result: OK
Feb 9 19:16:34.139925 ignition[1227]: disks: disks passed
Feb 9 19:16:34.141528 ignition[1227]: Ignition finished successfully
Feb 9 19:16:34.146996 systemd[1]: Finished ignition-disks.service.
Feb 9 19:16:34.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.156043 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:16:34.160552 kernel: audit: type=1130 audit(1707506194.149:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.160459 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:16:34.162171 systemd[1]: Reached target local-fs.target.
Feb 9 19:16:34.165077 systemd[1]: Reached target sysinit.target.
Feb 9 19:16:34.167879 systemd[1]: Reached target basic.target.
Feb 9 19:16:34.173496 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:16:34.210087 systemd-fsck[1235]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 19:16:34.219440 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:16:34.229934 kernel: audit: type=1130 audit(1707506194.218:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.223285 systemd[1]: Mounting sysroot.mount...
Feb 9 19:16:34.243858 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:16:34.245860 systemd[1]: Mounted sysroot.mount.
Feb 9 19:16:34.248720 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:16:34.268709 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:16:34.275941 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:16:34.276032 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:16:34.276459 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:16:34.282109 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:16:34.303903 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:16:34.305807 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:16:34.325239 initrd-setup-root[1257]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:16:34.336856 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1252)
Feb 9 19:16:34.343057 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:16:34.343119 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:16:34.343146 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:16:34.348117 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:16:34.355866 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:16:34.360475 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:16:34.365792 initrd-setup-root[1291]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:16:34.374138 initrd-setup-root[1299]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:16:34.576949 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:16:34.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.581445 systemd[1]: Starting ignition-mount.service...
Feb 9 19:16:34.591641 kernel: audit: type=1130 audit(1707506194.579:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.591717 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:16:34.604614 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:16:34.604790 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:16:34.639736 ignition[1319]: INFO : Ignition 2.14.0
Feb 9 19:16:34.639736 ignition[1319]: INFO : Stage: mount
Feb 9 19:16:34.639736 ignition[1319]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:34.639736 ignition[1319]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:16:34.645260 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:16:34.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.660902 kernel: audit: type=1130 audit(1707506194.651:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.665939 ignition[1319]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:16:34.668436 ignition[1319]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:16:34.671707 ignition[1319]: INFO : PUT result: OK
Feb 9 19:16:34.676880 ignition[1319]: INFO : mount: mount passed
Feb 9 19:16:34.678921 ignition[1319]: INFO : Ignition finished successfully
Feb 9 19:16:34.681727 systemd[1]: Finished ignition-mount.service.
Feb 9 19:16:34.686595 systemd[1]: Starting ignition-files.service...
Feb 9 19:16:34.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.700325 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:16:34.711239 kernel: audit: type=1130 audit(1707506194.683:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:16:34.719855 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1327)
Feb 9 19:16:34.725275 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:16:34.725309 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:16:34.725334 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:16:34.734855 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:16:34.739187 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:16:34.757970 ignition[1346]: INFO : Ignition 2.14.0
Feb 9 19:16:34.757970 ignition[1346]: INFO : Stage: files
Feb 9 19:16:34.761248 ignition[1346]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:16:34.761248 ignition[1346]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:16:34.777770 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:16:34.780289 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:16:34.783539 ignition[1346]: INFO : PUT result: OK
Feb 9 19:16:34.789729 ignition[1346]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:16:34.793680 ignition[1346]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:16:34.793680 ignition[1346]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:16:34.833810 ignition[1346]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:16:34.836604 ignition[1346]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:16:34.840234 unknown[1346]: wrote ssh authorized keys file for user: core
Feb 9 19:16:34.842703 ignition[1346]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:16:34.846140 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:16:34.849568 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:16:34.852977 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 19:16:34.856702 ignition[1346]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 19:16:34.903139 ignition[1346]: INFO : GET result: OK
Feb 9 19:16:35.002863 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 19:16:35.007108 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 19:16:35.007108 ignition[1346]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 19:16:35.019071 systemd-networkd[1191]: eth0: Gained IPv6LL
Feb 9 19:16:35.446595 ignition[1346]: INFO : GET result: OK
Feb 9 19:16:35.758719 ignition[1346]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 19:16:35.763653 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 19:16:35.763653 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 19:16:35.763653 ignition[1346]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 19:16:36.174005 ignition[1346]: INFO : GET result: OK
Feb 9 19:16:36.555544 ignition[1346]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 19:16:36.560188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 19:16:36.560188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:16:36.560188 ignition[1346]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 19:16:36.684120 ignition[1346]: INFO : GET result: OK
Feb 9 19:16:38.218167 ignition[1346]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 19:16:38.223129 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:16:38.223129 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:16:38.223129 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:16:38.223129 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 19:16:38.236513 ignition[1346]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:16:38.245975 ignition[1346]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1225826832"
Feb 9 19:16:38.248882 ignition[1346]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1225826832": device or resource busy
Feb 9 19:16:38.251988 ignition[1346]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1225826832", trying btrfs: device or resource busy
Feb 9 19:16:38.251988 ignition[1346]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1225826832"
Feb 9 19:16:38.262307 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1351)
Feb 9 19:16:38.266864 ignition[1346]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1225826832"
Feb 9 19:16:38.266864 ignition[1346]: INFO : op(3): [started] unmounting "/mnt/oem1225826832"
Feb 9 19:16:38.266864 ignition[1346]: INFO : op(3): [finished] unmounting "/mnt/oem1225826832"
Feb 9 19:16:38.266864 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished]
writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:16:38.266864 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:16:38.266864 ignition[1346]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 19:16:38.274548 systemd[1]: mnt-oem1225826832.mount: Deactivated successfully. Feb 9 19:16:38.339833 ignition[1346]: INFO : GET result: OK Feb 9 19:16:38.911963 ignition[1346]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 19:16:38.916623 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:16:38.916623 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:16:38.916623 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:16:38.916623 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:16:38.916623 ignition[1346]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 19:16:38.986886 ignition[1346]: INFO : GET result: OK Feb 9 19:16:39.558684 ignition[1346]: DEBUG : file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 19:16:39.563777 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:16:39.563777 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:16:39.563777 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:16:39.563777 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:16:39.563777 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:16:39.563777 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:16:39.563777 ignition[1346]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 19:16:39.877479 ignition[1346]: INFO : GET result: OK Feb 9 19:16:40.005172 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:16:40.017198 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:16:40.017198 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:16:40.017198 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:16:40.017198 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 
19:16:40.017198 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:16:40.017198 ignition[1346]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:16:40.017198 ignition[1346]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem423889097" Feb 9 19:16:40.017198 ignition[1346]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem423889097": device or resource busy Feb 9 19:16:40.017198 ignition[1346]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem423889097", trying btrfs: device or resource busy Feb 9 19:16:40.017198 ignition[1346]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem423889097" Feb 9 19:16:40.102892 ignition[1346]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem423889097" Feb 9 19:16:40.102892 ignition[1346]: INFO : op(6): [started] unmounting "/mnt/oem423889097" Feb 9 19:16:40.102892 ignition[1346]: INFO : op(6): [finished] unmounting "/mnt/oem423889097" Feb 9 19:16:40.102892 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:16:40.102892 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:16:40.102892 ignition[1346]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:16:40.102892 ignition[1346]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3387993849" Feb 9 19:16:40.102892 ignition[1346]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3387993849": device or resource busy Feb 9 19:16:40.102892 ignition[1346]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3387993849", trying btrfs: device or resource busy Feb 9 19:16:40.102892 ignition[1346]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3387993849" Feb 9 19:16:40.102892 ignition[1346]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3387993849" Feb 9 19:16:40.102892 ignition[1346]: INFO : op(9): [started] unmounting "/mnt/oem3387993849" Feb 9 19:16:40.102892 ignition[1346]: INFO : op(9): [finished] unmounting "/mnt/oem3387993849" Feb 9 19:16:40.102892 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:16:40.102892 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:16:40.102892 ignition[1346]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:16:40.102892 ignition[1346]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem784253548" Feb 9 19:16:40.102892 ignition[1346]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem784253548": device or resource busy Feb 9 19:16:40.102892 ignition[1346]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem784253548", trying btrfs: device or resource busy Feb 9 19:16:40.102892 ignition[1346]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem784253548" Feb 9 19:16:40.225126 kernel: audit: type=1130 audit(1707506200.157:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel 
msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.027764 systemd[1]: mnt-oem423889097.mount: Deactivated successfully. Feb 9 19:16:40.228951 ignition[1346]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem784253548" Feb 9 19:16:40.228951 ignition[1346]: INFO : op(c): [started] unmounting "/mnt/oem784253548" Feb 9 19:16:40.228951 ignition[1346]: INFO : op(c): [finished] unmounting "/mnt/oem784253548" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(16): [started] processing unit "amazon-ssm-agent.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(16): op(17): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(16): op(17): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(16): [finished] processing unit "amazon-ssm-agent.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(19): [started] processing unit "prepare-critools.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(19): op(1a): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(19): [finished] processing unit "prepare-critools.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(1b): [started] processing unit "prepare-helm.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(1b): [finished] processing unit "prepare-helm.service" Feb 9 19:16:40.228951 ignition[1346]: INFO : files: op(1d): [started] processing unit "containerd.service" Feb 9 19:16:40.448361 kernel: audit: type=1130 audit(1707506200.227:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:40.451553 kernel: audit: type=1131 audit(1707506200.235:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.451587 kernel: audit: type=1130 audit(1707506200.252:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.451614 kernel: audit: type=1130 audit(1707506200.298:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.451639 kernel: audit: type=1131 audit(1707506200.298:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.451669 kernel: audit: type=1130 audit(1707506200.330:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.453176 kernel: audit: type=1130 audit(1707506200.382:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.453205 kernel: audit: type=1131 audit(1707506200.382:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.453229 kernel: audit: type=1131 audit(1707506200.413:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:40.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.065911 systemd[1]: mnt-oem3387993849.mount: Deactivated successfully. Feb 9 19:16:40.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(1d): op(1e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(1d): op(1e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(1d): [finished] processing unit "containerd.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(1f): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(1f): op(20): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(1f): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(21): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(21): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: 
op(25): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(26): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:16:40.470249 ignition[1346]: INFO : files: op(26): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:16:40.153763 systemd[1]: Finished ignition-files.service. Feb 9 19:16:40.539520 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:16:40.542858 ignition[1346]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:16:40.542858 ignition[1346]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:16:40.542858 ignition[1346]: INFO : files: files passed Feb 9 19:16:40.542858 ignition[1346]: INFO : Ignition finished successfully Feb 9 19:16:40.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.168676 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:16:40.570959 iscsid[1196]: iscsid shutting down. Feb 9 19:16:40.191166 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:16:40.195780 systemd[1]: Starting ignition-quench.service... Feb 9 19:16:40.226924 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:16:40.227117 systemd[1]: Finished ignition-quench.service. Feb 9 19:16:40.250872 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:16:40.254865 systemd[1]: Reached target ignition-complete.target. Feb 9 19:16:40.589708 ignition[1384]: INFO : Ignition 2.14.0 Feb 9 19:16:40.589708 ignition[1384]: INFO : Stage: umount Feb 9 19:16:40.589708 ignition[1384]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:16:40.589708 ignition[1384]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:16:40.270686 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:16:40.299729 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:16:40.299967 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:16:40.301178 systemd[1]: Reached target initrd-fs.target. Feb 9 19:16:40.301312 systemd[1]: Reached target initrd.target. Feb 9 19:16:40.301693 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:16:40.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:16:40.610428 ignition[1384]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:16:40.610428 ignition[1384]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:16:40.304343 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:16:40.326670 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:16:40.333933 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:16:40.631391 ignition[1384]: INFO : PUT result: OK Feb 9 19:16:40.360672 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:16:40.360890 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:16:40.384173 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:16:40.401934 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:16:40.404487 systemd[1]: Stopped target timers.target. Feb 9 19:16:40.409958 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:16:40.410056 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:16:40.415281 systemd[1]: Stopped target initrd.target. Feb 9 19:16:40.426665 systemd[1]: Stopped target basic.target. Feb 9 19:16:40.430401 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:16:40.652739 ignition[1384]: INFO : umount: umount passed Feb 9 19:16:40.652739 ignition[1384]: INFO : Ignition finished successfully Feb 9 19:16:40.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.435327 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:16:40.440476 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:16:40.444455 systemd[1]: Stopped target remote-fs.target. Feb 9 19:16:40.449937 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:16:40.451596 systemd[1]: Stopped target sysinit.target. Feb 9 19:16:40.453204 systemd[1]: Stopped target local-fs.target. Feb 9 19:16:40.454801 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:16:40.456472 systemd[1]: Stopped target swap.target. Feb 9 19:16:40.457952 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:16:40.458046 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:16:40.459707 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:16:40.461386 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:16:40.461467 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:16:40.463201 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:16:40.463286 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:16:40.465341 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:16:40.465418 systemd[1]: Stopped ignition-files.service. Feb 9 19:16:40.468236 systemd[1]: Stopping ignition-mount.service... Feb 9 19:16:40.527205 systemd[1]: Stopping iscsid.service... Feb 9 19:16:40.528462 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:16:40.528562 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:16:40.531484 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:16:40.559041 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Feb 9 19:16:40.559167 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:16:40.562129 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:16:40.562231 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:16:40.564697 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:16:40.566863 systemd[1]: Stopped iscsid.service. Feb 9 19:16:40.573669 systemd[1]: Stopping iscsiuio.service... Feb 9 19:16:40.629898 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:16:40.641094 systemd[1]: Stopped iscsiuio.service. Feb 9 19:16:40.643300 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:16:40.643486 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:16:40.658057 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:16:40.688014 systemd[1]: Stopped ignition-mount.service. Feb 9 19:16:40.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.710254 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:16:40.710360 systemd[1]: Stopped ignition-disks.service. Feb 9 19:16:40.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.713638 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:16:40.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.715065 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:16:40.718255 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:16:40.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.718339 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:16:40.721301 systemd[1]: Stopped target network.target. Feb 9 19:16:40.727614 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:16:40.727716 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:16:40.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.733065 systemd[1]: Stopped target paths.target. Feb 9 19:16:40.733162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:16:40.742888 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:16:40.746184 systemd[1]: Stopped target slices.target. Feb 9 19:16:40.747724 systemd[1]: Stopped target sockets.target. Feb 9 19:16:40.750528 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:16:40.751905 systemd[1]: Closed iscsid.socket. Feb 9 19:16:40.756018 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:16:40.756105 systemd[1]: Closed iscsiuio.socket. Feb 9 19:16:40.759041 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:16:40.760292 systemd[1]: Stopped ignition-setup.service. 
Feb 9 19:16:40.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.763540 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:16:40.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.764919 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:16:40.768757 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:16:40.771699 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:16:40.774884 systemd-networkd[1191]: eth0: DHCPv6 lease lost Feb 9 19:16:40.778770 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:16:40.780967 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:16:40.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.785000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:16:40.786702 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:16:40.786954 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:16:40.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.792507 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:16:40.792607 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:16:40.794000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:16:40.798713 systemd[1]: Stopping network-cleanup.service... Feb 9 19:16:40.804965 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:16:40.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.805096 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:16:40.808398 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:16:40.808481 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:16:40.810309 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:16:40.810389 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:16:40.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.814349 systemd[1]: Stopping systemd-udevd.service... 
Feb 9 19:16:40.826559 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:16:40.826860 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:16:40.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.829779 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:16:40.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.829983 systemd[1]: Stopped network-cleanup.service. Feb 9 19:16:40.832629 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:16:40.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:40.832709 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:16:40.835042 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:16:40.835110 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:16:40.837119 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:16:40.837210 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:16:40.840201 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:16:40.892000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:16:40.892000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:16:40.892000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:16:40.892000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:16:40.892000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:16:40.840283 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:16:40.841905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:16:40.841978 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:16:40.845331 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:16:40.859357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:16:40.859475 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:16:40.863862 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:16:40.864072 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:16:40.934958 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Feb 9 19:16:40.866652 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:16:40.869650 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:16:40.892704 systemd[1]: Switching root. Feb 9 19:16:40.947434 systemd-journald[308]: Journal stopped Feb 9 19:16:46.406273 kernel: SELinux: Class mctp_socket not defined in policy. 
Feb 9 19:16:46.406396 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:16:46.406429 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:16:46.406480 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:16:46.406523 kernel: SELinux: policy capability open_perms=1 Feb 9 19:16:46.406556 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:16:46.406586 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:16:46.408987 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:16:46.413754 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:16:46.413789 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:16:46.413847 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:16:46.413886 systemd[1]: Successfully loaded SELinux policy in 106.430ms. Feb 9 19:16:46.413949 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.231ms. Feb 9 19:16:46.413984 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:16:46.414017 systemd[1]: Detected virtualization amazon. Feb 9 19:16:46.414057 systemd[1]: Detected architecture arm64. Feb 9 19:16:46.414092 systemd[1]: Detected first boot. Feb 9 19:16:46.414124 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:16:46.414155 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:16:46.414192 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:16:46.414227 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:16:46.414272 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:16:46.414305 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:16:46.414345 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:16:46.414378 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:16:46.414411 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:16:46.414443 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:16:46.414497 systemd[1]: Created slice system-getty.slice. Feb 9 19:16:46.414531 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:16:46.414562 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:16:46.414596 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:16:46.414643 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:16:46.414677 systemd[1]: Created slice user.slice. Feb 9 19:16:46.414709 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:16:46.414738 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:16:46.414769 systemd[1]: Set up automount boot.automount. Feb 9 19:16:46.414803 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:16:46.414857 systemd[1]: Reached target integritysetup.target. 
Feb 9 19:16:46.414892 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:16:46.414924 systemd[1]: Reached target remote-fs.target. Feb 9 19:16:46.414957 systemd[1]: Reached target slices.target. Feb 9 19:16:46.414988 systemd[1]: Reached target swap.target. Feb 9 19:16:46.415017 systemd[1]: Reached target torcx.target. Feb 9 19:16:46.415049 systemd[1]: Reached target veritysetup.target. Feb 9 19:16:46.415085 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:16:46.415115 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:16:46.415145 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 9 19:16:46.415176 kernel: audit: type=1400 audit(1707506206.006:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:16:46.415206 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:16:46.415236 kernel: audit: type=1335 audit(1707506206.006:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:16:46.415277 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:16:46.415314 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:16:46.415348 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:16:46.415381 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:16:46.415411 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:16:46.415444 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:16:46.415473 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:16:46.415503 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:16:46.415535 systemd[1]: Mounting media.mount... Feb 9 19:16:46.415564 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:16:46.415596 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:16:46.415628 systemd[1]: Mounting tmp.mount... Feb 9 19:16:46.415662 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:16:46.415694 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:16:46.415723 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:16:46.415752 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:16:46.415781 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:16:46.415810 systemd[1]: Starting modprobe@drm.service... Feb 9 19:16:46.416151 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:16:46.416183 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:16:46.416216 systemd[1]: Starting modprobe@loop.service... Feb 9 19:16:46.416273 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:16:46.416307 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:16:46.416338 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:16:46.416369 systemd[1]: Starting systemd-journald.service... Feb 9 19:16:46.416398 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:16:46.416431 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:16:46.416460 systemd[1]: Starting systemd-remount-fs.service... 
Feb 9 19:16:46.416489 kernel: fuse: init (API version 7.34) Feb 9 19:16:46.416520 kernel: loop: module loaded Feb 9 19:16:46.416555 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:16:46.416588 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:16:46.416621 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:16:46.416650 systemd[1]: Mounted media.mount. Feb 9 19:16:46.416679 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:16:46.416708 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:16:46.416738 systemd[1]: Mounted tmp.mount. Feb 9 19:16:46.416772 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:16:46.416836 kernel: audit: type=1130 audit(1707506206.282:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.416897 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:16:46.416931 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:16:46.416961 kernel: audit: type=1130 audit(1707506206.305:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.416990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:16:46.417025 kernel: audit: type=1131 audit(1707506206.305:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.417057 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:16:46.417087 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:16:46.417118 kernel: audit: type=1130 audit(1707506206.335:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.417147 systemd[1]: Finished modprobe@drm.service. Feb 9 19:16:46.417180 kernel: audit: type=1131 audit(1707506206.335:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.417209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:16:46.417244 kernel: audit: type=1130 audit(1707506206.356:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.417295 kernel: audit: type=1131 audit(1707506206.356:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.417328 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:16:46.417361 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:16:46.417392 kernel: audit: type=1305 audit(1707506206.388:97): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:16:46.417421 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:16:46.417450 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 9 19:16:46.417483 systemd-journald[1532]: Journal started Feb 9 19:16:46.422520 systemd-journald[1532]: Runtime Journal (/run/log/journal/ec2c947528b8a15f1af886c17e9503f1) is 8.0M, max 75.4M, 67.4M free. Feb 9 19:16:46.422592 systemd[1]: Finished modprobe@loop.service. Feb 9 19:16:46.006000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:16:46.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.388000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:16:46.388000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff7176530 a2=4000 a3=1 items=0 ppid=1 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:46.388000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:16:46.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:16:46.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.440669 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:16:46.440739 systemd[1]: Started systemd-journald.service. Feb 9 19:16:46.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.438167 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:16:46.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.443404 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:16:46.446324 systemd[1]: Reached target network-pre.target. Feb 9 19:16:46.450472 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:16:46.454716 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:16:46.456469 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:16:46.461735 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:16:46.467592 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:16:46.471108 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:16:46.473841 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:16:46.476300 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:16:46.480949 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:16:46.490710 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:16:46.493347 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:16:46.526595 systemd-journald[1532]: Time spent on flushing to /var/log/journal/ec2c947528b8a15f1af886c17e9503f1 is 95.502ms for 1113 entries. Feb 9 19:16:46.526595 systemd-journald[1532]: System Journal (/var/log/journal/ec2c947528b8a15f1af886c17e9503f1) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:16:46.639910 systemd-journald[1532]: Received client request to flush runtime journal. 
Feb 9 19:16:46.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.531376 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:16:46.533384 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:16:46.564419 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:16:46.596085 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:16:46.600642 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:16:46.613948 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:16:46.618711 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:16:46.641572 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:16:46.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.647144 udevadm[1586]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:16:46.758147 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:16:46.762571 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:16:46.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:46.820092 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:16:46.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:47.428916 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:16:47.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:47.433184 systemd[1]: Starting systemd-udevd.service... Feb 9 19:16:47.474034 systemd-udevd[1594]: Using default interface naming scheme 'v252'. Feb 9 19:16:47.523306 systemd[1]: Started systemd-udevd.service. Feb 9 19:16:47.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:47.530160 systemd[1]: Starting systemd-networkd.service... 
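
[Note] The systemd-udev-trigger / systemd-udev-settle pair above replays kernel "add" events so udev rules run against hardware that existed before udevd started; settle then blocks until the event queue drains (and is deprecated as a unit dependency, per the udevadm warning). A sketch of the equivalent manual steps, using only standard udev tooling:

    udevadm trigger --action=add    # replay add events for already-present devices
    udevadm settle --timeout=30     # wait up to 30s for the udev event queue to empty
    systemd-hwdb update             # rebuild the hardware database after hwdb changes
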
Feb 9 19:16:47.557154 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:16:47.632984 (udev-worker)[1595]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:47.635656 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:16:47.648690 systemd[1]: Started systemd-userdbd.service. Feb 9 19:16:47.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:47.813347 systemd-networkd[1600]: lo: Link UP Feb 9 19:16:47.814437 systemd-networkd[1600]: lo: Gained carrier Feb 9 19:16:47.817410 systemd-networkd[1600]: Enumeration completed Feb 9 19:16:47.817760 systemd[1]: Started systemd-networkd.service. Feb 9 19:16:47.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:47.828412 systemd-networkd[1600]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:16:47.833901 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:16:47.834272 systemd-networkd[1600]: eth0: Link UP Feb 9 19:16:47.834799 systemd-networkd[1600]: eth0: Gained carrier Feb 9 19:16:47.859453 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:16:47.873043 systemd-networkd[1600]: eth0: DHCPv4 address 172.31.24.80/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:16:47.914855 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1601) Feb 9 19:16:48.035303 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:16:48.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.058466 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 19:16:48.061125 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:16:48.095156 lvm[1715]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:16:48.133511 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:16:48.135581 systemd[1]: Reached target cryptsetup.target. Feb 9 19:16:48.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.140436 systemd[1]: Starting lvm2-activation.service... Feb 9 19:16:48.150593 lvm[1717]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:16:48.188611 systemd[1]: Finished lvm2-activation.service. Feb 9 19:16:48.190558 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:16:48.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.196428 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:16:48.196491 systemd[1]: Reached target local-fs.target. 
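
[Note] eth0 above is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network and configured via DHCPv4 (lease 172.31.24.80/20 from 172.31.16.1). A sketch of what such a unit looks like — the exact file contents here are an assumption inferred from the observed DHCP behavior, not read from this image:

    cat /usr/lib/systemd/network/zz-default.network
    # [Match]
    # Name=*
    #
    # [Network]
    # DHCP=yes
    networkctl status eth0    # shows the acquired address, gateway, and carrier state
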
Feb 9 19:16:48.198147 systemd[1]: Reached target machines.target. Feb 9 19:16:48.202201 systemd[1]: Starting ldconfig.service... Feb 9 19:16:48.205627 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:16:48.205768 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:16:48.208269 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:16:48.212032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:16:48.217303 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:16:48.219975 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:16:48.220141 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:16:48.222657 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:16:48.247122 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1720 (bootctl) Feb 9 19:16:48.249368 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:16:48.261553 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:16:48.269087 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:16:48.272729 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:16:48.276266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:16:48.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.388017 systemd-fsck[1729]: fsck.fat 4.2 (2021-01-31) Feb 9 19:16:48.388017 systemd-fsck[1729]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 19:16:48.390302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:16:48.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.395399 systemd[1]: Mounting boot.mount... Feb 9 19:16:48.428055 systemd[1]: Mounted boot.mount. Feb 9 19:16:48.457178 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:16:48.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.637770 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:16:48.642250 systemd[1]: Starting audit-rules.service... Feb 9 19:16:48.646384 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:16:48.651275 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:16:48.662126 systemd[1]: Starting systemd-resolved.service... 
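
[Note] The three "Duplicate line" warnings above are benign: two tmpfiles.d fragments declare the same path, and systemd-tmpfiles keeps the first one parsed. A sketch of the line format involved and how to re-apply it by hand — the override file name is illustrative:

    # /etc/tmpfiles.d/lock.conf -- same path as /usr/lib/tmpfiles.d/legacy.conf:13
    # d <path> <mode> <user> <group> <age>
    d /run/lock 0755 root root -

    systemd-tmpfiles --create    # re-apply tmpfiles.d configuration manually
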
Feb 9 19:16:48.670390 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:16:48.678202 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:16:48.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.696977 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:16:48.699058 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:16:48.717000 audit[1756]: SYSTEM_BOOT pid=1756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.722229 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:16:48.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.764250 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:16:48.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:16:48.825000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:16:48.825000 audit[1770]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff7d8e1c0 a2=420 a3=0 items=0 ppid=1747 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:16:48.825000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:16:48.827312 augenrules[1770]: No rules Feb 9 19:16:48.829008 systemd[1]: Finished audit-rules.service. Feb 9 19:16:48.885120 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:16:48.887053 systemd[1]: Reached target time-set.target. Feb 9 19:16:48.908067 systemd-resolved[1750]: Positive Trust Anchors: Feb 9 19:16:48.908987 systemd-resolved[1750]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:16:48.909206 systemd-resolved[1750]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:16:48.942176 systemd-resolved[1750]: Defaulting to hostname 'linux'. Feb 9 19:16:48.945415 systemd[1]: Started systemd-resolved.service. Feb 9 19:16:48.947237 systemd[1]: Reached target network.target. Feb 9 19:16:48.948902 systemd[1]: Reached target nss-lookup.target. Feb 9 19:16:48.953524 systemd-timesyncd[1754]: Contacted time server 204.93.207.12:123 (0.flatcar.pool.ntp.org). 
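
[Note] The audit PROCTITLE record above carries the audit-rules command line hex-encoded, with NUL bytes separating the arguments. A sketch for decoding it with standard tools (nothing host-specific; the hex string is copied from the record):

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules

which matches the "augenrules" run that found no rules.
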
Feb 9 19:16:48.954563 systemd-timesyncd[1754]: Initial clock synchronization to Fri 2024-02-09 19:16:48.861792 UTC. Feb 9 19:16:48.970992 systemd-networkd[1600]: eth0: Gained IPv6LL Feb 9 19:16:48.973966 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:16:48.976247 systemd[1]: Reached target network-online.target. Feb 9 19:16:49.089652 ldconfig[1719]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:16:49.112554 systemd[1]: Finished ldconfig.service. Feb 9 19:16:49.116716 systemd[1]: Starting systemd-update-done.service... Feb 9 19:16:49.134424 systemd[1]: Finished systemd-update-done.service. Feb 9 19:16:49.136689 systemd[1]: Reached target sysinit.target. Feb 9 19:16:49.139409 systemd[1]: Started motdgen.path. Feb 9 19:16:49.141429 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:16:49.143940 systemd[1]: Started logrotate.timer. Feb 9 19:16:49.145570 systemd[1]: Started mdadm.timer. Feb 9 19:16:49.146925 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:16:49.148676 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:16:49.148724 systemd[1]: Reached target paths.target. Feb 9 19:16:49.150222 systemd[1]: Reached target timers.target. Feb 9 19:16:49.154907 systemd[1]: Listening on dbus.socket. Feb 9 19:16:49.158641 systemd[1]: Starting docker.socket... Feb 9 19:16:49.162348 systemd[1]: Listening on sshd.socket. Feb 9 19:16:49.164183 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:16:49.165184 systemd[1]: Listening on docker.socket. Feb 9 19:16:49.166873 systemd[1]: Reached target sockets.target. Feb 9 19:16:49.168489 systemd[1]: Reached target basic.target. Feb 9 19:16:49.170295 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:16:49.170382 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:16:49.170434 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:16:49.172913 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:16:49.180690 systemd[1]: Starting containerd.service... Feb 9 19:16:49.185581 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:16:49.190567 systemd[1]: Starting dbus.service... Feb 9 19:16:49.199406 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:16:49.222713 systemd[1]: Starting extend-filesystems.service... Feb 9 19:16:49.224428 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:16:49.227812 systemd[1]: Starting motdgen.service... Feb 9 19:16:49.243812 systemd[1]: Started nvidia.service. Feb 9 19:16:49.248288 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:16:49.253345 systemd[1]: Starting prepare-critools.service... Feb 9 19:16:49.263096 systemd[1]: Starting prepare-helm.service... Feb 9 19:16:49.276507 jq[1788]: false Feb 9 19:16:49.269571 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:16:49.275195 systemd[1]: Starting sshd-keygen.service... Feb 9 19:16:49.289094 systemd[1]: Starting systemd-logind.service... 
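
[Note] With systemd-resolved and systemd-timesyncd both running at this point (trust anchors and the NTP handshake appear above), resolver and clock state can be checked directly; a sketch using standard systemd tooling:

    resolvectl status              # current DNS servers plus the DNSSEC trust anchors
    timedatectl timesync-status    # NTP server, stratum, and offset for systemd-timesyncd
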
Feb 9 19:16:49.328087 dbus-daemon[1787]: [system] SELinux support is enabled Feb 9 19:16:49.290873 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1 Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1p1 Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1p2 Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1p3 Feb 9 19:16:49.349509 extend-filesystems[1791]: Found usr Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1p4 Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1p6 Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1p7 Feb 9 19:16:49.349509 extend-filesystems[1791]: Found nvme0n1p9 Feb 9 19:16:49.349509 extend-filesystems[1791]: Checking size of /dev/nvme0n1p9 Feb 9 19:16:49.291011 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:16:49.356386 dbus-daemon[1787]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1600 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:16:49.304112 systemd[1]: Starting update-engine.service... Feb 9 19:16:49.313753 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:16:49.524648 extend-filesystems[1791]: Resized partition /dev/nvme0n1p9 Feb 9 19:16:49.322043 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:16:49.529368 jq[1809]: true Feb 9 19:16:49.323483 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:16:49.587293 extend-filesystems[1842]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:16:49.330639 systemd[1]: Started dbus.service. Feb 9 19:16:49.604055 tar[1816]: crictl Feb 9 19:16:49.335853 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:16:49.622382 tar[1818]: linux-arm64/helm Feb 9 19:16:49.335906 systemd[1]: Reached target system-config.target. Feb 9 19:16:49.630196 tar[1820]: ./ Feb 9 19:16:49.630196 tar[1820]: ./macvlan Feb 9 19:16:49.337930 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:16:49.337969 systemd[1]: Reached target user-config.target. Feb 9 19:16:49.363238 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:16:49.672374 jq[1833]: true Feb 9 19:16:49.392596 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:16:49.395335 systemd[1]: Finished motdgen.service. Feb 9 19:16:49.460326 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:16:49.460904 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:16:49.707022 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:16:49.820888 amazon-ssm-agent[1783]: 2024/02/09 19:16:49 Failed to load instance info from vault. RegistrationKey does not exist. 
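
[Note] extend-filesystems above grows the root filesystem online: the kernel line shows ext4 on nvme0n1p9 resizing from 553472 to 1489915 4k blocks (about 2.1 GiB to 5.7 GiB). A sketch of the equivalent manual operation — device names are copied from the log; growpart is from cloud-utils and is an assumption, since the log itself only shows the resize2fs step:

    lsblk /dev/nvme0n1          # confirm the partition layout found above
    growpart /dev/nvme0n1 9     # grow partition 9 to fill the disk (cloud-utils)
    resize2fs /dev/nvme0n1p9    # online-resize the mounted ext4 filesystem
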
Feb 9 19:16:49.831544 amazon-ssm-agent[1783]: Initializing new seelog logger Feb 9 19:16:49.841137 amazon-ssm-agent[1783]: New Seelog Logger Creation Complete Feb 9 19:16:49.841440 env[1826]: time="2024-02-09T19:16:49.841358481Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:16:49.843540 amazon-ssm-agent[1783]: 2024/02/09 19:16:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:16:49.844633 amazon-ssm-agent[1783]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:16:49.848884 amazon-ssm-agent[1783]: 2024/02/09 19:16:49 processing appconfig overrides Feb 9 19:16:49.871493 update_engine[1807]: I0209 19:16:49.871090 1807 main.cc:92] Flatcar Update Engine starting Feb 9 19:16:49.876063 systemd[1]: Started update-engine.service. Feb 9 19:16:49.914980 update_engine[1807]: I0209 19:16:49.876160 1807 update_check_scheduler.cc:74] Next update check in 5m10s Feb 9 19:16:49.880902 systemd[1]: Started locksmithd.service. Feb 9 19:16:49.912626 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:16:49.988199 env[1826]: time="2024-02-09T19:16:49.988124262Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:16:49.989435 env[1826]: time="2024-02-09T19:16:49.988379796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:16:49.990837 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:16:49.996085 env[1826]: time="2024-02-09T19:16:49.996004204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:16:49.996085 env[1826]: time="2024-02-09T19:16:49.996073754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.098180836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.098246722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.098282801Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.098308343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.098499608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.098977819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.099274814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.099309207Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.099417527Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:16:50.099902 env[1826]: time="2024-02-09T19:16:50.099444280Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:16:50.105252 extend-filesystems[1842]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:16:50.105252 extend-filesystems[1842]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:16:50.105252 extend-filesystems[1842]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:16:50.132039 extend-filesystems[1791]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:16:50.110802 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:16:50.111347 systemd[1]: Finished extend-filesystems.service. Feb 9 19:16:50.139559 bash[1873]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:16:50.141247 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:16:50.151942 env[1826]: time="2024-02-09T19:16:50.151801123Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:16:50.151942 env[1826]: time="2024-02-09T19:16:50.151898122Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:16:50.151942 env[1826]: time="2024-02-09T19:16:50.151934914Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:16:50.152378 env[1826]: time="2024-02-09T19:16:50.151998863Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.152378 env[1826]: time="2024-02-09T19:16:50.152346372Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.152528 env[1826]: time="2024-02-09T19:16:50.152383698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.152528 env[1826]: time="2024-02-09T19:16:50.152417603Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.153239 env[1826]: time="2024-02-09T19:16:50.152945187Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.153239 env[1826]: time="2024-02-09T19:16:50.152993442Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.153239 env[1826]: time="2024-02-09T19:16:50.153027538Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.153239 env[1826]: time="2024-02-09T19:16:50.153061989Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.153239 env[1826]: time="2024-02-09T19:16:50.153092461Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 9 19:16:50.153239 env[1826]: time="2024-02-09T19:16:50.153298386Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:16:50.153239 env[1826]: time="2024-02-09T19:16:50.153503016Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:16:50.159202 env[1826]: time="2024-02-09T19:16:50.159121654Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:16:50.159361 env[1826]: time="2024-02-09T19:16:50.159205810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.159361 env[1826]: time="2024-02-09T19:16:50.159241260Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:16:50.159479 env[1826]: time="2024-02-09T19:16:50.159361555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162234680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162308941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162347111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162379294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162409730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162439513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162470935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162506978Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162787330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162857065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162890578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162919470Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162950702Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:16:50.165775 env[1826]: time="2024-02-09T19:16:50.162978346Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 9 19:16:50.165566 systemd[1]: Started containerd.service. Feb 9 19:16:50.174475 env[1826]: time="2024-02-09T19:16:50.163012703Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:16:50.174475 env[1826]: time="2024-02-09T19:16:50.163077958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:16:50.174625 env[1826]: time="2024-02-09T19:16:50.163429103Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:16:50.174625 env[1826]: time="2024-02-09T19:16:50.163558878Z" level=info msg="Connect containerd service" Feb 9 19:16:50.174625 env[1826]: time="2024-02-09T19:16:50.163626046Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:16:50.174625 env[1826]: time="2024-02-09T19:16:50.164553291Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:16:50.174625 env[1826]: time="2024-02-09T19:16:50.165173240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:16:50.174625 env[1826]: time="2024-02-09T19:16:50.165297788Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:16:50.174625 env[1826]: time="2024-02-09T19:16:50.165397769Z" level=info msg="containerd successfully booted in 0.325756s" Feb 9 19:16:50.182597 env[1826]: time="2024-02-09T19:16:50.178623134Z" level=info msg="Start subscribing containerd event" Feb 9 19:16:50.182597 env[1826]: time="2024-02-09T19:16:50.178715678Z" level=info msg="Start recovering state" Feb 9 19:16:50.182597 env[1826]: time="2024-02-09T19:16:50.178869475Z" level=info msg="Start event monitor" Feb 9 19:16:50.182597 env[1826]: time="2024-02-09T19:16:50.178908630Z" level=info msg="Start snapshots syncer" Feb 9 19:16:50.182597 env[1826]: time="2024-02-09T19:16:50.178932877Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:16:50.182597 env[1826]: time="2024-02-09T19:16:50.178952776Z" level=info msg="Start streaming server" Feb 9 19:16:50.214413 tar[1820]: ./static Feb 9 19:16:50.245223 systemd-logind[1804]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 19:16:50.246446 systemd-logind[1804]: New seat seat0. Feb 9 19:16:50.255408 systemd[1]: Started systemd-logind.service. Feb 9 19:16:50.321754 dbus-daemon[1787]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:16:50.322042 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:16:50.325731 dbus-daemon[1787]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1821 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:16:50.331019 systemd[1]: Starting polkit.service... Feb 9 19:16:50.368467 polkitd[1964]: Started polkitd version 121 Feb 9 19:16:50.420738 polkitd[1964]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:16:50.420882 polkitd[1964]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:16:50.432620 polkitd[1964]: Finished loading, compiling and executing 2 rules Feb 9 19:16:50.437081 dbus-daemon[1787]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:16:50.437344 systemd[1]: Started polkit.service. Feb 9 19:16:50.441934 polkitd[1964]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:16:50.447504 tar[1820]: ./vlan Feb 9 19:16:50.471773 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:16:50.473305 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:16:50.509262 systemd-hostnamed[1821]: Hostname set to (transient) Feb 9 19:16:50.509432 systemd-resolved[1750]: System hostname changed to 'ip-172-31-24-80'. Feb 9 19:16:50.588261 coreos-metadata[1786]: Feb 09 19:16:50.587 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:16:50.594317 coreos-metadata[1786]: Feb 09 19:16:50.594 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:16:50.609005 coreos-metadata[1786]: Feb 09 19:16:50.607 INFO Fetch successful Feb 9 19:16:50.609523 coreos-metadata[1786]: Feb 09 19:16:50.609 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:16:50.611988 coreos-metadata[1786]: Feb 09 19:16:50.611 INFO Fetch successful Feb 9 19:16:50.617377 unknown[1786]: wrote ssh authorized keys file for user: core Feb 9 19:16:50.641548 update-ssh-keys[1988]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:16:50.642652 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
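
[Note] coreos-metadata above fetches the instance's SSH key via IMDSv2: a PUT for a session token, then GETs against the 2019-10-01 metadata tree. A sketch of the same exchange with curl — endpoints are copied from the log, the token TTL value is illustrative:

    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      "http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key"
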
Feb 9 19:16:50.720628 tar[1820]: ./portmap Feb 9 19:16:50.871162 tar[1820]: ./host-local Feb 9 19:16:50.877455 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Create new startup processor Feb 9 19:16:50.878077 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:16:50.878077 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing bookkeeping folders Feb 9 19:16:50.878077 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO removing the completed state files Feb 9 19:16:50.878077 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:16:50.878077 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:16:50.881966 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing healthcheck folders for long running plugins Feb 9 19:16:50.881966 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing locations for inventory plugin Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing default location for custom inventory Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing default location for file inventory Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Initializing default location for role inventory Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Init the cloudwatchlogs publisher Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:16:50.882164 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:16:50.882566 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO OS: linux, Arch: arm64 Feb 9 19:16:50.883811 amazon-ssm-agent[1783]: datastore file 
/var/lib/amazon/ssm/i-0b19c1ddc57c1ec3b/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:16:50.886993 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 19:16:50.983165 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:16:51.022679 tar[1820]: ./vrf Feb 9 19:16:51.078177 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:16:51.154257 tar[1820]: ./bridge Feb 9 19:16:51.172872 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:16:51.267650 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:16:51.295134 tar[1820]: ./tuning Feb 9 19:16:51.362893 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [instanceID=i-0b19c1ddc57c1ec3b] Starting association polling Feb 9 19:16:51.419835 tar[1820]: ./firewall Feb 9 19:16:51.458009 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:16:51.545785 tar[1820]: ./host-device Feb 9 19:16:51.553313 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 19:16:51.648867 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 19:16:51.663162 systemd[1]: Finished prepare-critools.service. Feb 9 19:16:51.681373 tar[1820]: ./sbr Feb 9 19:16:51.744553 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 19:16:51.760057 tar[1820]: ./loopback Feb 9 19:16:51.782262 tar[1818]: linux-arm64/LICENSE Feb 9 19:16:51.782262 tar[1818]: linux-arm64/README.md Feb 9 19:16:51.804776 systemd[1]: Finished prepare-helm.service. Feb 9 19:16:51.834526 tar[1820]: ./dhcp Feb 9 19:16:51.840574 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 19:16:51.936703 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 19:16:51.950614 tar[1820]: ./ptp Feb 9 19:16:52.001248 tar[1820]: ./ipvlan Feb 9 19:16:52.033009 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 19:16:52.050302 tar[1820]: ./bandwidth Feb 9 19:16:52.120389 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:16:52.129517 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 19:16:52.167382 locksmithd[1890]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:16:52.226226 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0b19c1ddc57c1ec3b, requestId: 75791518-4ac7-4e44-9c64-f44d6073aeb2 Feb 9 19:16:52.323041 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [OfflineService] Starting document processing engine... 
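
[Note] prepare-cni-plugins above has finished unpacking the plugin set (bridge, host-local, portmap, dhcp, ...) that containerd's CRI plugin expects under /opt/cni/bin; both paths come from the containerd configuration dumped earlier in this log. A quick sanity check:

    ls /opt/cni/bin      # the plugins untarred above
    ls /etc/cni/net.d    # was empty when containerd started, hence the earlier
                         # "no network config found in /etc/cni/net.d" warning
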
Feb 9 19:16:52.420103 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [OfflineService] [EngineProcessor] Starting Feb 9 19:16:52.517263 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 19:16:52.614662 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [OfflineService] Starting message polling Feb 9 19:16:52.712294 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [OfflineService] Starting send replies to MDS Feb 9 19:16:52.810053 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 19:16:52.908030 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 19:16:53.006409 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:16:53.104784 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessageGatewayService] listening reply. Feb 9 19:16:53.203343 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 19:16:53.302197 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [StartupProcessor] Executing startup processor tasks Feb 9 19:16:53.401112 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 19:16:53.500295 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 19:16:53.599718 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 19:16:53.699253 amazon-ssm-agent[1783]: 2024-02-09 19:16:50 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b19c1ddc57c1ec3b?role=subscribe&stream=input Feb 9 19:16:53.799028 amazon-ssm-agent[1783]: 2024-02-09 19:16:51 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b19c1ddc57c1ec3b?role=subscribe&stream=input Feb 9 19:16:53.899027 amazon-ssm-agent[1783]: 2024-02-09 19:16:51 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 19:16:53.999152 amazon-ssm-agent[1783]: 2024-02-09 19:16:51 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 19:16:58.105580 systemd[1]: Created slice system-sshd.slice. Feb 9 19:16:58.395051 sshd_keygen[1844]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:16:58.430489 systemd[1]: Finished sshd-keygen.service. Feb 9 19:16:58.435767 systemd[1]: Starting issuegen.service... Feb 9 19:16:58.440012 systemd[1]: Started sshd@0-172.31.24.80:22-147.75.109.163:40544.service. Feb 9 19:16:58.452502 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:16:58.453065 systemd[1]: Finished issuegen.service. Feb 9 19:16:58.458154 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:16:58.475939 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:16:58.480560 systemd[1]: Started getty@tty1.service. Feb 9 19:16:58.487190 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:16:58.490138 systemd[1]: Reached target getty.target. Feb 9 19:16:58.492620 systemd[1]: Reached target multi-user.target. Feb 9 19:16:58.497794 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
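
[Note] sshd-keygen above generates the RSA/ECDSA/ED25519 host keys on first boot. A sketch of the equivalent manual step plus a config check, using standard OpenSSH tooling:

    ssh-keygen -A        # generate any missing default host keys under /etc/ssh
    /usr/sbin/sshd -t    # validate sshd_config before (re)starting the daemon
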
Feb 9 19:16:58.512148 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:16:58.512679 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:16:58.515917 systemd[1]: Startup finished in 13.219s (kernel) + 16.836s (userspace) = 30.056s. Feb 9 19:16:58.634765 sshd[2021]: Accepted publickey for core from 147.75.109.163 port 40544 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:58.639347 sshd[2021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:58.655484 systemd[1]: Created slice user-500.slice. Feb 9 19:16:58.658379 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:16:58.664345 systemd-logind[1804]: New session 1 of user core. Feb 9 19:16:58.677245 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:16:58.679930 systemd[1]: Starting user@500.service... Feb 9 19:16:58.688495 (systemd)[2035]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:58.860784 systemd[2035]: Queued start job for default target default.target. Feb 9 19:16:58.861224 systemd[2035]: Reached target paths.target. Feb 9 19:16:58.861262 systemd[2035]: Reached target sockets.target. Feb 9 19:16:58.861294 systemd[2035]: Reached target timers.target. Feb 9 19:16:58.861322 systemd[2035]: Reached target basic.target. Feb 9 19:16:58.861413 systemd[2035]: Reached target default.target. Feb 9 19:16:58.861474 systemd[2035]: Startup finished in 161ms. Feb 9 19:16:58.862895 systemd[1]: Started user@500.service. Feb 9 19:16:58.864798 systemd[1]: Started session-1.scope. Feb 9 19:16:59.008469 systemd[1]: Started sshd@1-172.31.24.80:22-147.75.109.163:40552.service. Feb 9 19:16:59.182131 sshd[2044]: Accepted publickey for core from 147.75.109.163 port 40552 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:59.183145 sshd[2044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:59.190942 systemd-logind[1804]: New session 2 of user core. Feb 9 19:16:59.191863 systemd[1]: Started session-2.scope. Feb 9 19:16:59.324193 sshd[2044]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:59.329163 systemd[1]: sshd@1-172.31.24.80:22-147.75.109.163:40552.service: Deactivated successfully. Feb 9 19:16:59.330537 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:16:59.333241 systemd-logind[1804]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:16:59.335892 systemd-logind[1804]: Removed session 2. Feb 9 19:16:59.350093 systemd[1]: Started sshd@2-172.31.24.80:22-147.75.109.163:40556.service. Feb 9 19:16:59.520236 sshd[2051]: Accepted publickey for core from 147.75.109.163 port 40556 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:59.523144 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:59.529902 systemd-logind[1804]: New session 3 of user core. Feb 9 19:16:59.531535 systemd[1]: Started session-3.scope. Feb 9 19:16:59.665117 sshd[2051]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:59.670924 systemd-logind[1804]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:16:59.672536 systemd[1]: sshd@2-172.31.24.80:22-147.75.109.163:40556.service: Deactivated successfully. Feb 9 19:16:59.673993 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:16:59.676705 systemd-logind[1804]: Removed session 3. 
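
[Note] Each SSH login above produces a session-N.scope under user-500.slice, managed by systemd-logind; the user name "core" and uid 500 come from the log. A sketch for inspecting them:

    loginctl list-sessions       # one row per session scope seen above
    loginctl user-status core    # slice, runtime dir, and active sessions for uid 500
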
Feb 9 19:16:59.689512 systemd[1]: Started sshd@3-172.31.24.80:22-147.75.109.163:40572.service. Feb 9 19:16:59.855039 sshd[2058]: Accepted publickey for core from 147.75.109.163 port 40572 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:59.857933 sshd[2058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:59.866261 systemd[1]: Started session-4.scope. Feb 9 19:16:59.866903 systemd-logind[1804]: New session 4 of user core. Feb 9 19:16:59.997273 sshd[2058]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:00.002310 systemd[1]: sshd@3-172.31.24.80:22-147.75.109.163:40572.service: Deactivated successfully. Feb 9 19:17:00.003674 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:17:00.006211 systemd-logind[1804]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:17:00.008870 systemd-logind[1804]: Removed session 4. Feb 9 19:17:00.022955 systemd[1]: Started sshd@4-172.31.24.80:22-147.75.109.163:40586.service. Feb 9 19:17:00.190302 sshd[2065]: Accepted publickey for core from 147.75.109.163 port 40586 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:00.193156 sshd[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:00.201539 systemd[1]: Started session-5.scope. Feb 9 19:17:00.202715 systemd-logind[1804]: New session 5 of user core. Feb 9 19:17:00.321416 sudo[2069]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:17:00.323080 sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:17:01.002192 systemd[1]: Starting docker.service... Feb 9 19:17:01.075601 env[2085]: time="2024-02-09T19:17:01.075531370Z" level=info msg="Starting up" Feb 9 19:17:01.081298 env[2085]: time="2024-02-09T19:17:01.081242942Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:17:01.081298 env[2085]: time="2024-02-09T19:17:01.081286653Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:17:01.081491 env[2085]: time="2024-02-09T19:17:01.081330938Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:17:01.081491 env[2085]: time="2024-02-09T19:17:01.081354763Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:17:01.084960 env[2085]: time="2024-02-09T19:17:01.084918557Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:17:01.085159 env[2085]: time="2024-02-09T19:17:01.085130899Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:17:01.085286 env[2085]: time="2024-02-09T19:17:01.085252358Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:17:01.085392 env[2085]: time="2024-02-09T19:17:01.085365377Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:17:01.095513 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1517502988-merged.mount: Deactivated successfully. 
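
[Note] The dockerd startup above probes overlayfs support with a throwaway check mount before settling on a storage driver. Once the daemon is up, the chosen driver can be confirmed; a sketch using the standard docker CLI:

    docker info --format '{{.Driver}}'    # expected to print: overlay2
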
Feb 9 19:17:01.515219 env[2085]: time="2024-02-09T19:17:01.515156188Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:17:01.515219 env[2085]: time="2024-02-09T19:17:01.515200270Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:17:01.515561 env[2085]: time="2024-02-09T19:17:01.515452359Z" level=info msg="Loading containers: start." Feb 9 19:17:01.680873 kernel: Initializing XFRM netlink socket Feb 9 19:17:01.722233 env[2085]: time="2024-02-09T19:17:01.722188891Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:17:01.724855 (udev-worker)[2096]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:17:01.810305 systemd-networkd[1600]: docker0: Link UP Feb 9 19:17:01.827460 env[2085]: time="2024-02-09T19:17:01.827395612Z" level=info msg="Loading containers: done." Feb 9 19:17:01.852312 env[2085]: time="2024-02-09T19:17:01.852256496Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:17:01.852842 env[2085]: time="2024-02-09T19:17:01.852795718Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:17:01.853122 env[2085]: time="2024-02-09T19:17:01.853095852Z" level=info msg="Daemon has completed initialization" Feb 9 19:17:01.878375 systemd[1]: Started docker.service. Feb 9 19:17:01.894064 env[2085]: time="2024-02-09T19:17:01.893998113Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:17:01.925873 systemd[1]: Reloading. Feb 9 19:17:02.023755 /usr/lib/systemd/system-generators/torcx-generator[2222]: time="2024-02-09T19:17:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:17:02.030897 /usr/lib/systemd/system-generators/torcx-generator[2222]: time="2024-02-09T19:17:02Z" level=info msg="torcx already run" Feb 9 19:17:02.206089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:17:02.206789 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:17:02.245143 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:17:02.455062 systemd[1]: Started kubelet.service. Feb 9 19:17:02.598476 kubelet[2283]: E0209 19:17:02.597996 2283 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:17:02.606610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:17:02.607056 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
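
[Note] The kubelet exit above is a flag-validation failure: no container runtime endpoint was passed, and the error message itself names the missing flag. Since this host runs containerd on /run/containerd/containerd.sock (see the "serving..." lines earlier), a sketch of a fix via a systemd drop-in — the drop-in path and the kubelet binary path are illustrative, only the flag and socket come from the log:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-runtime.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
    EOF
    systemctl daemon-reload && systemctl restart kubelet
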
Feb 9 19:17:02.983991 env[1826]: time="2024-02-09T19:17:02.983861652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:17:03.611230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995599765.mount: Deactivated successfully. Feb 9 19:17:05.885743 env[1826]: time="2024-02-09T19:17:05.885671133Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:05.888697 env[1826]: time="2024-02-09T19:17:05.888634867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:05.892001 env[1826]: time="2024-02-09T19:17:05.891948001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:05.895162 env[1826]: time="2024-02-09T19:17:05.895115615Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:05.896781 env[1826]: time="2024-02-09T19:17:05.896734732Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 19:17:05.914457 env[1826]: time="2024-02-09T19:17:05.914390191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:17:08.340928 env[1826]: time="2024-02-09T19:17:08.340847222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:08.344325 env[1826]: time="2024-02-09T19:17:08.344262652Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:08.347892 env[1826]: time="2024-02-09T19:17:08.347840738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:08.351001 env[1826]: time="2024-02-09T19:17:08.350940026Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:08.352717 env[1826]: time="2024-02-09T19:17:08.352671833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 19:17:08.370025 env[1826]: time="2024-02-09T19:17:08.369956009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:17:09.807939 env[1826]: time="2024-02-09T19:17:09.807882756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:09.811851 env[1826]: time="2024-02-09T19:17:09.811771334Z" 
level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:09.815076 env[1826]: time="2024-02-09T19:17:09.815014307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:09.818497 env[1826]: time="2024-02-09T19:17:09.818446982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:09.820181 env[1826]: time="2024-02-09T19:17:09.820132941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 19:17:09.836556 env[1826]: time="2024-02-09T19:17:09.836485231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:17:11.136441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737127845.mount: Deactivated successfully. Feb 9 19:17:11.853480 env[1826]: time="2024-02-09T19:17:11.853420418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:11.857726 env[1826]: time="2024-02-09T19:17:11.857655816Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:11.861442 env[1826]: time="2024-02-09T19:17:11.861377853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:11.864946 env[1826]: time="2024-02-09T19:17:11.864898257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:11.867621 env[1826]: time="2024-02-09T19:17:11.866585549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 19:17:11.884218 env[1826]: time="2024-02-09T19:17:11.884171546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:17:12.425063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322951453.mount: Deactivated successfully. 
Feb 9 19:17:12.430454 env[1826]: time="2024-02-09T19:17:12.430402841Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:12.433102 env[1826]: time="2024-02-09T19:17:12.433057704Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:12.435600 env[1826]: time="2024-02-09T19:17:12.435556625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:12.437960 env[1826]: time="2024-02-09T19:17:12.437901116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:12.439347 env[1826]: time="2024-02-09T19:17:12.439295784Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 19:17:12.455687 env[1826]: time="2024-02-09T19:17:12.455627789Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:17:12.858498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:17:12.858853 systemd[1]: Stopped kubelet.service. Feb 9 19:17:12.861853 systemd[1]: Started kubelet.service. Feb 9 19:17:12.959135 kubelet[2326]: E0209 19:17:12.958970 2326 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:17:12.973387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:17:12.973791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:17:13.642505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount429268384.mount: Deactivated successfully. Feb 9 19:17:13.698474 amazon-ssm-agent[1783]: 2024-02-09 19:17:13 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
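The "restart counter is at 1" line and the roughly ten-second gap between each kubelet failure and the next start attempt are ordinary systemd restart handling. A sketch of the kind of unit settings that would produce exactly this cadence — an assumption, since the actual kubelet.service contents never appear in this log:

    [Unit]
    StartLimitIntervalSec=0   # never give up; keep scheduling restart jobs

    [Service]
    Restart=on-failure        # the exits above are status=1/FAILURE
    RestartSec=10             # matches the ~10 s spacing seen here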
Feb 9 19:17:17.425964 env[1826]: time="2024-02-09T19:17:17.425891975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:17.431107 env[1826]: time="2024-02-09T19:17:17.431058897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:17.434229 env[1826]: time="2024-02-09T19:17:17.434162704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:17.437127 env[1826]: time="2024-02-09T19:17:17.437078486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:17.438608 env[1826]: time="2024-02-09T19:17:17.438562866Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 19:17:17.455703 env[1826]: time="2024-02-09T19:17:17.455638705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:17:18.064752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256380680.mount: Deactivated successfully. Feb 9 19:17:18.755569 env[1826]: time="2024-02-09T19:17:18.755509929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.758655 env[1826]: time="2024-02-09T19:17:18.758585769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.763280 env[1826]: time="2024-02-09T19:17:18.763204632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.767389 env[1826]: time="2024-02-09T19:17:18.767325087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:18.768792 env[1826]: time="2024-02-09T19:17:18.768726977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 19:17:20.542242 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:17:23.191368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:17:23.191693 systemd[1]: Stopped kubelet.service. Feb 9 19:17:23.196024 systemd[1]: Started kubelet.service. 
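The images pulled so far — kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.26.13, pause:3.9, etcd:3.5.6-0 and coredns:v1.9.3 — are exactly the control-plane set kubeadm pins for this Kubernetes minor version, which together with the /etc/kubernetes/manifests static-pod path seen below suggests (though the log never says so) a kubeadm-style bootstrap. The expected list can be printed without pulling anything:

    kubeadm config images list --kubernetes-version v1.26.13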
Feb 9 19:17:23.300485 kubelet[2395]: E0209 19:17:23.300404 2395 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:17:23.304394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:17:23.304778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:17:24.834759 systemd[1]: Stopped kubelet.service. Feb 9 19:17:24.864995 systemd[1]: Reloading. Feb 9 19:17:24.988090 /usr/lib/systemd/system-generators/torcx-generator[2425]: time="2024-02-09T19:17:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:17:24.988155 /usr/lib/systemd/system-generators/torcx-generator[2425]: time="2024-02-09T19:17:24Z" level=info msg="torcx already run" Feb 9 19:17:25.155229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:17:25.155270 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:17:25.193334 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:17:25.388172 systemd[1]: Started kubelet.service. Feb 9 19:17:25.487273 kubelet[2487]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:17:25.487914 kubelet[2487]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:17:25.488277 kubelet[2487]: I0209 19:17:25.488200 2487 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:17:25.493711 kubelet[2487]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:17:25.493711 kubelet[2487]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
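kubelet[2487] finally comes up, and its first lines are deprecation warnings: --pod-infra-container-image is moving to the CRI side, while --volume-plugin-dir should migrate into the file passed via --config. A minimal sketch of the equivalent KubeletConfiguration stanza, using the flexvolume path this same kubelet logs moments later (treating that as the configured value is an assumption):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/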
Feb 9 19:17:26.285958 kubelet[2487]: I0209 19:17:26.285901 2487 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:17:26.285958 kubelet[2487]: I0209 19:17:26.285948 2487 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:17:26.286430 kubelet[2487]: I0209 19:17:26.286388 2487 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:17:26.293911 kubelet[2487]: E0209 19:17:26.293858 2487 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.294174 kubelet[2487]: I0209 19:17:26.294148 2487 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:17:26.295482 kubelet[2487]: W0209 19:17:26.295441 2487 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:17:26.296770 kubelet[2487]: I0209 19:17:26.296734 2487 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:17:26.297522 kubelet[2487]: I0209 19:17:26.297495 2487 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:17:26.297645 kubelet[2487]: I0209 19:17:26.297615 2487 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:17:26.297864 kubelet[2487]: I0209 19:17:26.297686 2487 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:17:26.297864 kubelet[2487]: I0209 19:17:26.297714 2487 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:17:26.298022 kubelet[2487]: I0209 19:17:26.297963 2487 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:17:26.304206 kubelet[2487]: I0209 19:17:26.304152 2487 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:17:26.304206 kubelet[2487]: I0209 19:17:26.304202 2487 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:17:26.304446 kubelet[2487]: I0209 19:17:26.304293 
2487 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:17:26.304446 kubelet[2487]: I0209 19:17:26.304317 2487 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:17:26.308126 kubelet[2487]: I0209 19:17:26.308089 2487 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:17:26.309580 kubelet[2487]: W0209 19:17:26.309538 2487 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:17:26.311054 kubelet[2487]: I0209 19:17:26.311002 2487 server.go:1186] "Started kubelet" Feb 9 19:17:26.312961 kubelet[2487]: W0209 19:17:26.312882 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.24.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.313170 kubelet[2487]: E0209 19:17:26.313149 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.317002 kubelet[2487]: W0209 19:17:26.316910 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.24.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-80&limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.317172 kubelet[2487]: E0209 19:17:26.317011 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-80&limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.322421 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:17:26.323431 kubelet[2487]: I0209 19:17:26.323395 2487 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:17:26.323688 kubelet[2487]: E0209 19:17:26.323640 2487 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:17:26.323770 kubelet[2487]: E0209 19:17:26.323693 2487 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:17:26.324001 kubelet[2487]: E0209 19:17:26.323848 2487 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-24-80.17b247e25f2014fb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-24-80", UID:"ip-172-31-24-80", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-24-80"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 26, 310958331, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 26, 310958331, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.24.80:6443/api/v1/namespaces/default/events": dial tcp 172.31.24.80:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:17:26.327519 kubelet[2487]: I0209 19:17:26.327460 2487 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:17:26.328604 kubelet[2487]: I0209 19:17:26.328554 2487 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:17:26.328944 kubelet[2487]: I0209 19:17:26.328907 2487 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:17:26.329594 kubelet[2487]: I0209 19:17:26.329559 2487 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:17:26.334437 kubelet[2487]: W0209 19:17:26.332712 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.24.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.334437 kubelet[2487]: E0209 19:17:26.332795 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.334437 kubelet[2487]: E0209 19:17:26.332925 2487 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.433402 kubelet[2487]: I0209 19:17:26.433343 2487 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-80" Feb 9 19:17:26.438134 kubelet[2487]: E0209 19:17:26.438081 2487 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80" Feb 9 19:17:26.441478 kubelet[2487]: I0209 19:17:26.441355 2487 cpu_manager.go:214] "Starting CPU manager" 
policy="none" Feb 9 19:17:26.441478 kubelet[2487]: I0209 19:17:26.441471 2487 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:17:26.441724 kubelet[2487]: I0209 19:17:26.441503 2487 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:17:26.444003 kubelet[2487]: I0209 19:17:26.443953 2487 policy_none.go:49] "None policy: Start" Feb 9 19:17:26.445453 kubelet[2487]: I0209 19:17:26.445411 2487 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:17:26.445453 kubelet[2487]: I0209 19:17:26.445460 2487 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:17:26.460141 kubelet[2487]: I0209 19:17:26.460091 2487 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:17:26.460478 kubelet[2487]: I0209 19:17:26.460443 2487 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:17:26.465751 kubelet[2487]: E0209 19:17:26.465703 2487 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-80\" not found" Feb 9 19:17:26.473132 kubelet[2487]: I0209 19:17:26.473083 2487 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:17:26.518066 kubelet[2487]: I0209 19:17:26.518004 2487 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:17:26.518755 kubelet[2487]: I0209 19:17:26.518733 2487 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:17:26.519165 kubelet[2487]: I0209 19:17:26.519142 2487 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:17:26.519457 kubelet[2487]: E0209 19:17:26.519425 2487 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:17:26.520912 kubelet[2487]: W0209 19:17:26.520555 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.24.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.520912 kubelet[2487]: E0209 19:17:26.520693 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.533636 kubelet[2487]: E0209 19:17:26.533589 2487 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.621226 kubelet[2487]: I0209 19:17:26.620063 2487 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:26.624404 kubelet[2487]: I0209 19:17:26.624370 2487 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:26.634013 kubelet[2487]: I0209 19:17:26.633962 2487 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:26.641120 kubelet[2487]: I0209 19:17:26.641072 2487 status_manager.go:698] "Failed to get status for pod" podUID=a9c647c7e3c4f500b76ee0b765b90cda pod="kube-system/kube-apiserver-ip-172-31-24-80" err="Get \"https://172.31.24.80:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-24-80\": dial tcp 172.31.24.80:6443: connect: connection refused" Feb 9 19:17:26.646013 kubelet[2487]: I0209 
19:17:26.645979 2487 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-80" Feb 9 19:17:26.656427 kubelet[2487]: I0209 19:17:26.647980 2487 status_manager.go:698] "Failed to get status for pod" podUID=5023ed270fa59707e5b7bee91e3827fd pod="kube-system/kube-controller-manager-ip-172-31-24-80" err="Get \"https://172.31.24.80:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-24-80\": dial tcp 172.31.24.80:6443: connect: connection refused" Feb 9 19:17:26.657007 kubelet[2487]: E0209 19:17:26.656976 2487 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80" Feb 9 19:17:26.657267 kubelet[2487]: I0209 19:17:26.657245 2487 status_manager.go:698] "Failed to get status for pod" podUID=9ab1e6f5ecb103071c59984ca62c9915 pod="kube-system/kube-scheduler-ip-172-31-24-80" err="Get \"https://172.31.24.80:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-24-80\": dial tcp 172.31.24.80:6443: connect: connection refused" Feb 9 19:17:26.733594 kubelet[2487]: I0209 19:17:26.733532 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:26.733788 kubelet[2487]: I0209 19:17:26.733604 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:26.733788 kubelet[2487]: I0209 19:17:26.733651 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9c647c7e3c4f500b76ee0b765b90cda-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a9c647c7e3c4f500b76ee0b765b90cda\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Feb 9 19:17:26.733788 kubelet[2487]: I0209 19:17:26.733701 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9c647c7e3c4f500b76ee0b765b90cda-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a9c647c7e3c4f500b76ee0b765b90cda\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Feb 9 19:17:26.733788 kubelet[2487]: I0209 19:17:26.733750 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:26.734056 kubelet[2487]: I0209 19:17:26.733833 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:26.734056 kubelet[2487]: I0209 19:17:26.733887 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:26.734056 kubelet[2487]: I0209 19:17:26.733933 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ab1e6f5ecb103071c59984ca62c9915-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-80\" (UID: \"9ab1e6f5ecb103071c59984ca62c9915\") " pod="kube-system/kube-scheduler-ip-172-31-24-80" Feb 9 19:17:26.734056 kubelet[2487]: I0209 19:17:26.733978 2487 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9c647c7e3c4f500b76ee0b765b90cda-ca-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a9c647c7e3c4f500b76ee0b765b90cda\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Feb 9 19:17:26.934959 kubelet[2487]: E0209 19:17:26.934789 2487 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:26.944423 env[1826]: time="2024-02-09T19:17:26.944147561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-80,Uid:a9c647c7e3c4f500b76ee0b765b90cda,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:26.957479 env[1826]: time="2024-02-09T19:17:26.956883601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-80,Uid:5023ed270fa59707e5b7bee91e3827fd,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:26.958177 env[1826]: time="2024-02-09T19:17:26.958067402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-80,Uid:9ab1e6f5ecb103071c59984ca62c9915,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:27.059898 kubelet[2487]: I0209 19:17:27.059340 2487 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-80" Feb 9 19:17:27.059898 kubelet[2487]: E0209 19:17:27.059866 2487 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80" Feb 9 19:17:27.118498 kubelet[2487]: E0209 19:17:27.118354 2487 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-24-80.17b247e25f2014fb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-24-80", UID:"ip-172-31-24-80", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting 
kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-24-80"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 17, 26, 310958331, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 17, 26, 310958331, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.24.80:6443/api/v1/namespaces/default/events": dial tcp 172.31.24.80:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:17:27.301479 kubelet[2487]: W0209 19:17:27.301358 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.24.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-80&limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.301479 kubelet[2487]: E0209 19:17:27.301446 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-80&limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.367216 kubelet[2487]: W0209 19:17:27.367127 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.24.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.367390 kubelet[2487]: E0209 19:17:27.367222 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.443024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717474968.mount: Deactivated successfully. 
Feb 9 19:17:27.453296 env[1826]: time="2024-02-09T19:17:27.453243414Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.456898 env[1826]: time="2024-02-09T19:17:27.456850663Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.459075 env[1826]: time="2024-02-09T19:17:27.459030781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.460609 env[1826]: time="2024-02-09T19:17:27.460559024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.462185 env[1826]: time="2024-02-09T19:17:27.462142717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.464561 env[1826]: time="2024-02-09T19:17:27.464491059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.468132 env[1826]: time="2024-02-09T19:17:27.468085879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.473718 env[1826]: time="2024-02-09T19:17:27.473656734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.479861 env[1826]: time="2024-02-09T19:17:27.479784520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.481438 env[1826]: time="2024-02-09T19:17:27.481394295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.483215 env[1826]: time="2024-02-09T19:17:27.483171898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.484998 env[1826]: time="2024-02-09T19:17:27.484952992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:17:27.525150 env[1826]: time="2024-02-09T19:17:27.523062307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:27.525150 env[1826]: time="2024-02-09T19:17:27.523165854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:27.525150 env[1826]: time="2024-02-09T19:17:27.523219038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:27.525150 env[1826]: time="2024-02-09T19:17:27.524288090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/018822abfcd92f7d3354be909fcf607caa77a134081bda87c2f9e088425b8cf8 pid=2564 runtime=io.containerd.runc.v2 Feb 9 19:17:27.587642 env[1826]: time="2024-02-09T19:17:27.586241773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:27.587642 env[1826]: time="2024-02-09T19:17:27.586318723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:27.587642 env[1826]: time="2024-02-09T19:17:27.586344612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:27.588209 env[1826]: time="2024-02-09T19:17:27.588095342Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1c06408ebf498a1a270ebfa247dcd07fbb3b1e765cafa8e66468c82ab9122b1 pid=2592 runtime=io.containerd.runc.v2 Feb 9 19:17:27.593713 env[1826]: time="2024-02-09T19:17:27.593579146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:27.593972 env[1826]: time="2024-02-09T19:17:27.593660042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:27.593972 env[1826]: time="2024-02-09T19:17:27.593687312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:27.597024 env[1826]: time="2024-02-09T19:17:27.596924365Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97a7bad4f38890bb5b256987945b6ce8d552dd94e8401086babf5d82f7921fae pid=2600 runtime=io.containerd.runc.v2 Feb 9 19:17:27.623867 kubelet[2487]: W0209 19:17:27.623665 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.24.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.623867 kubelet[2487]: E0209 19:17:27.623781 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.682090 kubelet[2487]: W0209 19:17:27.681923 2487 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.24.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.682090 kubelet[2487]: E0209 19:17:27.682013 2487 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.720917 env[1826]: time="2024-02-09T19:17:27.720860885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-80,Uid:9ab1e6f5ecb103071c59984ca62c9915,Namespace:kube-system,Attempt:0,} returns sandbox id \"018822abfcd92f7d3354be909fcf607caa77a134081bda87c2f9e088425b8cf8\"" Feb 9 19:17:27.726763 env[1826]: time="2024-02-09T19:17:27.726706210Z" level=info msg="CreateContainer within sandbox \"018822abfcd92f7d3354be909fcf607caa77a134081bda87c2f9e088425b8cf8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:17:27.736410 kubelet[2487]: E0209 19:17:27.736299 2487 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s": dial tcp 172.31.24.80:6443: connect: connection refused Feb 9 19:17:27.744512 env[1826]: time="2024-02-09T19:17:27.744430830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-80,Uid:5023ed270fa59707e5b7bee91e3827fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1c06408ebf498a1a270ebfa247dcd07fbb3b1e765cafa8e66468c82ab9122b1\"" Feb 9 19:17:27.749293 env[1826]: time="2024-02-09T19:17:27.749215597Z" level=info msg="CreateContainer within sandbox \"c1c06408ebf498a1a270ebfa247dcd07fbb3b1e765cafa8e66468c82ab9122b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:17:27.768694 env[1826]: time="2024-02-09T19:17:27.768630829Z" level=info msg="CreateContainer within sandbox \"018822abfcd92f7d3354be909fcf607caa77a134081bda87c2f9e088425b8cf8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"feb8cf7704b1f9c4bf92cc50cd777759330b40b4ff00e93b327daef4cc0c5bee\"" Feb 9 19:17:27.770463 env[1826]: 
time="2024-02-09T19:17:27.770405744Z" level=info msg="StartContainer for \"feb8cf7704b1f9c4bf92cc50cd777759330b40b4ff00e93b327daef4cc0c5bee\"" Feb 9 19:17:27.776154 env[1826]: time="2024-02-09T19:17:27.776086649Z" level=info msg="CreateContainer within sandbox \"c1c06408ebf498a1a270ebfa247dcd07fbb3b1e765cafa8e66468c82ab9122b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22f04d62905e1e369dfc4a1d0a695dff3d24c76364d3c11e1b0683a339f4ac25\"" Feb 9 19:17:27.777259 env[1826]: time="2024-02-09T19:17:27.777210936Z" level=info msg="StartContainer for \"22f04d62905e1e369dfc4a1d0a695dff3d24c76364d3c11e1b0683a339f4ac25\"" Feb 9 19:17:27.781387 env[1826]: time="2024-02-09T19:17:27.781315838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-80,Uid:a9c647c7e3c4f500b76ee0b765b90cda,Namespace:kube-system,Attempt:0,} returns sandbox id \"97a7bad4f38890bb5b256987945b6ce8d552dd94e8401086babf5d82f7921fae\"" Feb 9 19:17:27.787269 env[1826]: time="2024-02-09T19:17:27.787214695Z" level=info msg="CreateContainer within sandbox \"97a7bad4f38890bb5b256987945b6ce8d552dd94e8401086babf5d82f7921fae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:17:27.824222 env[1826]: time="2024-02-09T19:17:27.824149791Z" level=info msg="CreateContainer within sandbox \"97a7bad4f38890bb5b256987945b6ce8d552dd94e8401086babf5d82f7921fae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17db9ac9c0f7c75312a27c6fe12edfb0709329e837e5d202ca038ccde419a035\"" Feb 9 19:17:27.825201 env[1826]: time="2024-02-09T19:17:27.825148360Z" level=info msg="StartContainer for \"17db9ac9c0f7c75312a27c6fe12edfb0709329e837e5d202ca038ccde419a035\"" Feb 9 19:17:27.864313 kubelet[2487]: I0209 19:17:27.862388 2487 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-80" Feb 9 19:17:27.864313 kubelet[2487]: E0209 19:17:27.862909 2487 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80" Feb 9 19:17:27.982956 env[1826]: time="2024-02-09T19:17:27.982878335Z" level=info msg="StartContainer for \"22f04d62905e1e369dfc4a1d0a695dff3d24c76364d3c11e1b0683a339f4ac25\" returns successfully" Feb 9 19:17:27.997763 env[1826]: time="2024-02-09T19:17:27.997678594Z" level=info msg="StartContainer for \"feb8cf7704b1f9c4bf92cc50cd777759330b40b4ff00e93b327daef4cc0c5bee\" returns successfully" Feb 9 19:17:28.119573 env[1826]: time="2024-02-09T19:17:28.119431972Z" level=info msg="StartContainer for \"17db9ac9c0f7c75312a27c6fe12edfb0709329e837e5d202ca038ccde419a035\" returns successfully" Feb 9 19:17:29.465032 kubelet[2487]: I0209 19:17:29.464999 2487 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-80" Feb 9 19:17:32.291706 kubelet[2487]: I0209 19:17:32.291668 2487 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-24-80" Feb 9 19:17:32.293024 kubelet[2487]: E0209 19:17:32.292970 2487 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Feb 9 19:17:32.311013 kubelet[2487]: I0209 19:17:32.310978 2487 apiserver.go:52] "Watching apiserver" Feb 9 19:17:32.430438 kubelet[2487]: I0209 19:17:32.430395 2487 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:17:32.469993 kubelet[2487]: I0209 
19:17:32.469946 2487 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:17:35.069091 systemd[1]: Reloading. Feb 9 19:17:35.211702 /usr/lib/systemd/system-generators/torcx-generator[2815]: time="2024-02-09T19:17:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:17:35.217288 /usr/lib/systemd/system-generators/torcx-generator[2815]: time="2024-02-09T19:17:35Z" level=info msg="torcx already run" Feb 9 19:17:35.397402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:17:35.397917 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:17:35.438545 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:17:35.475377 update_engine[1807]: I0209 19:17:35.474871 1807 update_attempter.cc:509] Updating boot flags... Feb 9 19:17:35.684321 systemd[1]: Stopping kubelet.service... Feb 9 19:17:35.686000 kubelet[2487]: I0209 19:17:35.685468 2487 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:17:35.715195 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:17:35.715894 systemd[1]: Stopped kubelet.service. Feb 9 19:17:35.722709 systemd[1]: Started kubelet.service. Feb 9 19:17:35.873063 sudo[2973]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:17:35.873557 sudo[2973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:17:36.060105 kubelet[2943]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:17:36.060105 kubelet[2943]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:17:36.060105 kubelet[2943]: I0209 19:17:36.055280 2943 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:17:36.070491 kubelet[2943]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:17:36.070491 kubelet[2943]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
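The relaunched kubelet[2943] differs from 2487 in one detail visible just below: client rotation finds an existing pair at /var/lib/kubelet/pki/kubelet-client-current.pem, so it can authenticate immediately instead of posting a CSR to an apiserver that may not answer. That combined PEM can be inspected in place (a sketch):

    # Show who the kubelet authenticates as, and when the client cert expires
    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -enddate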
Feb 9 19:17:36.093936 kubelet[2943]: I0209 19:17:36.092731 2943 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:17:36.093936 kubelet[2943]: I0209 19:17:36.092776 2943 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:17:36.093936 kubelet[2943]: I0209 19:17:36.093210 2943 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:17:36.095699 kubelet[2943]: I0209 19:17:36.095652 2943 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:17:36.099616 kubelet[2943]: I0209 19:17:36.099580 2943 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:17:36.105966 kubelet[2943]: W0209 19:17:36.105921 2943 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:17:36.112450 kubelet[2943]: I0209 19:17:36.112376 2943 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:17:36.113489 kubelet[2943]: I0209 19:17:36.113453 2943 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:17:36.113624 kubelet[2943]: I0209 19:17:36.113581 2943 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:17:36.113780 kubelet[2943]: I0209 19:17:36.113626 2943 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:17:36.113780 kubelet[2943]: I0209 19:17:36.113651 2943 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:17:36.113780 kubelet[2943]: I0209 19:17:36.113695 2943 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:17:36.129007 kubelet[2943]: I0209 19:17:36.128963 2943 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:17:36.129007 kubelet[2943]: I0209 19:17:36.129007 2943 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:17:36.129235 kubelet[2943]: I0209 19:17:36.129054 2943 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:17:36.129235 kubelet[2943]: I0209 19:17:36.129081 2943 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:17:36.148252 kubelet[2943]: 
I0209 19:17:36.133271 2943 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:17:36.148252 kubelet[2943]: I0209 19:17:36.136005 2943 server.go:1186] "Started kubelet" Feb 9 19:17:36.148252 kubelet[2943]: I0209 19:17:36.145152 2943 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:17:36.148843 kubelet[2943]: I0209 19:17:36.148776 2943 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:17:36.166716 kubelet[2943]: I0209 19:17:36.166678 2943 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:17:36.240351 kubelet[2943]: I0209 19:17:36.166834 2943 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:17:36.240496 kubelet[2943]: E0209 19:17:36.182235 2943 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:17:36.240496 kubelet[2943]: E0209 19:17:36.240453 2943 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:17:36.242911 kubelet[2943]: I0209 19:17:36.242004 2943 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:17:36.408947 kubelet[2943]: E0209 19:17:36.407512 2943 container_manager_linux.go:945] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Feb 9 19:17:36.441542 kubelet[2943]: I0209 19:17:36.438449 2943 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-80" Feb 9 19:17:36.474372 kubelet[2943]: I0209 19:17:36.474328 2943 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-24-80" Feb 9 19:17:36.474703 kubelet[2943]: I0209 19:17:36.474680 2943 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-24-80" Feb 9 19:17:36.778027 kubelet[2943]: I0209 19:17:36.777975 2943 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:17:37.077094 kubelet[2943]: I0209 19:17:37.076973 2943 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:17:37.077094 kubelet[2943]: I0209 19:17:37.077013 2943 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:17:37.077094 kubelet[2943]: I0209 19:17:37.077045 2943 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:17:37.077803 kubelet[2943]: I0209 19:17:37.077285 2943 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:17:37.077803 kubelet[2943]: I0209 19:17:37.077310 2943 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:17:37.077803 kubelet[2943]: I0209 19:17:37.077325 2943 policy_none.go:49] "None policy: Start" Feb 9 19:17:37.079153 kubelet[2943]: I0209 19:17:37.078469 2943 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:17:37.079153 kubelet[2943]: I0209 19:17:37.078507 2943 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:17:37.079153 kubelet[2943]: I0209 19:17:37.078536 2943 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:17:37.079153 kubelet[2943]: E0209 19:17:37.078635 2943 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:17:37.081012 kubelet[2943]: I0209 19:17:37.079873 2943 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:17:37.081012 kubelet[2943]: I0209 19:17:37.079925 2943 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:17:37.081012 kubelet[2943]: I0209 19:17:37.080153 2943 state_mem.go:75] "Updated machine memory state" Feb 9 19:17:37.082909 kubelet[2943]: I0209 19:17:37.082867 2943 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:17:37.089375 kubelet[2943]: I0209 19:17:37.088877 2943 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:17:37.150343 kubelet[2943]: I0209 19:17:37.150283 2943 apiserver.go:52] "Watching apiserver" Feb 9 19:17:37.178966 kubelet[2943]: I0209 19:17:37.178910 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:37.179122 kubelet[2943]: I0209 19:17:37.179110 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:37.179256 kubelet[2943]: I0209 19:17:37.179201 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:37.227930 kubelet[2943]: I0209 19:17:37.227653 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-80" podStartSLOduration=0.227597238 pod.CreationTimestamp="2024-02-09 19:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:17:37.226555247 +0000 UTC m=+1.485175347" watchObservedRunningTime="2024-02-09 19:17:37.227597238 +0000 UTC m=+1.486217338" Feb 9 19:17:37.242071 kubelet[2943]: I0209 19:17:37.242038 2943 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:17:37.278177 sudo[2973]: pam_unix(sudo:session): session closed for user root Feb 9 19:17:37.279250 kubelet[2943]: I0209 19:17:37.279216 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9c647c7e3c4f500b76ee0b765b90cda-ca-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a9c647c7e3c4f500b76ee0b765b90cda\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Feb 9 19:17:37.279521 kubelet[2943]: I0209 19:17:37.279499 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:37.280140 kubelet[2943]: I0209 19:17:37.280072 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:37.280445 kubelet[2943]: I0209 19:17:37.280420 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:37.282855 kubelet[2943]: I0209 19:17:37.282666 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ab1e6f5ecb103071c59984ca62c9915-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-80\" (UID: \"9ab1e6f5ecb103071c59984ca62c9915\") " pod="kube-system/kube-scheduler-ip-172-31-24-80" Feb 9 19:17:37.286093 kubelet[2943]: I0209 19:17:37.285983 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9c647c7e3c4f500b76ee0b765b90cda-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a9c647c7e3c4f500b76ee0b765b90cda\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Feb 9 19:17:37.286579 kubelet[2943]: I0209 19:17:37.286494 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9c647c7e3c4f500b76ee0b765b90cda-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a9c647c7e3c4f500b76ee0b765b90cda\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Feb 9 19:17:37.287275 kubelet[2943]: I0209 19:17:37.287183 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:37.287705 kubelet[2943]: I0209 19:17:37.287667 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5023ed270fa59707e5b7bee91e3827fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"5023ed270fa59707e5b7bee91e3827fd\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Feb 9 19:17:37.288015 kubelet[2943]: I0209 19:17:37.287988 2943 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:17:37.738379 kubelet[2943]: I0209 19:17:37.738334 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-80" podStartSLOduration=0.73827791 pod.CreationTimestamp="2024-02-09 19:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:17:37.337674182 +0000 UTC m=+1.596294282" watchObservedRunningTime="2024-02-09 19:17:37.73827791 +0000 UTC m=+1.996897998" Feb 9 19:17:38.141568 kubelet[2943]: I0209 19:17:38.141434 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-80" podStartSLOduration=1.141360226 pod.CreationTimestamp="2024-02-09 19:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2024-02-09 19:17:37.740187115 +0000 UTC m=+1.998807191" watchObservedRunningTime="2024-02-09 19:17:38.141360226 +0000 UTC m=+2.399980290" Feb 9 19:17:39.677329 sudo[2069]: pam_unix(sudo:session): session closed for user root Feb 9 19:17:39.700338 sshd[2065]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:39.705570 systemd-logind[1804]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:17:39.706238 systemd[1]: sshd@4-172.31.24.80:22-147.75.109.163:40586.service: Deactivated successfully. Feb 9 19:17:39.707679 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:17:39.710430 systemd-logind[1804]: Removed session 5. Feb 9 19:17:43.720512 amazon-ssm-agent[1783]: 2024-02-09 19:17:43 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:17:47.336733 kubelet[2943]: I0209 19:17:47.336701 2943 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:17:47.338231 env[1826]: time="2024-02-09T19:17:47.338143140Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:17:47.338907 kubelet[2943]: I0209 19:17:47.338725 2943 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:17:48.222323 kubelet[2943]: I0209 19:17:48.222263 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:48.244503 kubelet[2943]: I0209 19:17:48.244449 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:48.298280 kubelet[2943]: I0209 19:17:48.298226 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:48.355006 kubelet[2943]: I0209 19:17:48.354945 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hostproc\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.355605 kubelet[2943]: I0209 19:17:48.355025 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-etc-cni-netd\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.355605 kubelet[2943]: I0209 19:17:48.355076 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-xtables-lock\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.355605 kubelet[2943]: I0209 19:17:48.355128 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-net\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.355605 kubelet[2943]: I0209 19:17:48.355201 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bfe0b632-73e1-4507-b176-f46b4f29e843-kube-proxy\") pod \"kube-proxy-f4gm6\" (UID: \"bfe0b632-73e1-4507-b176-f46b4f29e843\") " 
pod="kube-system/kube-proxy-f4gm6" Feb 9 19:17:48.355605 kubelet[2943]: I0209 19:17:48.355250 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe0b632-73e1-4507-b176-f46b4f29e843-xtables-lock\") pod \"kube-proxy-f4gm6\" (UID: \"bfe0b632-73e1-4507-b176-f46b4f29e843\") " pod="kube-system/kube-proxy-f4gm6" Feb 9 19:17:48.355605 kubelet[2943]: I0209 19:17:48.355298 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-lib-modules\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356008 kubelet[2943]: I0209 19:17:48.355345 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-kernel\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356008 kubelet[2943]: I0209 19:17:48.355404 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe0b632-73e1-4507-b176-f46b4f29e843-lib-modules\") pod \"kube-proxy-f4gm6\" (UID: \"bfe0b632-73e1-4507-b176-f46b4f29e843\") " pod="kube-system/kube-proxy-f4gm6" Feb 9 19:17:48.356008 kubelet[2943]: I0209 19:17:48.355452 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-run\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356008 kubelet[2943]: I0209 19:17:48.355500 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-bpf-maps\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356008 kubelet[2943]: I0209 19:17:48.355545 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-config-path\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356304 kubelet[2943]: I0209 19:17:48.355597 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a42f38ea-05c6-46cb-840e-9694b7ed74a3-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-bxqd4\" (UID: \"a42f38ea-05c6-46cb-840e-9694b7ed74a3\") " pod="kube-system/cilium-operator-f59cbd8c6-bxqd4" Feb 9 19:17:48.356304 kubelet[2943]: I0209 19:17:48.355646 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrwg\" (UniqueName: \"kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-kube-api-access-6jrwg\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356304 kubelet[2943]: I0209 19:17:48.355714 2943 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4m7z\" (UniqueName: \"kubernetes.io/projected/bfe0b632-73e1-4507-b176-f46b4f29e843-kube-api-access-k4m7z\") pod \"kube-proxy-f4gm6\" (UID: \"bfe0b632-73e1-4507-b176-f46b4f29e843\") " pod="kube-system/kube-proxy-f4gm6" Feb 9 19:17:48.356304 kubelet[2943]: I0209 19:17:48.355758 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cni-path\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356304 kubelet[2943]: I0209 19:17:48.355799 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hubble-tls\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356595 kubelet[2943]: I0209 19:17:48.355887 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-cgroup\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:48.356595 kubelet[2943]: I0209 19:17:48.355942 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z264f\" (UniqueName: \"kubernetes.io/projected/a42f38ea-05c6-46cb-840e-9694b7ed74a3-kube-api-access-z264f\") pod \"cilium-operator-f59cbd8c6-bxqd4\" (UID: \"a42f38ea-05c6-46cb-840e-9694b7ed74a3\") " pod="kube-system/cilium-operator-f59cbd8c6-bxqd4" Feb 9 19:17:48.356595 kubelet[2943]: I0209 19:17:48.355990 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-clustermesh-secrets\") pod \"cilium-cgtrd\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " pod="kube-system/cilium-cgtrd" Feb 9 19:17:49.160767 env[1826]: time="2024-02-09T19:17:49.160704821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgtrd,Uid:b34d45e3-aa3a-4a63-95d3-3e80ae3551d8,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:49.196297 env[1826]: time="2024-02-09T19:17:49.196155789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:49.196611 env[1826]: time="2024-02-09T19:17:49.196307264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:49.196611 env[1826]: time="2024-02-09T19:17:49.196344929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:49.197016 env[1826]: time="2024-02-09T19:17:49.196907909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2 pid=3250 runtime=io.containerd.runc.v2 Feb 9 19:17:49.283023 env[1826]: time="2024-02-09T19:17:49.282965896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgtrd,Uid:b34d45e3-aa3a-4a63-95d3-3e80ae3551d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\"" Feb 9 19:17:49.287245 env[1826]: time="2024-02-09T19:17:49.287182747Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:17:49.434782 env[1826]: time="2024-02-09T19:17:49.434616302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4gm6,Uid:bfe0b632-73e1-4507-b176-f46b4f29e843,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:49.461641 env[1826]: time="2024-02-09T19:17:49.461515329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:49.461945 env[1826]: time="2024-02-09T19:17:49.461596898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:49.461945 env[1826]: time="2024-02-09T19:17:49.461637755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:49.462527 env[1826]: time="2024-02-09T19:17:49.462395107Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0f32f44b3a9d49874166037d80c2d91c9689da46e38b5c2391d56d6be0ce960 pid=3292 runtime=io.containerd.runc.v2 Feb 9 19:17:49.517192 env[1826]: time="2024-02-09T19:17:49.517011587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bxqd4,Uid:a42f38ea-05c6-46cb-840e-9694b7ed74a3,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:49.530279 systemd[1]: run-containerd-runc-k8s.io-b0f32f44b3a9d49874166037d80c2d91c9689da46e38b5c2391d56d6be0ce960-runc.EVkNDW.mount: Deactivated successfully. Feb 9 19:17:49.571150 env[1826]: time="2024-02-09T19:17:49.571011859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:49.571150 env[1826]: time="2024-02-09T19:17:49.571088628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:49.571589 env[1826]: time="2024-02-09T19:17:49.571115962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:49.573003 env[1826]: time="2024-02-09T19:17:49.572811850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14 pid=3326 runtime=io.containerd.runc.v2 Feb 9 19:17:49.616299 env[1826]: time="2024-02-09T19:17:49.616237908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4gm6,Uid:bfe0b632-73e1-4507-b176-f46b4f29e843,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0f32f44b3a9d49874166037d80c2d91c9689da46e38b5c2391d56d6be0ce960\"" Feb 9 19:17:49.626788 env[1826]: time="2024-02-09T19:17:49.626721614Z" level=info msg="CreateContainer within sandbox \"b0f32f44b3a9d49874166037d80c2d91c9689da46e38b5c2391d56d6be0ce960\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:17:49.656950 env[1826]: time="2024-02-09T19:17:49.656885409Z" level=info msg="CreateContainer within sandbox \"b0f32f44b3a9d49874166037d80c2d91c9689da46e38b5c2391d56d6be0ce960\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9bc51a3db1165d7f8ed921e6fcb0e30af54af703ff49ba3beb142efa1281b9a7\"" Feb 9 19:17:49.660576 env[1826]: time="2024-02-09T19:17:49.660508767Z" level=info msg="StartContainer for \"9bc51a3db1165d7f8ed921e6fcb0e30af54af703ff49ba3beb142efa1281b9a7\"" Feb 9 19:17:49.732129 env[1826]: time="2024-02-09T19:17:49.732069326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bxqd4,Uid:a42f38ea-05c6-46cb-840e-9694b7ed74a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\"" Feb 9 19:17:49.827885 env[1826]: time="2024-02-09T19:17:49.826002619Z" level=info msg="StartContainer for \"9bc51a3db1165d7f8ed921e6fcb0e30af54af703ff49ba3beb142efa1281b9a7\" returns successfully" Feb 9 19:17:50.523587 kubelet[2943]: I0209 19:17:50.522978 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f4gm6" podStartSLOduration=2.522889031 pod.CreationTimestamp="2024-02-09 19:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:17:50.522409814 +0000 UTC m=+14.781029902" watchObservedRunningTime="2024-02-09 19:17:50.522889031 +0000 UTC m=+14.781509095" Feb 9 19:17:56.308453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854825067.mount: Deactivated successfully. 
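The sandbox and container records above are the CRI exchange between the kubelet and containerd: RunPodSandbox returns the 64-hex sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer completes as the "returns successfully" lines. A minimal Go sketch of the same three calls against the CRI socket — illustrative only: the pod/container names and image are placeholders, the socket path is the containerd default, and the image must already have been pulled through the image service (which is what the PullImage records here are doing):

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (default socket path; assumption, not from this log).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "example", Uid: "11111111-2222-3333-4444-555555555555",
			Namespace: "kube-system",
		},
	}
	// 1. RunPodSandbox -> "returns sandbox id ..." in the log above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	// 2. CreateContainer within that sandbox -> "returns container id ...".
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "example"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.9"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	// 3. StartContainer -> "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s running", sb.PodSandboxId, cc.ContainerId)
}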
Feb 9 19:18:00.284573 env[1826]: time="2024-02-09T19:18:00.284491236Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:00.287623 env[1826]: time="2024-02-09T19:18:00.287562658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:00.290672 env[1826]: time="2024-02-09T19:18:00.290609877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:00.293888 env[1826]: time="2024-02-09T19:18:00.292735118Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 19:18:00.296189 env[1826]: time="2024-02-09T19:18:00.296133052Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:18:00.300729 env[1826]: time="2024-02-09T19:18:00.300622249Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:18:00.321150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312107121.mount: Deactivated successfully. Feb 9 19:18:00.332057 env[1826]: time="2024-02-09T19:18:00.331971250Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\"" Feb 9 19:18:00.335290 env[1826]: time="2024-02-09T19:18:00.335174099Z" level=info msg="StartContainer for \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\"" Feb 9 19:18:00.451727 env[1826]: time="2024-02-09T19:18:00.451653227Z" level=info msg="StartContainer for \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\" returns successfully" Feb 9 19:18:00.754436 env[1826]: time="2024-02-09T19:18:00.754336672Z" level=info msg="shim disconnected" id=a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114 Feb 9 19:18:00.754705 env[1826]: time="2024-02-09T19:18:00.754460937Z" level=warning msg="cleaning up after shim disconnected" id=a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114 namespace=k8s.io Feb 9 19:18:00.754705 env[1826]: time="2024-02-09T19:18:00.754485872Z" level=info msg="cleaning up dead shim" Feb 9 19:18:00.770987 env[1826]: time="2024-02-09T19:18:00.770921151Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3555 runtime=io.containerd.runc.v2\n" Feb 9 19:18:01.193982 env[1826]: time="2024-02-09T19:18:01.193914881Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:18:01.231009 env[1826]: time="2024-02-09T19:18:01.230771694Z" level=info 
msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\"" Feb 9 19:18:01.234799 env[1826]: time="2024-02-09T19:18:01.234535072Z" level=info msg="StartContainer for \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\"" Feb 9 19:18:01.320982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114-rootfs.mount: Deactivated successfully. Feb 9 19:18:01.360875 env[1826]: time="2024-02-09T19:18:01.358593293Z" level=info msg="StartContainer for \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\" returns successfully" Feb 9 19:18:01.371734 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:18:01.372498 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:18:01.377997 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:18:01.381679 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:18:01.394763 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:18:01.414142 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:18:01.452182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c-rootfs.mount: Deactivated successfully. Feb 9 19:18:01.470015 env[1826]: time="2024-02-09T19:18:01.469953658Z" level=info msg="shim disconnected" id=5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c Feb 9 19:18:01.470333 env[1826]: time="2024-02-09T19:18:01.470301613Z" level=warning msg="cleaning up after shim disconnected" id=5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c namespace=k8s.io Feb 9 19:18:01.470448 env[1826]: time="2024-02-09T19:18:01.470421114Z" level=info msg="cleaning up dead shim" Feb 9 19:18:01.486652 env[1826]: time="2024-02-09T19:18:01.486594257Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3622 runtime=io.containerd.runc.v2\n" Feb 9 19:18:02.224186 env[1826]: time="2024-02-09T19:18:02.224132362Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:18:02.246744 env[1826]: time="2024-02-09T19:18:02.246681908Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\"" Feb 9 19:18:02.251352 env[1826]: time="2024-02-09T19:18:02.251274745Z" level=info msg="StartContainer for \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\"" Feb 9 19:18:02.320381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242227798.mount: Deactivated successfully. Feb 9 19:18:02.389077 env[1826]: time="2024-02-09T19:18:02.389015752Z" level=info msg="StartContainer for \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\" returns successfully" Feb 9 19:18:02.445331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3-rootfs.mount: Deactivated successfully. 
Feb 9 19:18:02.554687 env[1826]: time="2024-02-09T19:18:02.554519195Z" level=info msg="shim disconnected" id=a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3 Feb 9 19:18:02.554687 env[1826]: time="2024-02-09T19:18:02.554588251Z" level=warning msg="cleaning up after shim disconnected" id=a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3 namespace=k8s.io Feb 9 19:18:02.554687 env[1826]: time="2024-02-09T19:18:02.554612478Z" level=info msg="cleaning up dead shim" Feb 9 19:18:02.584517 env[1826]: time="2024-02-09T19:18:02.584432869Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3680 runtime=io.containerd.runc.v2\n" Feb 9 19:18:02.769229 env[1826]: time="2024-02-09T19:18:02.769152062Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:02.773913 env[1826]: time="2024-02-09T19:18:02.773854920Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:02.776667 env[1826]: time="2024-02-09T19:18:02.776617752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:18:02.777615 env[1826]: time="2024-02-09T19:18:02.777569682Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 19:18:02.783998 env[1826]: time="2024-02-09T19:18:02.783936102Z" level=info msg="CreateContainer within sandbox \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:18:02.804551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227358344.mount: Deactivated successfully. 
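The ImageCreate/ImageUpdate events and the "PullImage ... returns image reference" lines above show a pull by tag-plus-digest resolving to the image's config digest (the sha256:5935... reference). A hedged Go sketch of the same pull through the containerd client, against the "k8s.io" namespace that CRI-managed images live in (socket path again assumed to be the containerd default):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Tag@digest form: the digest pins the manifest, as in the PullImage records above.
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// The config descriptor's digest is the "image reference" the CRI log line reports.
	cfg, err := img.Config(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), "->", cfg.Digest)
}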
Feb 9 19:18:02.816810 env[1826]: time="2024-02-09T19:18:02.816749328Z" level=info msg="CreateContainer within sandbox \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\"" Feb 9 19:18:02.820284 env[1826]: time="2024-02-09T19:18:02.820195041Z" level=info msg="StartContainer for \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\"" Feb 9 19:18:02.912005 env[1826]: time="2024-02-09T19:18:02.911937213Z" level=info msg="StartContainer for \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\" returns successfully" Feb 9 19:18:03.213866 env[1826]: time="2024-02-09T19:18:03.209710441Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:18:03.243246 env[1826]: time="2024-02-09T19:18:03.243066297Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\"" Feb 9 19:18:03.245624 env[1826]: time="2024-02-09T19:18:03.244289521Z" level=info msg="StartContainer for \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\"" Feb 9 19:18:03.304346 kubelet[2943]: I0209 19:18:03.304301 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-bxqd4" podStartSLOduration=-9.22337202155053e+09 pod.CreationTimestamp="2024-02-09 19:17:48 +0000 UTC" firstStartedPulling="2024-02-09 19:17:49.760624753 +0000 UTC m=+14.019244805" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:03.23043013 +0000 UTC m=+27.489050206" watchObservedRunningTime="2024-02-09 19:18:03.304245461 +0000 UTC m=+27.562865549" Feb 9 19:18:03.432302 env[1826]: time="2024-02-09T19:18:03.432207588Z" level=info msg="StartContainer for \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\" returns successfully" Feb 9 19:18:03.525111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2-rootfs.mount: Deactivated successfully. 
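The podStartSLOduration fields in the pod_startup_latency_tracker records are essentially the gap between a pod's creation time and its observed running time, and the absurd -9.22337202...e+09 second values show up whenever the zero timestamp 0001-01-01 00:00:00 +0000 UTC enters that subtraction: a ~2000-year gap overflows int64 nanoseconds, and Go's time.Time.Sub saturates at the minimum Duration. A stdlib-only sketch of both cases, with timestamps copied from the records above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2024-02-09 19:17:48 +0000 UTC" stamps in these records.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2024-02-09 19:17:48 +0000 UTC")
	running, _ := time.Parse(layout, "2024-02-09 19:17:50.522409814 +0000 UTC")

	// Normal case: observed running time minus creation time, in seconds.
	fmt.Println(running.Sub(created).Seconds()) // 2.522409814

	// Degenerate case: the zero time is ~2023 years away, far beyond the
	// ~292-year range of a Duration, so Sub saturates near math.MinInt64 ns —
	// hence the ≈ -9.22e+09 podStartSLOduration values in the log.
	var zero time.Time
	fmt.Println(zero.Sub(running).Seconds()) // ≈ -9.223372036854776e+09
}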
Feb 9 19:18:03.565422 env[1826]: time="2024-02-09T19:18:03.565359416Z" level=info msg="shim disconnected" id=53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2 Feb 9 19:18:03.565755 env[1826]: time="2024-02-09T19:18:03.565717441Z" level=warning msg="cleaning up after shim disconnected" id=53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2 namespace=k8s.io Feb 9 19:18:03.565904 env[1826]: time="2024-02-09T19:18:03.565875112Z" level=info msg="cleaning up dead shim" Feb 9 19:18:03.605931 env[1826]: time="2024-02-09T19:18:03.605873117Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3773 runtime=io.containerd.runc.v2\n" Feb 9 19:18:04.217332 env[1826]: time="2024-02-09T19:18:04.217086994Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:18:04.254370 env[1826]: time="2024-02-09T19:18:04.254290808Z" level=info msg="CreateContainer within sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\"" Feb 9 19:18:04.255730 env[1826]: time="2024-02-09T19:18:04.255610268Z" level=info msg="StartContainer for \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\"" Feb 9 19:18:04.346733 systemd[1]: run-containerd-runc-k8s.io-6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c-runc.aO7TKi.mount: Deactivated successfully. Feb 9 19:18:04.496324 env[1826]: time="2024-02-09T19:18:04.496154203Z" level=info msg="StartContainer for \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\" returns successfully" Feb 9 19:18:04.556269 systemd[1]: run-containerd-runc-k8s.io-6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c-runc.U36Flv.mount: Deactivated successfully. Feb 9 19:18:04.806879 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
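The repeated kernel WARNING above is the Spectre v2 BHB mitigation noting that unprivileged eBPF is still allowed (kernel.unprivileged_bpf_disabled=0) while cilium is loading BPF programs. A quick Go check of that knob — value semantics per the kernel sysctl documentation, stated here as an assumption rather than taken from this log:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// kernel.unprivileged_bpf_disabled: 0 = unprivileged eBPF allowed (what the
	// warning is about), 1 = disabled and locked until reboot, 2 = disabled but
	// re-enableable by a privileged writer.
	raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		log.Fatal(err)
	}
	switch v := strings.TrimSpace(string(raw)); v {
	case "0":
		fmt.Println("unprivileged eBPF enabled; Spectre v2 BHB data leaks possible")
	case "1":
		fmt.Println("unprivileged eBPF disabled (locked until reboot)")
	case "2":
		fmt.Println("unprivileged eBPF disabled (can be re-enabled by root)")
	default:
		fmt.Println("unexpected value:", v)
	}
}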
Feb 9 19:18:04.936900 kubelet[2943]: I0209 19:18:04.936848 2943 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:18:04.990841 kubelet[2943]: I0209 19:18:04.990750 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:18:04.994879 kubelet[2943]: I0209 19:18:04.994269 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:18:05.108741 kubelet[2943]: I0209 19:18:05.108682 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee16af1-055b-409e-9668-1555bd6088b8-config-volume\") pod \"coredns-787d4945fb-42pc7\" (UID: \"bee16af1-055b-409e-9668-1555bd6088b8\") " pod="kube-system/coredns-787d4945fb-42pc7" Feb 9 19:18:05.108958 kubelet[2943]: I0209 19:18:05.108764 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f4qj\" (UniqueName: \"kubernetes.io/projected/6f03d1d1-fac5-4f33-87dc-24aeb2c83710-kube-api-access-7f4qj\") pod \"coredns-787d4945fb-jxdd6\" (UID: \"6f03d1d1-fac5-4f33-87dc-24aeb2c83710\") " pod="kube-system/coredns-787d4945fb-jxdd6" Feb 9 19:18:05.108958 kubelet[2943]: I0209 19:18:05.108866 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpnvz\" (UniqueName: \"kubernetes.io/projected/bee16af1-055b-409e-9668-1555bd6088b8-kube-api-access-dpnvz\") pod \"coredns-787d4945fb-42pc7\" (UID: \"bee16af1-055b-409e-9668-1555bd6088b8\") " pod="kube-system/coredns-787d4945fb-42pc7" Feb 9 19:18:05.108958 kubelet[2943]: I0209 19:18:05.108921 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f03d1d1-fac5-4f33-87dc-24aeb2c83710-config-volume\") pod \"coredns-787d4945fb-jxdd6\" (UID: \"6f03d1d1-fac5-4f33-87dc-24aeb2c83710\") " pod="kube-system/coredns-787d4945fb-jxdd6" Feb 9 19:18:05.281538 kubelet[2943]: I0209 19:18:05.281481 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cgtrd" podStartSLOduration=-9.223372019573353e+09 pod.CreationTimestamp="2024-02-09 19:17:48 +0000 UTC" firstStartedPulling="2024-02-09 19:17:49.285252854 +0000 UTC m=+13.543872906" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:05.281087769 +0000 UTC m=+29.539707845" watchObservedRunningTime="2024-02-09 19:18:05.281423751 +0000 UTC m=+29.540043827" Feb 9 19:18:05.305219 env[1826]: time="2024-02-09T19:18:05.305119393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jxdd6,Uid:6f03d1d1-fac5-4f33-87dc-24aeb2c83710,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:05.316677 env[1826]: time="2024-02-09T19:18:05.316603027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-42pc7,Uid:bee16af1-055b-409e-9668-1555bd6088b8,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:05.893863 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 19:18:07.726373 (udev-worker)[3897]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:18:07.726509 (udev-worker)[3898]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:18:07.728745 systemd-networkd[1600]: cilium_host: Link UP Feb 9 19:18:07.736047 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:18:07.736215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:18:07.738078 systemd-networkd[1600]: cilium_net: Link UP Feb 9 19:18:07.738509 systemd-networkd[1600]: cilium_net: Gained carrier Feb 9 19:18:07.738866 systemd-networkd[1600]: cilium_host: Gained carrier Feb 9 19:18:07.835589 systemd-networkd[1600]: cilium_net: Gained IPv6LL Feb 9 19:18:07.898696 systemd-networkd[1600]: cilium_vxlan: Link UP Feb 9 19:18:07.898710 systemd-networkd[1600]: cilium_vxlan: Gained carrier Feb 9 19:18:08.373859 kernel: NET: Registered PF_ALG protocol family Feb 9 19:18:08.395107 systemd-networkd[1600]: cilium_host: Gained IPv6LL Feb 9 19:18:09.355065 systemd-networkd[1600]: cilium_vxlan: Gained IPv6LL Feb 9 19:18:09.691993 systemd-networkd[1600]: lxc_health: Link UP Feb 9 19:18:09.701627 systemd-networkd[1600]: lxc_health: Gained carrier Feb 9 19:18:09.701869 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:18:10.444618 systemd-networkd[1600]: lxc479528064988: Link UP Feb 9 19:18:10.476766 systemd-networkd[1600]: lxc60f1a63c6fec: Link UP Feb 9 19:18:10.487977 kernel: eth0: renamed from tmpbdda4 Feb 9 19:18:10.496565 kernel: eth0: renamed from tmpd758b Feb 9 19:18:10.514835 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc479528064988: link becomes ready Feb 9 19:18:10.514984 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc60f1a63c6fec: link becomes ready Feb 9 19:18:10.515225 systemd-networkd[1600]: lxc479528064988: Gained carrier Feb 9 19:18:10.515621 systemd-networkd[1600]: lxc60f1a63c6fec: Gained carrier Feb 9 19:18:10.527360 (udev-worker)[3941]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:18:10.827604 systemd-networkd[1600]: lxc_health: Gained IPv6LL Feb 9 19:18:11.915611 systemd-networkd[1600]: lxc60f1a63c6fec: Gained IPv6LL Feb 9 19:18:12.299652 systemd-networkd[1600]: lxc479528064988: Gained IPv6LL Feb 9 19:18:18.833837 env[1826]: time="2024-02-09T19:18:18.833341419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:18.833837 env[1826]: time="2024-02-09T19:18:18.833415216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:18.833837 env[1826]: time="2024-02-09T19:18:18.833440835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:18.835939 env[1826]: time="2024-02-09T19:18:18.834783734Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdda4d1e2fbf53c85a8f9ec6a1c8ee60aeaec40b9cdf906395c5b146ccbc86cf pid=4306 runtime=io.containerd.runc.v2 Feb 9 19:18:18.909252 systemd[1]: run-containerd-runc-k8s.io-bdda4d1e2fbf53c85a8f9ec6a1c8ee60aeaec40b9cdf906395c5b146ccbc86cf-runc.p3YQPD.mount: Deactivated successfully. Feb 9 19:18:18.950873 env[1826]: time="2024-02-09T19:18:18.943515197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:18.950873 env[1826]: time="2024-02-09T19:18:18.943604077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:18.950873 env[1826]: time="2024-02-09T19:18:18.943631364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:18.950873 env[1826]: time="2024-02-09T19:18:18.943930139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d758b5359fe44c35c47d80bddce783a431fd089200350c58f5000113cf500886 pid=4334 runtime=io.containerd.runc.v2 Feb 9 19:18:19.066255 env[1826]: time="2024-02-09T19:18:19.066191343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-42pc7,Uid:bee16af1-055b-409e-9668-1555bd6088b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdda4d1e2fbf53c85a8f9ec6a1c8ee60aeaec40b9cdf906395c5b146ccbc86cf\"" Feb 9 19:18:19.077036 env[1826]: time="2024-02-09T19:18:19.076200635Z" level=info msg="CreateContainer within sandbox \"bdda4d1e2fbf53c85a8f9ec6a1c8ee60aeaec40b9cdf906395c5b146ccbc86cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:18:19.134348 env[1826]: time="2024-02-09T19:18:19.134259986Z" level=info msg="CreateContainer within sandbox \"bdda4d1e2fbf53c85a8f9ec6a1c8ee60aeaec40b9cdf906395c5b146ccbc86cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba2671401c214db2da997884be6643e84fa9b394d6183f0b209adfe0b9143919\"" Feb 9 19:18:19.135490 env[1826]: time="2024-02-09T19:18:19.135412190Z" level=info msg="StartContainer for \"ba2671401c214db2da997884be6643e84fa9b394d6183f0b209adfe0b9143919\"" Feb 9 19:18:19.216558 env[1826]: time="2024-02-09T19:18:19.211275967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jxdd6,Uid:6f03d1d1-fac5-4f33-87dc-24aeb2c83710,Namespace:kube-system,Attempt:0,} returns sandbox id \"d758b5359fe44c35c47d80bddce783a431fd089200350c58f5000113cf500886\"" Feb 9 19:18:19.242711 env[1826]: time="2024-02-09T19:18:19.236271978Z" level=info msg="CreateContainer within sandbox \"d758b5359fe44c35c47d80bddce783a431fd089200350c58f5000113cf500886\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:18:19.295335 env[1826]: time="2024-02-09T19:18:19.295255146Z" level=info msg="CreateContainer within sandbox \"d758b5359fe44c35c47d80bddce783a431fd089200350c58f5000113cf500886\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6ec7124472c09fb6076b03bc5d06bea8718ee483ea45ccaf09078fed1131912\"" Feb 9 19:18:19.297295 env[1826]: time="2024-02-09T19:18:19.297239838Z" level=info msg="StartContainer for \"b6ec7124472c09fb6076b03bc5d06bea8718ee483ea45ccaf09078fed1131912\"" Feb 9 19:18:19.353535 env[1826]: time="2024-02-09T19:18:19.353472014Z" level=info msg="StartContainer for \"ba2671401c214db2da997884be6643e84fa9b394d6183f0b209adfe0b9143919\" returns successfully" Feb 9 19:18:19.452393 env[1826]: time="2024-02-09T19:18:19.452302136Z" level=info msg="StartContainer for \"b6ec7124472c09fb6076b03bc5d06bea8718ee483ea45ccaf09078fed1131912\" returns successfully" Feb 9 19:18:20.320909 kubelet[2943]: I0209 19:18:20.320833 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-42pc7" podStartSLOduration=32.320762982 pod.CreationTimestamp="2024-02-09 19:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:20.319627805 +0000 UTC m=+44.578247881" watchObservedRunningTime="2024-02-09 
19:18:20.320762982 +0000 UTC m=+44.579383070" Feb 9 19:18:20.369761 kubelet[2943]: I0209 19:18:20.369706 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jxdd6" podStartSLOduration=32.369623892 pod.CreationTimestamp="2024-02-09 19:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:20.364910021 +0000 UTC m=+44.623530097" watchObservedRunningTime="2024-02-09 19:18:20.369623892 +0000 UTC m=+44.628243968" Feb 9 19:18:36.551005 systemd[1]: Started sshd@5-172.31.24.80:22-147.75.109.163:41286.service. Feb 9 19:18:36.719250 sshd[4511]: Accepted publickey for core from 147.75.109.163 port 41286 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:36.721731 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:36.730116 systemd-logind[1804]: New session 6 of user core. Feb 9 19:18:36.731074 systemd[1]: Started session-6.scope. Feb 9 19:18:37.013175 sshd[4511]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:37.020667 systemd[1]: sshd@5-172.31.24.80:22-147.75.109.163:41286.service: Deactivated successfully. Feb 9 19:18:37.022269 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:18:37.023800 systemd-logind[1804]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:18:37.026454 systemd-logind[1804]: Removed session 6. Feb 9 19:18:42.038893 systemd[1]: Started sshd@6-172.31.24.80:22-147.75.109.163:41298.service. Feb 9 19:18:42.205298 sshd[4527]: Accepted publickey for core from 147.75.109.163 port 41298 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:42.207740 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:42.216119 systemd-logind[1804]: New session 7 of user core. Feb 9 19:18:42.217166 systemd[1]: Started session-7.scope. Feb 9 19:18:42.462193 sshd[4527]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:42.467500 systemd[1]: sshd@6-172.31.24.80:22-147.75.109.163:41298.service: Deactivated successfully. Feb 9 19:18:42.470307 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:18:42.471148 systemd-logind[1804]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:18:42.473279 systemd-logind[1804]: Removed session 7. Feb 9 19:18:47.489939 systemd[1]: Started sshd@7-172.31.24.80:22-147.75.109.163:50104.service. Feb 9 19:18:47.656704 sshd[4543]: Accepted publickey for core from 147.75.109.163 port 50104 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:47.659275 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:47.668840 systemd-logind[1804]: New session 8 of user core. Feb 9 19:18:47.669734 systemd[1]: Started session-8.scope. Feb 9 19:18:47.922429 sshd[4543]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:47.928038 systemd-logind[1804]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:18:47.929583 systemd[1]: sshd@7-172.31.24.80:22-147.75.109.163:50104.service: Deactivated successfully. Feb 9 19:18:47.932080 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:18:47.935657 systemd-logind[1804]: Removed session 8. Feb 9 19:18:52.950335 systemd[1]: Started sshd@8-172.31.24.80:22-147.75.109.163:50116.service. 
Feb 9 19:18:53.122656 sshd[4558]: Accepted publickey for core from 147.75.109.163 port 50116 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:53.125879 sshd[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:53.134711 systemd[1]: Started session-9.scope. Feb 9 19:18:53.135446 systemd-logind[1804]: New session 9 of user core. Feb 9 19:18:53.382724 sshd[4558]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:53.387791 systemd-logind[1804]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:18:53.388807 systemd[1]: sshd@8-172.31.24.80:22-147.75.109.163:50116.service: Deactivated successfully. Feb 9 19:18:53.391524 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:18:53.394637 systemd-logind[1804]: Removed session 9. Feb 9 19:18:58.410506 systemd[1]: Started sshd@9-172.31.24.80:22-147.75.109.163:48992.service. Feb 9 19:18:58.579520 sshd[4572]: Accepted publickey for core from 147.75.109.163 port 48992 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:58.582077 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:58.590519 systemd[1]: Started session-10.scope. Feb 9 19:18:58.591277 systemd-logind[1804]: New session 10 of user core. Feb 9 19:18:58.841053 sshd[4572]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:58.847004 systemd[1]: sshd@9-172.31.24.80:22-147.75.109.163:48992.service: Deactivated successfully. Feb 9 19:18:58.849132 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:18:58.849436 systemd-logind[1804]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:18:58.852464 systemd-logind[1804]: Removed session 10. Feb 9 19:18:58.869613 systemd[1]: Started sshd@10-172.31.24.80:22-147.75.109.163:49002.service. Feb 9 19:18:59.045687 sshd[4585]: Accepted publickey for core from 147.75.109.163 port 49002 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:59.048207 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:59.056927 systemd-logind[1804]: New session 11 of user core. Feb 9 19:18:59.057156 systemd[1]: Started session-11.scope. Feb 9 19:19:00.812714 sshd[4585]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:00.818257 systemd[1]: sshd@10-172.31.24.80:22-147.75.109.163:49002.service: Deactivated successfully. Feb 9 19:19:00.820633 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:19:00.821666 systemd-logind[1804]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:19:00.823545 systemd-logind[1804]: Removed session 11. Feb 9 19:19:00.837692 systemd[1]: Started sshd@11-172.31.24.80:22-147.75.109.163:49016.service. Feb 9 19:19:01.011704 sshd[4596]: Accepted publickey for core from 147.75.109.163 port 49016 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:01.014432 sshd[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:01.023689 systemd[1]: Started session-12.scope. Feb 9 19:19:01.024495 systemd-logind[1804]: New session 12 of user core. Feb 9 19:19:01.270174 sshd[4596]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:01.275284 systemd-logind[1804]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:19:01.276082 systemd[1]: sshd@11-172.31.24.80:22-147.75.109.163:49016.service: Deactivated successfully. 
Feb 9 19:19:01.277614 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:19:01.280322 systemd-logind[1804]: Removed session 12. Feb 9 19:19:06.296782 systemd[1]: Started sshd@12-172.31.24.80:22-147.75.109.163:59092.service. Feb 9 19:19:06.470075 sshd[4609]: Accepted publickey for core from 147.75.109.163 port 59092 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:06.472500 sshd[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:06.480651 systemd-logind[1804]: New session 13 of user core. Feb 9 19:19:06.481642 systemd[1]: Started session-13.scope. Feb 9 19:19:06.724183 sshd[4609]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:06.729371 systemd[1]: sshd@12-172.31.24.80:22-147.75.109.163:59092.service: Deactivated successfully. Feb 9 19:19:06.730917 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:19:06.733306 systemd-logind[1804]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:19:06.735369 systemd-logind[1804]: Removed session 13. Feb 9 19:19:11.750764 systemd[1]: Started sshd@13-172.31.24.80:22-147.75.109.163:59106.service. Feb 9 19:19:11.923908 sshd[4623]: Accepted publickey for core from 147.75.109.163 port 59106 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:11.927080 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:11.934911 systemd-logind[1804]: New session 14 of user core. Feb 9 19:19:11.936225 systemd[1]: Started session-14.scope. Feb 9 19:19:12.201353 sshd[4623]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:12.206156 systemd[1]: sshd@13-172.31.24.80:22-147.75.109.163:59106.service: Deactivated successfully. Feb 9 19:19:12.207729 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:19:12.210357 systemd-logind[1804]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:19:12.212990 systemd-logind[1804]: Removed session 14. Feb 9 19:19:17.228105 systemd[1]: Started sshd@14-172.31.24.80:22-147.75.109.163:47368.service. Feb 9 19:19:17.402684 sshd[4636]: Accepted publickey for core from 147.75.109.163 port 47368 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:17.405265 sshd[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:17.414890 systemd[1]: Started session-15.scope. Feb 9 19:19:17.414975 systemd-logind[1804]: New session 15 of user core. Feb 9 19:19:17.694726 sshd[4636]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:17.700008 systemd[1]: sshd@14-172.31.24.80:22-147.75.109.163:47368.service: Deactivated successfully. Feb 9 19:19:17.702274 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:19:17.702302 systemd-logind[1804]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:19:17.704285 systemd-logind[1804]: Removed session 15. Feb 9 19:19:17.720895 systemd[1]: Started sshd@15-172.31.24.80:22-147.75.109.163:47384.service. Feb 9 19:19:17.892462 sshd[4649]: Accepted publickey for core from 147.75.109.163 port 47384 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:17.895492 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:17.903504 systemd-logind[1804]: New session 16 of user core. Feb 9 19:19:17.904505 systemd[1]: Started session-16.scope. 
Feb 9 19:19:18.210406 sshd[4649]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:18.215952 systemd-logind[1804]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:19:18.217159 systemd[1]: sshd@15-172.31.24.80:22-147.75.109.163:47384.service: Deactivated successfully. Feb 9 19:19:18.218700 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:19:18.222371 systemd-logind[1804]: Removed session 16. Feb 9 19:19:18.237744 systemd[1]: Started sshd@16-172.31.24.80:22-147.75.109.163:47394.service. Feb 9 19:19:18.412477 sshd[4659]: Accepted publickey for core from 147.75.109.163 port 47394 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:18.414116 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:18.421778 systemd-logind[1804]: New session 17 of user core. Feb 9 19:19:18.423563 systemd[1]: Started session-17.scope. Feb 9 19:19:19.757742 sshd[4659]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:19.764230 systemd-logind[1804]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:19:19.764655 systemd[1]: sshd@16-172.31.24.80:22-147.75.109.163:47394.service: Deactivated successfully. Feb 9 19:19:19.766397 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:19:19.767684 systemd-logind[1804]: Removed session 17. Feb 9 19:19:19.784527 systemd[1]: Started sshd@17-172.31.24.80:22-147.75.109.163:47400.service. Feb 9 19:19:19.969178 sshd[4676]: Accepted publickey for core from 147.75.109.163 port 47400 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:19.971995 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:19.980728 systemd-logind[1804]: New session 18 of user core. Feb 9 19:19:19.981741 systemd[1]: Started session-18.scope. Feb 9 19:19:20.416132 sshd[4676]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:20.421114 systemd[1]: sshd@17-172.31.24.80:22-147.75.109.163:47400.service: Deactivated successfully. Feb 9 19:19:20.423530 systemd-logind[1804]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:19:20.423683 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:19:20.426618 systemd-logind[1804]: Removed session 18. Feb 9 19:19:20.440376 systemd[1]: Started sshd@18-172.31.24.80:22-147.75.109.163:47402.service. Feb 9 19:19:20.617362 sshd[4739]: Accepted publickey for core from 147.75.109.163 port 47402 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:20.620557 sshd[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:20.629960 systemd[1]: Started session-19.scope. Feb 9 19:19:20.630957 systemd-logind[1804]: New session 19 of user core. Feb 9 19:19:20.887144 sshd[4739]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:20.892737 systemd[1]: sshd@18-172.31.24.80:22-147.75.109.163:47402.service: Deactivated successfully. Feb 9 19:19:20.894212 systemd-logind[1804]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:19:20.896311 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:19:20.899182 systemd-logind[1804]: Removed session 19. Feb 9 19:19:25.913011 systemd[1]: Started sshd@19-172.31.24.80:22-147.75.109.163:57082.service. 
Feb 9 19:19:26.078709 sshd[4752]: Accepted publickey for core from 147.75.109.163 port 57082 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:26.082555 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:26.090712 systemd-logind[1804]: New session 20 of user core. Feb 9 19:19:26.091192 systemd[1]: Started session-20.scope. Feb 9 19:19:26.339300 sshd[4752]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:26.343907 systemd-logind[1804]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:19:26.345388 systemd[1]: sshd@19-172.31.24.80:22-147.75.109.163:57082.service: Deactivated successfully. Feb 9 19:19:26.347589 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:19:26.349984 systemd-logind[1804]: Removed session 20. Feb 9 19:19:31.366853 systemd[1]: Started sshd@20-172.31.24.80:22-147.75.109.163:57090.service. Feb 9 19:19:31.543749 sshd[4792]: Accepted publickey for core from 147.75.109.163 port 57090 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:31.546389 sshd[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:31.555740 systemd[1]: Started session-21.scope. Feb 9 19:19:31.556923 systemd-logind[1804]: New session 21 of user core. Feb 9 19:19:31.804460 sshd[4792]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:31.809480 systemd-logind[1804]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:19:31.810103 systemd[1]: sshd@20-172.31.24.80:22-147.75.109.163:57090.service: Deactivated successfully. Feb 9 19:19:31.811696 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:19:31.813753 systemd-logind[1804]: Removed session 21. Feb 9 19:19:36.829164 systemd[1]: Started sshd@21-172.31.24.80:22-147.75.109.163:53012.service. Feb 9 19:19:37.002263 sshd[4805]: Accepted publickey for core from 147.75.109.163 port 53012 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:37.004808 sshd[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:37.013926 systemd-logind[1804]: New session 22 of user core. Feb 9 19:19:37.013987 systemd[1]: Started session-22.scope. Feb 9 19:19:37.271666 sshd[4805]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:37.277355 systemd-logind[1804]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:19:37.278023 systemd[1]: sshd@21-172.31.24.80:22-147.75.109.163:53012.service: Deactivated successfully. Feb 9 19:19:37.281170 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:19:37.283417 systemd-logind[1804]: Removed session 22. Feb 9 19:19:42.297992 systemd[1]: Started sshd@22-172.31.24.80:22-147.75.109.163:53028.service. Feb 9 19:19:42.467549 sshd[4820]: Accepted publickey for core from 147.75.109.163 port 53028 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:42.470085 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:42.479533 systemd[1]: Started session-23.scope. Feb 9 19:19:42.480240 systemd-logind[1804]: New session 23 of user core. Feb 9 19:19:42.718531 sshd[4820]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:42.723690 systemd-logind[1804]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:19:42.724746 systemd[1]: sshd@22-172.31.24.80:22-147.75.109.163:53028.service: Deactivated successfully. 
Feb 9 19:19:42.726959 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:19:42.728394 systemd-logind[1804]: Removed session 23. Feb 9 19:19:42.744898 systemd[1]: Started sshd@23-172.31.24.80:22-147.75.109.163:53044.service. Feb 9 19:19:42.911504 sshd[4833]: Accepted publickey for core from 147.75.109.163 port 53044 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:42.914649 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:42.923339 systemd[1]: Started session-24.scope. Feb 9 19:19:42.924686 systemd-logind[1804]: New session 24 of user core. Feb 9 19:19:43.882972 amazon-ssm-agent[1783]: 2024-02-09 19:19:43 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:19:45.737314 env[1826]: time="2024-02-09T19:19:45.735660643Z" level=info msg="StopContainer for \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\" with timeout 30 (s)" Feb 9 19:19:45.738947 env[1826]: time="2024-02-09T19:19:45.738886087Z" level=info msg="Stop container \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\" with signal terminated" Feb 9 19:19:45.796050 env[1826]: time="2024-02-09T19:19:45.795971453Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:19:45.806218 env[1826]: time="2024-02-09T19:19:45.806163104Z" level=info msg="StopContainer for \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\" with timeout 1 (s)" Feb 9 19:19:45.808593 env[1826]: time="2024-02-09T19:19:45.808537512Z" level=info msg="Stop container \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\" with signal terminated" Feb 9 19:19:45.834557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262-rootfs.mount: Deactivated successfully. 
Feb 9 19:19:45.850749 systemd-networkd[1600]: lxc_health: Link DOWN Feb 9 19:19:45.850769 systemd-networkd[1600]: lxc_health: Lost carrier Feb 9 19:19:45.889063 env[1826]: time="2024-02-09T19:19:45.888994508Z" level=info msg="shim disconnected" id=a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262 Feb 9 19:19:45.889589 env[1826]: time="2024-02-09T19:19:45.889534246Z" level=warning msg="cleaning up after shim disconnected" id=a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262 namespace=k8s.io Feb 9 19:19:45.889775 env[1826]: time="2024-02-09T19:19:45.889742586Z" level=info msg="cleaning up dead shim" Feb 9 19:19:45.972705 env[1826]: time="2024-02-09T19:19:45.972654737Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4894 runtime=io.containerd.runc.v2\n" Feb 9 19:19:45.976197 env[1826]: time="2024-02-09T19:19:45.976136412Z" level=info msg="StopContainer for \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\" returns successfully" Feb 9 19:19:45.977431 env[1826]: time="2024-02-09T19:19:45.977383765Z" level=info msg="StopPodSandbox for \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\"" Feb 9 19:19:45.977754 env[1826]: time="2024-02-09T19:19:45.977716663Z" level=info msg="Container to stop \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:19:45.981663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14-shm.mount: Deactivated successfully. Feb 9 19:19:46.024920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c-rootfs.mount: Deactivated successfully. 
Feb 9 19:19:46.043357 env[1826]: time="2024-02-09T19:19:46.043294011Z" level=info msg="shim disconnected" id=6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c Feb 9 19:19:46.043833 env[1826]: time="2024-02-09T19:19:46.043765242Z" level=warning msg="cleaning up after shim disconnected" id=6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c namespace=k8s.io Feb 9 19:19:46.044005 env[1826]: time="2024-02-09T19:19:46.043972310Z" level=info msg="cleaning up dead shim" Feb 9 19:19:46.061082 env[1826]: time="2024-02-09T19:19:46.061026762Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4932 runtime=io.containerd.runc.v2\n" Feb 9 19:19:46.064273 env[1826]: time="2024-02-09T19:19:46.064215654Z" level=info msg="StopContainer for \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\" returns successfully" Feb 9 19:19:46.065288 env[1826]: time="2024-02-09T19:19:46.065231135Z" level=info msg="StopPodSandbox for \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\"" Feb 9 19:19:46.065450 env[1826]: time="2024-02-09T19:19:46.065335677Z" level=info msg="Container to stop \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:19:46.065450 env[1826]: time="2024-02-09T19:19:46.065368497Z" level=info msg="Container to stop \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:19:46.065450 env[1826]: time="2024-02-09T19:19:46.065395280Z" level=info msg="Container to stop \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:19:46.065450 env[1826]: time="2024-02-09T19:19:46.065422160Z" level=info msg="Container to stop \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:19:46.065785 env[1826]: time="2024-02-09T19:19:46.065449495Z" level=info msg="Container to stop \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:19:46.069017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2-shm.mount: Deactivated successfully. 
Feb 9 19:19:46.093465 env[1826]: time="2024-02-09T19:19:46.093387767Z" level=info msg="shim disconnected" id=dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14 Feb 9 19:19:46.093465 env[1826]: time="2024-02-09T19:19:46.093458794Z" level=warning msg="cleaning up after shim disconnected" id=dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14 namespace=k8s.io Feb 9 19:19:46.093897 env[1826]: time="2024-02-09T19:19:46.093481534Z" level=info msg="cleaning up dead shim" Feb 9 19:19:46.114844 env[1826]: time="2024-02-09T19:19:46.114756319Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4962 runtime=io.containerd.runc.v2\n" Feb 9 19:19:46.116152 env[1826]: time="2024-02-09T19:19:46.116080602Z" level=info msg="TearDown network for sandbox \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" successfully" Feb 9 19:19:46.116405 env[1826]: time="2024-02-09T19:19:46.116360461Z" level=info msg="StopPodSandbox for \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" returns successfully" Feb 9 19:19:46.138870 env[1826]: time="2024-02-09T19:19:46.136149494Z" level=info msg="shim disconnected" id=26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2 Feb 9 19:19:46.138870 env[1826]: time="2024-02-09T19:19:46.136246668Z" level=warning msg="cleaning up after shim disconnected" id=26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2 namespace=k8s.io Feb 9 19:19:46.138870 env[1826]: time="2024-02-09T19:19:46.136270967Z" level=info msg="cleaning up dead shim" Feb 9 19:19:46.155034 env[1826]: time="2024-02-09T19:19:46.154958637Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4988 runtime=io.containerd.runc.v2\n" Feb 9 19:19:46.155596 env[1826]: time="2024-02-09T19:19:46.155549470Z" level=info msg="TearDown network for sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" successfully" Feb 9 19:19:46.155717 env[1826]: time="2024-02-09T19:19:46.155596941Z" level=info msg="StopPodSandbox for \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" returns successfully" Feb 9 19:19:46.171143 kubelet[2943]: I0209 19:19:46.170593 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z264f\" (UniqueName: \"kubernetes.io/projected/a42f38ea-05c6-46cb-840e-9694b7ed74a3-kube-api-access-z264f\") pod \"a42f38ea-05c6-46cb-840e-9694b7ed74a3\" (UID: \"a42f38ea-05c6-46cb-840e-9694b7ed74a3\") " Feb 9 19:19:46.171143 kubelet[2943]: I0209 19:19:46.170680 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a42f38ea-05c6-46cb-840e-9694b7ed74a3-cilium-config-path\") pod \"a42f38ea-05c6-46cb-840e-9694b7ed74a3\" (UID: \"a42f38ea-05c6-46cb-840e-9694b7ed74a3\") " Feb 9 19:19:46.171941 kubelet[2943]: W0209 19:19:46.171171 2943 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a42f38ea-05c6-46cb-840e-9694b7ed74a3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:19:46.176920 kubelet[2943]: I0209 19:19:46.176864 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42f38ea-05c6-46cb-840e-9694b7ed74a3-kube-api-access-z264f" (OuterVolumeSpecName: "kube-api-access-z264f") pod "a42f38ea-05c6-46cb-840e-9694b7ed74a3" 
(UID: "a42f38ea-05c6-46cb-840e-9694b7ed74a3"). InnerVolumeSpecName "kube-api-access-z264f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:19:46.178636 kubelet[2943]: I0209 19:19:46.178570 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42f38ea-05c6-46cb-840e-9694b7ed74a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a42f38ea-05c6-46cb-840e-9694b7ed74a3" (UID: "a42f38ea-05c6-46cb-840e-9694b7ed74a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:19:46.271399 kubelet[2943]: I0209 19:19:46.271344 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hubble-tls\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271582 kubelet[2943]: I0209 19:19:46.271418 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-kernel\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271582 kubelet[2943]: I0209 19:19:46.271464 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cni-path\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271582 kubelet[2943]: I0209 19:19:46.271504 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-xtables-lock\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271582 kubelet[2943]: I0209 19:19:46.271543 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-bpf-maps\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271582 kubelet[2943]: I0209 19:19:46.271580 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-cgroup\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271945 kubelet[2943]: I0209 19:19:46.271623 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-net\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271945 kubelet[2943]: I0209 19:19:46.271667 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jrwg\" (UniqueName: \"kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-kube-api-access-6jrwg\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271945 kubelet[2943]: I0209 19:19:46.271706 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hostproc\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271945 kubelet[2943]: I0209 19:19:46.271742 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-etc-cni-netd\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.271945 kubelet[2943]: I0209 19:19:46.271784 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-config-path\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.272256 kubelet[2943]: W0209 19:19:46.272171 2943 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:19:46.272424 kubelet[2943]: I0209 19:19:46.272399 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-clustermesh-secrets\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.272581 kubelet[2943]: I0209 19:19:46.272559 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-lib-modules\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.272725 kubelet[2943]: I0209 19:19:46.272704 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-run\") pod \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\" (UID: \"b34d45e3-aa3a-4a63-95d3-3e80ae3551d8\") " Feb 9 19:19:46.272915 kubelet[2943]: I0209 19:19:46.272893 2943 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-z264f\" (UniqueName: \"kubernetes.io/projected/a42f38ea-05c6-46cb-840e-9694b7ed74a3-kube-api-access-z264f\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.273088 kubelet[2943]: I0209 19:19:46.273067 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a42f38ea-05c6-46cb-840e-9694b7ed74a3-cilium-config-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.273229 kubelet[2943]: I0209 19:19:46.273203 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.274236 kubelet[2943]: I0209 19:19:46.274198 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.274437 kubelet[2943]: I0209 19:19:46.274411 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cni-path" (OuterVolumeSpecName: "cni-path") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.274585 kubelet[2943]: I0209 19:19:46.274559 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.274739 kubelet[2943]: I0209 19:19:46.274714 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.280300 kubelet[2943]: I0209 19:19:46.274873 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.280300 kubelet[2943]: I0209 19:19:46.274904 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.280300 kubelet[2943]: I0209 19:19:46.275257 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.280300 kubelet[2943]: I0209 19:19:46.275405 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hostproc" (OuterVolumeSpecName: "hostproc") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.280300 kubelet[2943]: I0209 19:19:46.275436 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:46.282622 kubelet[2943]: I0209 19:19:46.280115 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:19:46.283120 kubelet[2943]: I0209 19:19:46.283079 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:19:46.286600 kubelet[2943]: I0209 19:19:46.286550 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:19:46.291324 kubelet[2943]: I0209 19:19:46.291271 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-kube-api-access-6jrwg" (OuterVolumeSpecName: "kube-api-access-6jrwg") pod "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" (UID: "b34d45e3-aa3a-4a63-95d3-3e80ae3551d8"). InnerVolumeSpecName "kube-api-access-6jrwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:19:46.374013 kubelet[2943]: I0209 19:19:46.373971 2943 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-clustermesh-secrets\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374022 2943 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-lib-modules\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374051 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-run\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374076 2943 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-kernel\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374101 2943 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hubble-tls\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374126 2943 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cni-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374149 2943 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-xtables-lock\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374171 2943 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-bpf-maps\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374197 kubelet[2943]: I0209 19:19:46.374193 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-cgroup\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374651 kubelet[2943]: I0209 19:19:46.374217 2943 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-6jrwg\" (UniqueName: \"kubernetes.io/projected/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-kube-api-access-6jrwg\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374651 kubelet[2943]: I0209 19:19:46.374240 2943 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-host-proc-sys-net\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374651 kubelet[2943]: I0209 19:19:46.374263 2943 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-hostproc\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374651 kubelet[2943]: I0209 19:19:46.374285 2943 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-etc-cni-netd\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.374651 kubelet[2943]: I0209 19:19:46.374309 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8-cilium-config-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:46.535445 kubelet[2943]: I0209 19:19:46.534321 2943 scope.go:115] "RemoveContainer" containerID="a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262" Feb 9 19:19:46.554487 env[1826]: time="2024-02-09T19:19:46.554409920Z" level=info msg="RemoveContainer for \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\"" Feb 9 19:19:46.568231 env[1826]: time="2024-02-09T19:19:46.568157451Z" level=info msg="RemoveContainer for \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\" returns successfully" Feb 9 19:19:46.569002 kubelet[2943]: I0209 19:19:46.568957 2943 scope.go:115] "RemoveContainer" containerID="a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262" Feb 9 19:19:46.571067 env[1826]: time="2024-02-09T19:19:46.569959121Z" level=error msg="ContainerStatus for \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\": not found" Feb 9 19:19:46.573672 kubelet[2943]: E0209 19:19:46.573495 2943 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\": not found" containerID="a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262" Feb 9 19:19:46.573672 kubelet[2943]: I0209 19:19:46.573593 2943 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262} err="failed to get container status \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\": rpc error: code = NotFound desc = an error occurred when try to find container \"a23d9e4751a5cbf3d283754dcb52e9447368961d90344a6c3e7d75f40cefb262\": not found" Feb 9 19:19:46.573672 kubelet[2943]: I0209 19:19:46.573621 2943 scope.go:115] "RemoveContainer" containerID="6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c" Feb 9 19:19:46.589002 env[1826]: time="2024-02-09T19:19:46.583615045Z" level=info msg="RemoveContainer for \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\"" Feb 9 19:19:46.597239 env[1826]: time="2024-02-09T19:19:46.597159143Z" level=info msg="RemoveContainer for \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\" returns successfully" Feb 9 19:19:46.598182 kubelet[2943]: I0209 19:19:46.598146 2943 scope.go:115] "RemoveContainer" containerID="53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2" Feb 9 19:19:46.605688 env[1826]: time="2024-02-09T19:19:46.604682906Z" level=info msg="RemoveContainer for \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\"" Feb 9 19:19:46.620473 env[1826]: time="2024-02-09T19:19:46.620350772Z" level=info msg="RemoveContainer for \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\" returns successfully" Feb 9 19:19:46.621212 kubelet[2943]: I0209 19:19:46.621183 2943 scope.go:115] "RemoveContainer" 
containerID="a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3" Feb 9 19:19:46.625589 env[1826]: time="2024-02-09T19:19:46.625533127Z" level=info msg="RemoveContainer for \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\"" Feb 9 19:19:46.630254 env[1826]: time="2024-02-09T19:19:46.630188752Z" level=info msg="RemoveContainer for \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\" returns successfully" Feb 9 19:19:46.630732 kubelet[2943]: I0209 19:19:46.630684 2943 scope.go:115] "RemoveContainer" containerID="5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c" Feb 9 19:19:46.633191 env[1826]: time="2024-02-09T19:19:46.633084921Z" level=info msg="RemoveContainer for \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\"" Feb 9 19:19:46.637903 env[1826]: time="2024-02-09T19:19:46.637839988Z" level=info msg="RemoveContainer for \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\" returns successfully" Feb 9 19:19:46.638351 kubelet[2943]: I0209 19:19:46.638312 2943 scope.go:115] "RemoveContainer" containerID="a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114" Feb 9 19:19:46.640862 env[1826]: time="2024-02-09T19:19:46.640578709Z" level=info msg="RemoveContainer for \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\"" Feb 9 19:19:46.646096 env[1826]: time="2024-02-09T19:19:46.645998515Z" level=info msg="RemoveContainer for \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\" returns successfully" Feb 9 19:19:46.646586 kubelet[2943]: I0209 19:19:46.646404 2943 scope.go:115] "RemoveContainer" containerID="6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c" Feb 9 19:19:46.647088 env[1826]: time="2024-02-09T19:19:46.647003640Z" level=error msg="ContainerStatus for \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\": not found" Feb 9 19:19:46.647499 kubelet[2943]: E0209 19:19:46.647441 2943 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\": not found" containerID="6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c" Feb 9 19:19:46.647683 kubelet[2943]: I0209 19:19:46.647502 2943 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c} err="failed to get container status \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d04166b3ba4b6c326e73d693220aee4eb37ba81c867b182eb97257ee4d8ff5c\": not found" Feb 9 19:19:46.647683 kubelet[2943]: I0209 19:19:46.647527 2943 scope.go:115] "RemoveContainer" containerID="53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2" Feb 9 19:19:46.648326 env[1826]: time="2024-02-09T19:19:46.648244645Z" level=error msg="ContainerStatus for \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\": not found" Feb 9 19:19:46.648854 kubelet[2943]: E0209 19:19:46.648700 2943 
remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\": not found" containerID="53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2" Feb 9 19:19:46.648854 kubelet[2943]: I0209 19:19:46.648755 2943 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2} err="failed to get container status \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\": rpc error: code = NotFound desc = an error occurred when try to find container \"53cb4c5bda6c3905599230bc79a21ad2d05c4eb2b1a9a204eb81647f71189ba2\": not found" Feb 9 19:19:46.648854 kubelet[2943]: I0209 19:19:46.648799 2943 scope.go:115] "RemoveContainer" containerID="a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3" Feb 9 19:19:46.649616 env[1826]: time="2024-02-09T19:19:46.649540741Z" level=error msg="ContainerStatus for \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\": not found" Feb 9 19:19:46.650215 kubelet[2943]: E0209 19:19:46.649966 2943 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\": not found" containerID="a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3" Feb 9 19:19:46.650215 kubelet[2943]: I0209 19:19:46.650060 2943 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3} err="failed to get container status \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5d718d001e6d443b05e5edbcc29049bfe32aee7dc3b4298d0894f4aefc8ede3\": not found" Feb 9 19:19:46.650215 kubelet[2943]: I0209 19:19:46.650086 2943 scope.go:115] "RemoveContainer" containerID="5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c" Feb 9 19:19:46.650957 env[1826]: time="2024-02-09T19:19:46.650802673Z" level=error msg="ContainerStatus for \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\": not found" Feb 9 19:19:46.651864 kubelet[2943]: E0209 19:19:46.651615 2943 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\": not found" containerID="5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c" Feb 9 19:19:46.651864 kubelet[2943]: I0209 19:19:46.651690 2943 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c} err="failed to get container status \"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"5280425ce6ec97ec26e40803af6adff2188e38af98788a21ec3eb091fcc0055c\": not found" Feb 9 19:19:46.651864 kubelet[2943]: I0209 19:19:46.651714 2943 scope.go:115] "RemoveContainer" containerID="a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114" Feb 9 19:19:46.652187 env[1826]: time="2024-02-09T19:19:46.652096633Z" level=error msg="ContainerStatus for \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\": not found" Feb 9 19:19:46.652745 kubelet[2943]: E0209 19:19:46.652708 2943 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\": not found" containerID="a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114" Feb 9 19:19:46.652865 kubelet[2943]: I0209 19:19:46.652789 2943 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114} err="failed to get container status \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2eb628f1a1d94af2f4abda75e0d2fa1369046dee6ede173fe4b374f4bf3d114\": not found" Feb 9 19:19:46.745266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14-rootfs.mount: Deactivated successfully. Feb 9 19:19:46.745550 systemd[1]: var-lib-kubelet-pods-a42f38ea\x2d05c6\x2d46cb\x2d840e\x2d9694b7ed74a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz264f.mount: Deactivated successfully. Feb 9 19:19:46.745787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2-rootfs.mount: Deactivated successfully. Feb 9 19:19:46.746049 systemd[1]: var-lib-kubelet-pods-b34d45e3\x2daa3a\x2d4a63\x2d95d3\x2d3e80ae3551d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jrwg.mount: Deactivated successfully. Feb 9 19:19:46.746268 systemd[1]: var-lib-kubelet-pods-b34d45e3\x2daa3a\x2d4a63\x2d95d3\x2d3e80ae3551d8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:19:46.746488 systemd[1]: var-lib-kubelet-pods-b34d45e3\x2daa3a\x2d4a63\x2d95d3\x2d3e80ae3551d8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:19:47.084391 kubelet[2943]: I0209 19:19:47.084346 2943 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a42f38ea-05c6-46cb-840e-9694b7ed74a3 path="/var/lib/kubelet/pods/a42f38ea-05c6-46cb-840e-9694b7ed74a3/volumes" Feb 9 19:19:47.085424 kubelet[2943]: I0209 19:19:47.085385 2943 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b34d45e3-aa3a-4a63-95d3-3e80ae3551d8 path="/var/lib/kubelet/pods/b34d45e3-aa3a-4a63-95d3-3e80ae3551d8/volumes" Feb 9 19:19:47.130990 kubelet[2943]: E0209 19:19:47.130937 2943 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:19:47.640146 sshd[4833]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:47.645762 systemd-logind[1804]: Session 24 logged out. 
Waiting for processes to exit. Feb 9 19:19:47.646198 systemd[1]: sshd@23-172.31.24.80:22-147.75.109.163:53044.service: Deactivated successfully. Feb 9 19:19:47.647700 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:19:47.649378 systemd-logind[1804]: Removed session 24. Feb 9 19:19:47.667772 systemd[1]: Started sshd@24-172.31.24.80:22-147.75.109.163:52132.service. Feb 9 19:19:47.845690 sshd[5006]: Accepted publickey for core from 147.75.109.163 port 52132 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:47.848439 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:47.856908 systemd-logind[1804]: New session 25 of user core. Feb 9 19:19:47.857950 systemd[1]: Started session-25.scope. Feb 9 19:19:49.616296 sshd[5006]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:49.627114 kubelet[2943]: I0209 19:19:49.620223 2943 setters.go:548] "Node became not ready" node="ip-172-31-24-80" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:19:49.620140141 +0000 UTC m=+133.878760193 LastTransitionTime:2024-02-09 19:19:49.620140141 +0000 UTC m=+133.878760193 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:19:49.622453 systemd[1]: sshd@24-172.31.24.80:22-147.75.109.163:52132.service: Deactivated successfully. Feb 9 19:19:49.625623 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:19:49.625695 systemd-logind[1804]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:19:49.636042 systemd-logind[1804]: Removed session 25. Feb 9 19:19:49.645148 systemd[1]: Started sshd@25-172.31.24.80:22-147.75.109.163:52140.service. 
Feb 9 19:19:49.713135 kubelet[2943]: I0209 19:19:49.713068 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:19:49.713295 kubelet[2943]: E0209 19:19:49.713164 2943 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" containerName="apply-sysctl-overwrites" Feb 9 19:19:49.713295 kubelet[2943]: E0209 19:19:49.713188 2943 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" containerName="mount-bpf-fs" Feb 9 19:19:49.713295 kubelet[2943]: E0209 19:19:49.713207 2943 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" containerName="cilium-agent" Feb 9 19:19:49.713295 kubelet[2943]: E0209 19:19:49.713225 2943 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" containerName="mount-cgroup" Feb 9 19:19:49.713295 kubelet[2943]: E0209 19:19:49.713241 2943 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a42f38ea-05c6-46cb-840e-9694b7ed74a3" containerName="cilium-operator" Feb 9 19:19:49.713295 kubelet[2943]: E0209 19:19:49.713258 2943 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" containerName="clean-cilium-state" Feb 9 19:19:49.713658 kubelet[2943]: I0209 19:19:49.713308 2943 memory_manager.go:346] "RemoveStaleState removing state" podUID="b34d45e3-aa3a-4a63-95d3-3e80ae3551d8" containerName="cilium-agent" Feb 9 19:19:49.713658 kubelet[2943]: I0209 19:19:49.713326 2943 memory_manager.go:346] "RemoveStaleState removing state" podUID="a42f38ea-05c6-46cb-840e-9694b7ed74a3" containerName="cilium-operator" Feb 9 19:19:49.803665 kubelet[2943]: I0209 19:19:49.803607 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-clustermesh-secrets\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.803870 kubelet[2943]: I0209 19:19:49.803720 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-bpf-maps\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.803870 kubelet[2943]: I0209 19:19:49.803791 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9j6c\" (UniqueName: \"kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-kube-api-access-v9j6c\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804019 kubelet[2943]: I0209 19:19:49.803954 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-xtables-lock\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804163 kubelet[2943]: I0209 19:19:49.804073 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-cgroup\") pod \"cilium-x57v7\" (UID: 
\"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804245 kubelet[2943]: I0209 19:19:49.804227 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cni-path\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804425 kubelet[2943]: I0209 19:19:49.804389 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-etc-cni-netd\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804505 kubelet[2943]: I0209 19:19:49.804481 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-kernel\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804631 kubelet[2943]: I0209 19:19:49.804595 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-config-path\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804746 kubelet[2943]: I0209 19:19:49.804721 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-hubble-tls\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.804890 kubelet[2943]: I0209 19:19:49.804866 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-run\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.805004 kubelet[2943]: I0209 19:19:49.804980 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-hostproc\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.805118 kubelet[2943]: I0209 19:19:49.805094 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-lib-modules\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.805291 kubelet[2943]: I0209 19:19:49.805205 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-ipsec-secrets\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.805379 kubelet[2943]: I0209 19:19:49.805332 2943 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-net\") pod \"cilium-x57v7\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " pod="kube-system/cilium-x57v7" Feb 9 19:19:49.874786 sshd[5017]: Accepted publickey for core from 147.75.109.163 port 52140 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:49.878596 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:49.886296 systemd-logind[1804]: New session 26 of user core. Feb 9 19:19:49.887439 systemd[1]: Started session-26.scope. Feb 9 19:19:50.042429 env[1826]: time="2024-02-09T19:19:50.042354386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x57v7,Uid:ce8deda7-b155-47ec-b75d-e28f87a236e8,Namespace:kube-system,Attempt:0,}" Feb 9 19:19:50.067883 env[1826]: time="2024-02-09T19:19:50.066109674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:19:50.067883 env[1826]: time="2024-02-09T19:19:50.066283155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:19:50.067883 env[1826]: time="2024-02-09T19:19:50.066364585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:19:50.067883 env[1826]: time="2024-02-09T19:19:50.066719394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7 pid=5044 runtime=io.containerd.runc.v2 Feb 9 19:19:50.182955 env[1826]: time="2024-02-09T19:19:50.182796442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x57v7,Uid:ce8deda7-b155-47ec-b75d-e28f87a236e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\"" Feb 9 19:19:50.193338 sshd[5017]: pam_unix(sshd:session): session closed for user core Feb 9 19:19:50.203500 systemd[1]: sshd@25-172.31.24.80:22-147.75.109.163:52140.service: Deactivated successfully. Feb 9 19:19:50.205036 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:19:50.213491 systemd-logind[1804]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:19:50.230098 env[1826]: time="2024-02-09T19:19:50.219609552Z" level=info msg="CreateContainer within sandbox \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:19:50.221018 systemd[1]: Started sshd@26-172.31.24.80:22-147.75.109.163:52150.service. Feb 9 19:19:50.222108 systemd-logind[1804]: Removed session 26. 
Feb 9 19:19:50.261858 env[1826]: time="2024-02-09T19:19:50.261765402Z" level=info msg="CreateContainer within sandbox \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593\"" Feb 9 19:19:50.264650 env[1826]: time="2024-02-09T19:19:50.262855124Z" level=info msg="StartContainer for \"eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593\"" Feb 9 19:19:50.373537 env[1826]: time="2024-02-09T19:19:50.373470419Z" level=info msg="StartContainer for \"eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593\" returns successfully" Feb 9 19:19:50.417402 sshd[5079]: Accepted publickey for core from 147.75.109.163 port 52150 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:19:50.419162 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:19:50.428812 systemd-logind[1804]: New session 27 of user core. Feb 9 19:19:50.431954 systemd[1]: Started session-27.scope. Feb 9 19:19:50.456183 env[1826]: time="2024-02-09T19:19:50.456122490Z" level=info msg="shim disconnected" id=eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593 Feb 9 19:19:50.456651 env[1826]: time="2024-02-09T19:19:50.456618393Z" level=warning msg="cleaning up after shim disconnected" id=eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593 namespace=k8s.io Feb 9 19:19:50.456793 env[1826]: time="2024-02-09T19:19:50.456764946Z" level=info msg="cleaning up dead shim" Feb 9 19:19:50.470660 env[1826]: time="2024-02-09T19:19:50.470604996Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5133 runtime=io.containerd.runc.v2\n" Feb 9 19:19:50.589398 env[1826]: time="2024-02-09T19:19:50.587876308Z" level=info msg="StopPodSandbox for \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\"" Feb 9 19:19:50.589398 env[1826]: time="2024-02-09T19:19:50.588241665Z" level=info msg="Container to stop \"eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:19:50.681701 env[1826]: time="2024-02-09T19:19:50.681617460Z" level=info msg="shim disconnected" id=04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7 Feb 9 19:19:50.682180 env[1826]: time="2024-02-09T19:19:50.682127918Z" level=warning msg="cleaning up after shim disconnected" id=04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7 namespace=k8s.io Feb 9 19:19:50.682329 env[1826]: time="2024-02-09T19:19:50.682300582Z" level=info msg="cleaning up dead shim" Feb 9 19:19:50.706509 env[1826]: time="2024-02-09T19:19:50.706360653Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5170 runtime=io.containerd.runc.v2\n" Feb 9 19:19:50.707611 env[1826]: time="2024-02-09T19:19:50.707558410Z" level=info msg="TearDown network for sandbox \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" successfully" Feb 9 19:19:50.707914 env[1826]: time="2024-02-09T19:19:50.707801093Z" level=info msg="StopPodSandbox for \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" returns successfully" Feb 9 19:19:50.818259 kubelet[2943]: I0209 19:19:50.818201 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-run\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.818931 kubelet[2943]: I0209 19:19:50.818269 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-net\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.818931 kubelet[2943]: I0209 19:19:50.818320 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9j6c\" (UniqueName: \"kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-kube-api-access-v9j6c\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.818931 kubelet[2943]: I0209 19:19:50.818360 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-bpf-maps\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.818931 kubelet[2943]: I0209 19:19:50.818399 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-cgroup\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.818931 kubelet[2943]: I0209 19:19:50.818439 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-lib-modules\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.818931 kubelet[2943]: I0209 19:19:50.818477 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-etc-cni-netd\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819302 kubelet[2943]: I0209 19:19:50.818521 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-config-path\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819302 kubelet[2943]: I0209 19:19:50.818562 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-hubble-tls\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819302 kubelet[2943]: I0209 19:19:50.818606 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-ipsec-secrets\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819302 kubelet[2943]: I0209 19:19:50.818656 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-clustermesh-secrets\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819302 kubelet[2943]: I0209 19:19:50.818693 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-xtables-lock\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819302 kubelet[2943]: I0209 19:19:50.818733 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-kernel\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819704 kubelet[2943]: I0209 19:19:50.818771 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cni-path\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819704 kubelet[2943]: I0209 19:19:50.818807 2943 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-hostproc\") pod \"ce8deda7-b155-47ec-b75d-e28f87a236e8\" (UID: \"ce8deda7-b155-47ec-b75d-e28f87a236e8\") " Feb 9 19:19:50.819704 kubelet[2943]: I0209 19:19:50.818927 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.819704 kubelet[2943]: I0209 19:19:50.818982 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.819704 kubelet[2943]: I0209 19:19:50.819021 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.821058 kubelet[2943]: I0209 19:19:50.820995 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.821346 kubelet[2943]: I0209 19:19:50.821295 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.822070 kubelet[2943]: I0209 19:19:50.821530 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.822070 kubelet[2943]: W0209 19:19:50.821795 2943 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ce8deda7-b155-47ec-b75d-e28f87a236e8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:19:50.827171 kubelet[2943]: I0209 19:19:50.827104 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:19:50.827171 kubelet[2943]: I0209 19:19:50.822005 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.828019 kubelet[2943]: I0209 19:19:50.827974 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.828250 kubelet[2943]: I0209 19:19:50.828222 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.828422 kubelet[2943]: I0209 19:19:50.828396 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:19:50.830058 kubelet[2943]: I0209 19:19:50.829993 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:19:50.830225 kubelet[2943]: I0209 19:19:50.830142 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-kube-api-access-v9j6c" (OuterVolumeSpecName: "kube-api-access-v9j6c") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "kube-api-access-v9j6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:19:50.833595 kubelet[2943]: I0209 19:19:50.833545 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:19:50.834515 kubelet[2943]: I0209 19:19:50.834468 2943 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ce8deda7-b155-47ec-b75d-e28f87a236e8" (UID: "ce8deda7-b155-47ec-b75d-e28f87a236e8"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:19:50.919259 kubelet[2943]: I0209 19:19:50.919222 2943 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-bpf-maps\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.919492 kubelet[2943]: I0209 19:19:50.919472 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-cgroup\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.919628 kubelet[2943]: I0209 19:19:50.919609 2943 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-lib-modules\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.919763 kubelet[2943]: I0209 19:19:50.919743 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-config-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.919935 kubelet[2943]: I0209 19:19:50.919916 2943 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-hubble-tls\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920052 kubelet[2943]: I0209 19:19:50.920033 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-ipsec-secrets\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920179 kubelet[2943]: I0209 19:19:50.920160 2943 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-etc-cni-netd\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920334 kubelet[2943]: I0209 19:19:50.920281 2943 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce8deda7-b155-47ec-b75d-e28f87a236e8-clustermesh-secrets\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920464 kubelet[2943]: I0209 19:19:50.920445 2943 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-kernel\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920597 kubelet[2943]: I0209 19:19:50.920578 2943 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-xtables-lock\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920718 kubelet[2943]: I0209 19:19:50.920698 2943 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-hostproc\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920853 kubelet[2943]: I0209 19:19:50.920833 2943 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cni-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.920979 kubelet[2943]: I0209 19:19:50.920959 2943 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-host-proc-sys-net\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.921100 kubelet[2943]: I0209 19:19:50.921081 2943 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-v9j6c\" (UniqueName: \"kubernetes.io/projected/ce8deda7-b155-47ec-b75d-e28f87a236e8-kube-api-access-v9j6c\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.921237 kubelet[2943]: I0209 19:19:50.921218 2943 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce8deda7-b155-47ec-b75d-e28f87a236e8-cilium-run\") on node \"ip-172-31-24-80\" DevicePath \"\"" Feb 9 19:19:50.940155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7-shm.mount: Deactivated successfully. Feb 9 19:19:50.940440 systemd[1]: var-lib-kubelet-pods-ce8deda7\x2db155\x2d47ec\x2db75d\x2de28f87a236e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv9j6c.mount: Deactivated successfully. Feb 9 19:19:50.940676 systemd[1]: var-lib-kubelet-pods-ce8deda7\x2db155\x2d47ec\x2db75d\x2de28f87a236e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:19:50.940920 systemd[1]: var-lib-kubelet-pods-ce8deda7\x2db155\x2d47ec\x2db75d\x2de28f87a236e8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:19:50.941148 systemd[1]: var-lib-kubelet-pods-ce8deda7\x2db155\x2d47ec\x2db75d\x2de28f87a236e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:19:51.589197 kubelet[2943]: I0209 19:19:51.589165 2943 scope.go:115] "RemoveContainer" containerID="eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593" Feb 9 19:19:51.604378 env[1826]: time="2024-02-09T19:19:51.604322003Z" level=info msg="RemoveContainer for \"eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593\"" Feb 9 19:19:51.610473 env[1826]: time="2024-02-09T19:19:51.610414259Z" level=info msg="RemoveContainer for \"eb48ae82803f8f1da41370d22e960859dbbea275edd6cf68bc5b7ccc335e4593\" returns successfully" Feb 9 19:19:51.640228 kubelet[2943]: I0209 19:19:51.640181 2943 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:19:51.640536 kubelet[2943]: E0209 19:19:51.640509 2943 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce8deda7-b155-47ec-b75d-e28f87a236e8" containerName="mount-cgroup" Feb 9 19:19:51.640732 kubelet[2943]: I0209 19:19:51.640709 2943 memory_manager.go:346] "RemoveStaleState removing state" podUID="ce8deda7-b155-47ec-b75d-e28f87a236e8" containerName="mount-cgroup" Feb 9 19:19:51.725880 kubelet[2943]: I0209 19:19:51.725840 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-cilium-run\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.726156 kubelet[2943]: I0209 19:19:51.726134 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-cilium-cgroup\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.726320 kubelet[2943]: I0209 19:19:51.726299 2943 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-host-proc-sys-kernel\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.726482 kubelet[2943]: I0209 19:19:51.726462 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b78cd84-d21d-4b64-b559-adc6607cbc2c-cilium-config-path\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.726634 kubelet[2943]: I0209 19:19:51.726613 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-etc-cni-netd\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.726785 kubelet[2943]: I0209 19:19:51.726764 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-xtables-lock\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.726982 kubelet[2943]: I0209 19:19:51.726962 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b78cd84-d21d-4b64-b559-adc6607cbc2c-clustermesh-secrets\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.727123 kubelet[2943]: I0209 19:19:51.727103 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgftq\" (UniqueName: \"kubernetes.io/projected/6b78cd84-d21d-4b64-b559-adc6607cbc2c-kube-api-access-dgftq\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.727270 kubelet[2943]: I0209 19:19:51.727249 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-cni-path\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.727576 kubelet[2943]: I0209 19:19:51.727554 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6b78cd84-d21d-4b64-b559-adc6607cbc2c-cilium-ipsec-secrets\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.727750 kubelet[2943]: I0209 19:19:51.727730 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-host-proc-sys-net\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.727927 kubelet[2943]: I0209 19:19:51.727906 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-hostproc\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.728066 kubelet[2943]: I0209 19:19:51.728046 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-lib-modules\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.728218 kubelet[2943]: I0209 19:19:51.728198 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b78cd84-d21d-4b64-b559-adc6607cbc2c-hubble-tls\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.728466 kubelet[2943]: I0209 19:19:51.728445 2943 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b78cd84-d21d-4b64-b559-adc6607cbc2c-bpf-maps\") pod \"cilium-9vprd\" (UID: \"6b78cd84-d21d-4b64-b559-adc6607cbc2c\") " pod="kube-system/cilium-9vprd" Feb 9 19:19:51.962117 env[1826]: time="2024-02-09T19:19:51.962025939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vprd,Uid:6b78cd84-d21d-4b64-b559-adc6607cbc2c,Namespace:kube-system,Attempt:0,}" Feb 9 19:19:51.992799 env[1826]: time="2024-02-09T19:19:51.992637528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:19:51.992799 env[1826]: time="2024-02-09T19:19:51.992740726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:19:51.993105 env[1826]: time="2024-02-09T19:19:51.992807289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:19:51.993545 env[1826]: time="2024-02-09T19:19:51.993362494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661 pid=5198 runtime=io.containerd.runc.v2 Feb 9 19:19:52.081635 env[1826]: time="2024-02-09T19:19:52.081579128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vprd,Uid:6b78cd84-d21d-4b64-b559-adc6607cbc2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\"" Feb 9 19:19:52.089836 env[1826]: time="2024-02-09T19:19:52.089731394Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:19:52.110409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818479534.mount: Deactivated successfully. 
Feb 9 19:19:52.121701 env[1826]: time="2024-02-09T19:19:52.121608809Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ad23b8d41b12a8e44c7385c14a187d4a49ebf6235f0870da6c0b86dbb1e6c43\"" Feb 9 19:19:52.123452 env[1826]: time="2024-02-09T19:19:52.123377046Z" level=info msg="StartContainer for \"6ad23b8d41b12a8e44c7385c14a187d4a49ebf6235f0870da6c0b86dbb1e6c43\"" Feb 9 19:19:52.132916 kubelet[2943]: E0209 19:19:52.132845 2943 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:19:52.237365 env[1826]: time="2024-02-09T19:19:52.232874920Z" level=info msg="StartContainer for \"6ad23b8d41b12a8e44c7385c14a187d4a49ebf6235f0870da6c0b86dbb1e6c43\" returns successfully" Feb 9 19:19:52.294141 env[1826]: time="2024-02-09T19:19:52.294078781Z" level=info msg="shim disconnected" id=6ad23b8d41b12a8e44c7385c14a187d4a49ebf6235f0870da6c0b86dbb1e6c43 Feb 9 19:19:52.294545 env[1826]: time="2024-02-09T19:19:52.294511709Z" level=warning msg="cleaning up after shim disconnected" id=6ad23b8d41b12a8e44c7385c14a187d4a49ebf6235f0870da6c0b86dbb1e6c43 namespace=k8s.io Feb 9 19:19:52.294683 env[1826]: time="2024-02-09T19:19:52.294656402Z" level=info msg="cleaning up dead shim" Feb 9 19:19:52.308075 env[1826]: time="2024-02-09T19:19:52.308019936Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5279 runtime=io.containerd.runc.v2\n" Feb 9 19:19:52.616953 env[1826]: time="2024-02-09T19:19:52.603486399Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:19:52.638069 env[1826]: time="2024-02-09T19:19:52.638004410Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b22818d01c58377540691a11fadb9ef06f7d968c449e7c371ce0634aaafc5fa1\"" Feb 9 19:19:52.641697 env[1826]: time="2024-02-09T19:19:52.641408958Z" level=info msg="StartContainer for \"b22818d01c58377540691a11fadb9ef06f7d968c449e7c371ce0634aaafc5fa1\"" Feb 9 19:19:52.750477 env[1826]: time="2024-02-09T19:19:52.750414410Z" level=info msg="StartContainer for \"b22818d01c58377540691a11fadb9ef06f7d968c449e7c371ce0634aaafc5fa1\" returns successfully" Feb 9 19:19:52.820973 env[1826]: time="2024-02-09T19:19:52.820912567Z" level=info msg="shim disconnected" id=b22818d01c58377540691a11fadb9ef06f7d968c449e7c371ce0634aaafc5fa1 Feb 9 19:19:52.821410 env[1826]: time="2024-02-09T19:19:52.821377341Z" level=warning msg="cleaning up after shim disconnected" id=b22818d01c58377540691a11fadb9ef06f7d968c449e7c371ce0634aaafc5fa1 namespace=k8s.io Feb 9 19:19:52.821544 env[1826]: time="2024-02-09T19:19:52.821516815Z" level=info msg="cleaning up dead shim" Feb 9 19:19:52.836843 env[1826]: time="2024-02-09T19:19:52.836762800Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5345 runtime=io.containerd.runc.v2\n" Feb 9 19:19:53.085889 kubelet[2943]: I0209 19:19:53.085185 2943 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" 
podUID=ce8deda7-b155-47ec-b75d-e28f87a236e8 path="/var/lib/kubelet/pods/ce8deda7-b155-47ec-b75d-e28f87a236e8/volumes" Feb 9 19:19:53.610012 env[1826]: time="2024-02-09T19:19:53.609084048Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:19:53.645245 env[1826]: time="2024-02-09T19:19:53.645067488Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"282827cae0f84a9ac45752553b5cc8869a8fb07a457894991bb0987eabd1bce7\"" Feb 9 19:19:53.652603 env[1826]: time="2024-02-09T19:19:53.652540422Z" level=info msg="StartContainer for \"282827cae0f84a9ac45752553b5cc8869a8fb07a457894991bb0987eabd1bce7\"" Feb 9 19:19:53.775046 env[1826]: time="2024-02-09T19:19:53.774961780Z" level=info msg="StartContainer for \"282827cae0f84a9ac45752553b5cc8869a8fb07a457894991bb0987eabd1bce7\" returns successfully" Feb 9 19:19:53.820417 env[1826]: time="2024-02-09T19:19:53.820355587Z" level=info msg="shim disconnected" id=282827cae0f84a9ac45752553b5cc8869a8fb07a457894991bb0987eabd1bce7 Feb 9 19:19:53.820762 env[1826]: time="2024-02-09T19:19:53.820728108Z" level=warning msg="cleaning up after shim disconnected" id=282827cae0f84a9ac45752553b5cc8869a8fb07a457894991bb0987eabd1bce7 namespace=k8s.io Feb 9 19:19:53.820981 env[1826]: time="2024-02-09T19:19:53.820950583Z" level=info msg="cleaning up dead shim" Feb 9 19:19:53.835961 env[1826]: time="2024-02-09T19:19:53.835894256Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5404 runtime=io.containerd.runc.v2\n" Feb 9 19:19:53.977227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-282827cae0f84a9ac45752553b5cc8869a8fb07a457894991bb0987eabd1bce7-rootfs.mount: Deactivated successfully. Feb 9 19:19:54.613432 env[1826]: time="2024-02-09T19:19:54.613374832Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:19:54.646727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586236336.mount: Deactivated successfully. Feb 9 19:19:54.656465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248876472.mount: Deactivated successfully. 
Feb 9 19:19:54.657699 env[1826]: time="2024-02-09T19:19:54.657642059Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f011d3d7a4162dd0636e6a06066257842a90c5417abfed75e252bcec676e0b9\"" Feb 9 19:19:54.660487 env[1826]: time="2024-02-09T19:19:54.659121017Z" level=info msg="StartContainer for \"7f011d3d7a4162dd0636e6a06066257842a90c5417abfed75e252bcec676e0b9\"" Feb 9 19:19:54.769403 env[1826]: time="2024-02-09T19:19:54.769323002Z" level=info msg="StartContainer for \"7f011d3d7a4162dd0636e6a06066257842a90c5417abfed75e252bcec676e0b9\" returns successfully" Feb 9 19:19:54.876812 env[1826]: time="2024-02-09T19:19:54.876657213Z" level=info msg="shim disconnected" id=7f011d3d7a4162dd0636e6a06066257842a90c5417abfed75e252bcec676e0b9 Feb 9 19:19:54.877221 env[1826]: time="2024-02-09T19:19:54.877183522Z" level=warning msg="cleaning up after shim disconnected" id=7f011d3d7a4162dd0636e6a06066257842a90c5417abfed75e252bcec676e0b9 namespace=k8s.io Feb 9 19:19:54.877387 env[1826]: time="2024-02-09T19:19:54.877359606Z" level=info msg="cleaning up dead shim" Feb 9 19:19:54.896785 env[1826]: time="2024-02-09T19:19:54.896727420Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5461 runtime=io.containerd.runc.v2\n" Feb 9 19:19:54.977307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f011d3d7a4162dd0636e6a06066257842a90c5417abfed75e252bcec676e0b9-rootfs.mount: Deactivated successfully. Feb 9 19:19:55.620295 env[1826]: time="2024-02-09T19:19:55.620239276Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:19:55.662427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301918689.mount: Deactivated successfully. Feb 9 19:19:55.676482 env[1826]: time="2024-02-09T19:19:55.676379030Z" level=info msg="CreateContainer within sandbox \"79449482111e7898abaeb9b3cba8448fdf04669b1ac1043ab2ecf4db25eb6661\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e\"" Feb 9 19:19:55.678076 env[1826]: time="2024-02-09T19:19:55.678027173Z" level=info msg="StartContainer for \"44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e\"" Feb 9 19:19:55.788699 env[1826]: time="2024-02-09T19:19:55.788635296Z" level=info msg="StartContainer for \"44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e\" returns successfully" Feb 9 19:19:56.549851 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 19:19:56.985188 systemd[1]: run-containerd-runc-k8s.io-44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e-runc.5XUrTa.mount: Deactivated successfully. Feb 9 19:19:59.220411 systemd[1]: run-containerd-runc-k8s.io-44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e-runc.tYY4bv.mount: Deactivated successfully. Feb 9 19:20:00.589435 (udev-worker)[6033]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:20:00.597454 (udev-worker)[6035]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:20:00.600054 systemd-networkd[1600]: lxc_health: Link UP Feb 9 19:20:00.625139 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:20:00.624102 systemd-networkd[1600]: lxc_health: Gained carrier Feb 9 19:20:01.586873 systemd[1]: run-containerd-runc-k8s.io-44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e-runc.ztM7Lu.mount: Deactivated successfully. Feb 9 19:20:01.867614 systemd-networkd[1600]: lxc_health: Gained IPv6LL Feb 9 19:20:01.999900 kubelet[2943]: I0209 19:20:01.999855 2943 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9vprd" podStartSLOduration=10.999781813 pod.CreationTimestamp="2024-02-09 19:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:19:56.651316537 +0000 UTC m=+140.909936613" watchObservedRunningTime="2024-02-09 19:20:01.999781813 +0000 UTC m=+146.258401877" Feb 9 19:20:03.899757 systemd[1]: run-containerd-runc-k8s.io-44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e-runc.PYiiPy.mount: Deactivated successfully. Feb 9 19:20:08.562346 systemd[1]: run-containerd-runc-k8s.io-44ccd4578a0836004fa7cf66b4edf39b5e4b8fe71e08fc6db101030d15495e4e-runc.pDRQcu.mount: Deactivated successfully. Feb 9 19:20:08.722202 sshd[5079]: pam_unix(sshd:session): session closed for user core Feb 9 19:20:08.728396 systemd[1]: sshd@26-172.31.24.80:22-147.75.109.163:52150.service: Deactivated successfully. Feb 9 19:20:08.731174 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 19:20:08.732715 systemd-logind[1804]: Session 27 logged out. Waiting for processes to exit. Feb 9 19:20:08.736292 systemd-logind[1804]: Removed session 27. Feb 9 19:20:23.241247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22f04d62905e1e369dfc4a1d0a695dff3d24c76364d3c11e1b0683a339f4ac25-rootfs.mount: Deactivated successfully. Feb 9 19:20:23.286605 env[1826]: time="2024-02-09T19:20:23.286541625Z" level=info msg="shim disconnected" id=22f04d62905e1e369dfc4a1d0a695dff3d24c76364d3c11e1b0683a339f4ac25 Feb 9 19:20:23.287428 env[1826]: time="2024-02-09T19:20:23.287389573Z" level=warning msg="cleaning up after shim disconnected" id=22f04d62905e1e369dfc4a1d0a695dff3d24c76364d3c11e1b0683a339f4ac25 namespace=k8s.io Feb 9 19:20:23.287554 env[1826]: time="2024-02-09T19:20:23.287525950Z" level=info msg="cleaning up dead shim" Feb 9 19:20:23.301416 env[1826]: time="2024-02-09T19:20:23.301356506Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:20:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6171 runtime=io.containerd.runc.v2\n" Feb 9 19:20:23.703235 kubelet[2943]: I0209 19:20:23.703202 2943 scope.go:115] "RemoveContainer" containerID="22f04d62905e1e369dfc4a1d0a695dff3d24c76364d3c11e1b0683a339f4ac25" Feb 9 19:20:23.707949 env[1826]: time="2024-02-09T19:20:23.707897075Z" level=info msg="CreateContainer within sandbox \"c1c06408ebf498a1a270ebfa247dcd07fbb3b1e765cafa8e66468c82ab9122b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 19:20:23.733757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602116418.mount: Deactivated successfully. 
Feb 9 19:20:23.754656 env[1826]: time="2024-02-09T19:20:23.754575011Z" level=info msg="CreateContainer within sandbox \"c1c06408ebf498a1a270ebfa247dcd07fbb3b1e765cafa8e66468c82ab9122b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"56c59198f51a47329d9f40a11c843ea678360ae47903d3e89ef2157685c4e80c\"" Feb 9 19:20:23.755640 env[1826]: time="2024-02-09T19:20:23.755596211Z" level=info msg="StartContainer for \"56c59198f51a47329d9f40a11c843ea678360ae47903d3e89ef2157685c4e80c\"" Feb 9 19:20:23.868012 env[1826]: time="2024-02-09T19:20:23.867935259Z" level=info msg="StartContainer for \"56c59198f51a47329d9f40a11c843ea678360ae47903d3e89ef2157685c4e80c\" returns successfully" Feb 9 19:20:28.062365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feb8cf7704b1f9c4bf92cc50cd777759330b40b4ff00e93b327daef4cc0c5bee-rootfs.mount: Deactivated successfully. Feb 9 19:20:28.078226 env[1826]: time="2024-02-09T19:20:28.078166056Z" level=info msg="shim disconnected" id=feb8cf7704b1f9c4bf92cc50cd777759330b40b4ff00e93b327daef4cc0c5bee Feb 9 19:20:28.079276 env[1826]: time="2024-02-09T19:20:28.079236203Z" level=warning msg="cleaning up after shim disconnected" id=feb8cf7704b1f9c4bf92cc50cd777759330b40b4ff00e93b327daef4cc0c5bee namespace=k8s.io Feb 9 19:20:28.079448 env[1826]: time="2024-02-09T19:20:28.079419343Z" level=info msg="cleaning up dead shim" Feb 9 19:20:28.095211 env[1826]: time="2024-02-09T19:20:28.095156650Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:20:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6232 runtime=io.containerd.runc.v2\n" Feb 9 19:20:28.719794 kubelet[2943]: I0209 19:20:28.719763 2943 scope.go:115] "RemoveContainer" containerID="feb8cf7704b1f9c4bf92cc50cd777759330b40b4ff00e93b327daef4cc0c5bee" Feb 9 19:20:28.724131 env[1826]: time="2024-02-09T19:20:28.724077271Z" level=info msg="CreateContainer within sandbox \"018822abfcd92f7d3354be909fcf607caa77a134081bda87c2f9e088425b8cf8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 19:20:28.744176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482114505.mount: Deactivated successfully. 
Feb 9 19:20:28.757996 env[1826]: time="2024-02-09T19:20:28.757935474Z" level=info msg="CreateContainer within sandbox \"018822abfcd92f7d3354be909fcf607caa77a134081bda87c2f9e088425b8cf8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2e1c0563437415ae45216d500095604bf68b43dfc87ef84083eaa07e17dfcdc1\"" Feb 9 19:20:28.758994 env[1826]: time="2024-02-09T19:20:28.758945466Z" level=info msg="StartContainer for \"2e1c0563437415ae45216d500095604bf68b43dfc87ef84083eaa07e17dfcdc1\"" Feb 9 19:20:28.877221 env[1826]: time="2024-02-09T19:20:28.877160455Z" level=info msg="StartContainer for \"2e1c0563437415ae45216d500095604bf68b43dfc87ef84083eaa07e17dfcdc1\" returns successfully" Feb 9 19:20:30.112990 kubelet[2943]: E0209 19:20:30.112922 2943 controller.go:189] failed to update lease, error: Put "https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:20:36.375351 env[1826]: time="2024-02-09T19:20:36.375293830Z" level=info msg="StopPodSandbox for \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\"" Feb 9 19:20:36.376037 env[1826]: time="2024-02-09T19:20:36.375484360Z" level=info msg="TearDown network for sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" successfully" Feb 9 19:20:36.376037 env[1826]: time="2024-02-09T19:20:36.375566658Z" level=info msg="StopPodSandbox for \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" returns successfully" Feb 9 19:20:36.376877 env[1826]: time="2024-02-09T19:20:36.376768027Z" level=info msg="RemovePodSandbox for \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\"" Feb 9 19:20:36.377046 env[1826]: time="2024-02-09T19:20:36.376847145Z" level=info msg="Forcibly stopping sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\"" Feb 9 19:20:36.377046 env[1826]: time="2024-02-09T19:20:36.376994258Z" level=info msg="TearDown network for sandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" successfully" Feb 9 19:20:36.381772 env[1826]: time="2024-02-09T19:20:36.381707761Z" level=info msg="RemovePodSandbox \"26dc19ca15794162928acf42601a8d37fe92b9bff225d45ce60fd1fd8c110ef2\" returns successfully" Feb 9 19:20:36.382948 env[1826]: time="2024-02-09T19:20:36.382902493Z" level=info msg="StopPodSandbox for \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\"" Feb 9 19:20:36.383286 env[1826]: time="2024-02-09T19:20:36.383217670Z" level=info msg="TearDown network for sandbox \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" successfully" Feb 9 19:20:36.383424 env[1826]: time="2024-02-09T19:20:36.383389840Z" level=info msg="StopPodSandbox for \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" returns successfully" Feb 9 19:20:36.384026 env[1826]: time="2024-02-09T19:20:36.383981254Z" level=info msg="RemovePodSandbox for \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\"" Feb 9 19:20:36.384269 env[1826]: time="2024-02-09T19:20:36.384211757Z" level=info msg="Forcibly stopping sandbox \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\"" Feb 9 19:20:36.384492 env[1826]: time="2024-02-09T19:20:36.384458160Z" level=info msg="TearDown network for sandbox \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" successfully" Feb 9 19:20:36.389954 env[1826]: time="2024-02-09T19:20:36.389899185Z" 
level=info msg="RemovePodSandbox \"dbec25a337599817685b95a3afa8bd1351ee358a1289f58282a478a07b0d3b14\" returns successfully" Feb 9 19:20:36.390706 env[1826]: time="2024-02-09T19:20:36.390666308Z" level=info msg="StopPodSandbox for \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\"" Feb 9 19:20:36.391046 env[1826]: time="2024-02-09T19:20:36.390981942Z" level=info msg="TearDown network for sandbox \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" successfully" Feb 9 19:20:36.391187 env[1826]: time="2024-02-09T19:20:36.391153775Z" level=info msg="StopPodSandbox for \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" returns successfully" Feb 9 19:20:36.391933 env[1826]: time="2024-02-09T19:20:36.391884141Z" level=info msg="RemovePodSandbox for \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\"" Feb 9 19:20:36.392076 env[1826]: time="2024-02-09T19:20:36.391942583Z" level=info msg="Forcibly stopping sandbox \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\"" Feb 9 19:20:36.392169 env[1826]: time="2024-02-09T19:20:36.392067075Z" level=info msg="TearDown network for sandbox \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" successfully" Feb 9 19:20:36.396830 env[1826]: time="2024-02-09T19:20:36.396748921Z" level=info msg="RemovePodSandbox \"04000916b5f71cf394e37db50d76e43a868cd0b69b21d338e3cbf95a789096c7\" returns successfully" Feb 9 19:20:40.113354 kubelet[2943]: E0209 19:20:40.113303 2943 controller.go:189] failed to update lease, error: Put "https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)